Installing
Installing and configuring OpenShift Container Platform clusters
Chapter 1. OpenShift Container Platform installation overview
1.1. OpenShift Container Platform installation overview
The OpenShift Container Platform installation program offers you flexibility. You can use the installation program to deploy a cluster on infrastructure that the installation program provisions and the cluster maintains or deploy a cluster on infrastructure that you prepare and maintain.
These two basic types of OpenShift Container Platform clusters are frequently called installer-provisioned infrastructure clusters and user-provisioned infrastructure clusters.
Both types of clusters have the following characteristics:
- Highly available infrastructure with no single points of failure is available by default
- Administrators maintain control over what updates are applied and when
You use the same installation program to deploy both types of clusters. The main assets generated by the installation program are the Ignition config files for the bootstrap, master, and worker machines. With these three configurations and correctly configured infrastructure, you can start an OpenShift Container Platform cluster.
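For example, for a user-provisioned installation you can generate the three Ignition config files directly; the directory name is a placeholder:

$ openshift-install create ignition-configs --dir <installation_directory>

The command writes bootstrap.ign, master.ign, and worker.ign to the specified directory.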
The OpenShift Container Platform installation program uses a set of targets and dependencies to manage cluster installation. The installation program has a set of targets that it must achieve, and each target has a set of dependencies. Because each target is only concerned with its own dependencies, the installation program can act to achieve multiple targets in parallel. The ultimate target is a running cluster. By meeting dependencies instead of running commands, the installation program is able to recognize and use existing components instead of running the commands to create them again.
The following diagram shows a subset of the installation targets and dependencies:
Figure 1.1. OpenShift Container Platform installation targets and dependencies
After installation, each cluster machine uses Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. RHCOS is the immutable container host version of Red Hat Enterprise Linux (RHEL) and features a RHEL kernel with SELinux enabled by default. It includes the kubelet, which is the Kubernetes node agent, and the CRI-O container runtime, which is optimized for Kubernetes.
Every control plane machine in an OpenShift Container Platform 4.10 cluster must use RHCOS, which includes a critical first-boot provisioning tool called Ignition. This tool enables the cluster to configure the machines. Operating system updates are delivered as an Atomic OSTree repository that is embedded in a container image that is rolled out across the cluster by an Operator. Actual operating system changes are made in-place on each machine as an atomic operation by using rpm-ostree. Together, these technologies enable OpenShift Container Platform to manage the operating system like it manages any other application on the cluster, via in-place upgrades that keep the entire platform up-to-date. These in-place updates can reduce the burden on operations teams.
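For example, you can inspect the rpm-ostree deployments on a running node through a debug pod; the node name is a placeholder:

$ oc debug node/<node_name> -- chroot /host rpm-ostree status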
If you use RHCOS as the operating system for all cluster machines, the cluster manages all aspects of its components and machines, including the operating system. Because of this, only the installation program and the Machine Config Operator can change machines. The installation program uses Ignition config files to set the exact state of each machine, and the Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation.
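You can observe the Machine Config Operator applying such changes by watching the machine config pools; a pool reports UPDATING as True while its machines are being reconfigured:

$ oc get machineconfigpools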
1.1.1. Installation process
When you install an OpenShift Container Platform cluster, you download the installation program from the appropriate Infrastructure Provider page on the OpenShift Cluster Manager site. This site manages:
- REST API for accounts
- Registry tokens, which are the pull secrets that you use to obtain the required components
- Cluster registration, which associates the cluster identity to your Red Hat account to facilitate the gathering of usage metrics
In OpenShift Container Platform 4.10, the installation program is a Go binary file that performs a series of file transformations on a set of assets. The way you interact with the installation program differs depending on your installation type.
- For clusters with installer-provisioned infrastructure, you delegate the infrastructure bootstrapping and provisioning to the installation program instead of doing it yourself. The installation program creates all of the networking, machines, and operating systems that are required to support the cluster.
- If you provision and manage the infrastructure for your cluster, you must provide all of the cluster infrastructure and resources, including the bootstrap machine, networking, load balancing, storage, and individual cluster machines.
You use three sets of files during installation: an installation configuration file that is named install-config.yaml, Kubernetes manifests, and Ignition config files for your machine types.
You can modify the Kubernetes manifests and the Ignition config files that control the underlying RHCOS operating system during installation. However, no validation is available to confirm the suitability of any modifications that you make to these objects. If you modify these objects, you might render your cluster non-functional. Because of this risk, modifying the Kubernetes manifests and Ignition config files is not supported unless you are following documented procedures or are instructed to do so by Red Hat support.
The installation configuration file is transformed into Kubernetes manifests, and then the manifests are wrapped into Ignition config files. The installation program uses these Ignition config files to create the cluster.
The installation configuration files are all pruned when you run the installation program, so be sure to back up all configuration files that you want to use again.
You cannot modify the parameters that you set during installation, but you can modify many cluster attributes after installation.
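For example, you can generate and inspect each asset stage explicitly before creating the cluster; each command consumes the output of the previous one, and the directory name is a placeholder:

$ openshift-install create install-config --dir <installation_directory>
$ openshift-install create manifests --dir <installation_directory>
$ openshift-install create ignition-configs --dir <installation_directory>

Because install-config.yaml is consumed and pruned at the next stage, copy it elsewhere first if you want to reuse it.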
The installation process with installer-provisioned infrastructure
The default installation type uses installer-provisioned infrastructure. By default, the installation program acts as an installation wizard, prompting you for values that it cannot determine on its own and providing reasonable default values for the remaining parameters. You can also customize the installation process to support advanced infrastructure scenarios. The installation program provisions the underlying infrastructure for the cluster.
You can install either a standard cluster or a customized cluster. With a standard cluster, you provide minimum details that are required to install the cluster. With a customized cluster, you can specify more details about the platform, such as the number of machines that the control plane uses, the type of virtual machine that the cluster deploys, or the CIDR range for the Kubernetes service network.
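In the default case, the entire installation reduces to a single command that prompts for the values it cannot infer:

$ openshift-install create cluster --dir <installation_directory> --log-level=info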
If possible, use this feature to avoid having to provision and maintain the cluster infrastructure. In all other environments, you use the installation program to generate the assets that you require to provision your cluster infrastructure.
With installer-provisioned infrastructure clusters, OpenShift Container Platform manages all aspects of the cluster, including the operating system itself. Each machine boots with a configuration that references resources hosted in the cluster that it joins. This configuration allows the cluster to manage itself as updates are applied.
The installation process with user-provisioned infrastructure
You can also install OpenShift Container Platform on infrastructure that you provide. You use the installation program to generate the assets that you require to provision the cluster infrastructure, create the cluster infrastructure, and then deploy the cluster to the infrastructure that you provided.
If you do not use infrastructure that the installation program provisioned, you must manage and maintain the cluster resources yourself, including:
- The underlying infrastructure for the control plane and compute machines that make up the cluster
- Load balancers
- Cluster networking, including the DNS records and required subnets (see the example records after this list)
- Storage for the cluster infrastructure and applications
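For illustration, the DNS records that a user-provisioned cluster requires typically look like the following sketch, assuming a cluster name of ocp4 and a base domain of example.com; the IP addresses are placeholders for your API and ingress load balancers:

api.ocp4.example.com.      IN  A  <api_load_balancer_ip>
api-int.ocp4.example.com.  IN  A  <api_load_balancer_ip>
*.apps.ocp4.example.com.   IN  A  <ingress_load_balancer_ip>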
If your cluster uses user-provisioned infrastructure, you have the option of adding RHEL compute machines to your cluster.
Installation process details
Because each machine in the cluster requires information about the cluster when it is provisioned, OpenShift Container Platform uses a temporary bootstrap machine during initial configuration to provide the required information to the permanent control plane. It boots by using an Ignition config file that describes how to create the cluster. The bootstrap machine creates the control plane machines that make up the control plane. The control plane machines then create the compute machines, which are also known as worker machines. The following figure illustrates this process:
Figure 1.2. Creating the bootstrap, control plane, and compute machines
After the cluster machines initialize, the bootstrap machine is destroyed. All clusters use the bootstrap process to initialize the cluster, but if you provision the infrastructure for your cluster, you must complete many of the steps manually.
Important
- The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
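For reference, you can list pending CSRs and approve the node-bootstrapper requests as follows; the CSR name is a placeholder:

$ oc get csr
$ oc adm certificate approve <csr_name>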
Bootstrapping a cluster involves the following steps:
1. The bootstrap machine boots and starts hosting the remote resources required for the control plane machines to boot. (Requires manual intervention if you provision the infrastructure)
2. The bootstrap machine starts a single-node etcd cluster and a temporary Kubernetes control plane.
3. The control plane machines fetch the remote resources from the bootstrap machine and finish booting. (Requires manual intervention if you provision the infrastructure)
4. The temporary control plane schedules the production control plane to the production control plane machines.
5. The Cluster Version Operator (CVO) comes online and installs the etcd Operator. The etcd Operator scales up etcd on all control plane nodes.
6. The temporary control plane shuts down and passes control to the production control plane.
7. The bootstrap machine injects OpenShift Container Platform components into the production control plane.
8. The installation program shuts down the bootstrap machine. (Requires manual intervention if you provision the infrastructure)
9. The control plane sets up the compute nodes.
10. The control plane installs additional services in the form of a set of Operators.
The result of this bootstrapping process is a running OpenShift Container Platform cluster. The cluster then downloads and configures remaining components needed for the day-to-day operation, including the creation of compute machines in supported environments.
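From the machine that runs the installation program, you can watch for the end of the bootstrap process and then safely remove the bootstrap machine:

$ openshift-install wait-for bootstrap-complete --dir <installation_directory> --log-level=info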
1.1.2. Verifying node state after installation
The OpenShift Container Platform installation completes when the following installation health checks are successful:
- The provisioner can access the OpenShift Container Platform web console.
- All control plane nodes are ready.
- All cluster Operators are available.
After the installation completes, the specific cluster Operators responsible for the worker nodes continuously attempt to provision all worker nodes. It can take some time before all worker nodes report as READY. For installations on bare metal, wait a minimum of 60 minutes before troubleshooting a worker node. For installations on all other platforms, wait a minimum of 40 minutes before troubleshooting a worker node. A DEGRADED state for the cluster Operators responsible for the worker nodes depends on the Operators' own resources and not on the state of the nodes.
After your installation completes, you can continue to monitor the condition of the nodes in your cluster by using the following steps.
Prerequisites
- The installation program resolves successfully in the terminal.
Procedure
- Show the status of all worker nodes:

  $ oc get nodes

- Show the phase of all worker machine nodes:

  $ oc get machines -A
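Example output (illustrative only; names, ages, and versions vary by cluster):

$ oc get nodes
NAME                           STATUS   ROLES    AGE   VERSION
example-compute1.example.com   Ready    worker   13m   v1.23.0+60f5a1c
example-control1.example.com   Ready    master   52m   v1.23.0+60f5a1c

$ oc get machines -A
NAMESPACE               NAME                          PHASE     TYPE        REGION      ZONE         AGE
openshift-machine-api   example-zbbt6-master-0        Running   m6i.xlarge  us-east-1   us-east-1a   95m
openshift-machine-api   example-zbbt6-worker-0-ggnb   Running   m6i.xlarge  us-east-1   us-east-1a   84m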
Installation scope
The scope of the OpenShift Container Platform installation program is intentionally narrow. It is designed for simplicity and ensured success. You can complete many more configuration tasks after installation completes.
1.1.3. OpenShift Local overview
OpenShift Local supports rapid application development to get started building OpenShift Container Platform clusters. OpenShift Local is designed to run on a local computer to simplify setup and testing, and to emulate the cloud development environment locally with all of the tools needed to develop container-based applications.
Regardless of the programming language you use, OpenShift Local hosts your application and brings a minimal, preconfigured Red Hat OpenShift Container Platform cluster to your local PC without the need for a server-based infrastructure.
On a hosted environment, OpenShift Local can create microservices, convert them into images, and run them in Kubernetes-hosted containers directly on your laptop or desktop running Linux, macOS, or Windows 10 or later.
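For example, once the crc command-line tool is downloaded and on your PATH, starting the local cluster is typically a two-command flow (a sketch; exact prompts depend on your OpenShift Local version):

$ crc setup   # configures the host hypervisor and networking prerequisites
$ crc start   # creates and boots the preconfigured single-node cluster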
For more information about OpenShift Local, see Red Hat OpenShift Local Overview.
1.2. Supported platforms for OpenShift Container Platform clusters
In OpenShift Container Platform 4.10, you can install a cluster that uses installer-provisioned infrastructure on the following platforms:
- Amazon Web Services (AWS)
- Google Cloud Platform (GCP)
- Microsoft Azure
- Microsoft Azure Stack Hub
- Red Hat OpenStack Platform (RHOSP) versions 16.1 and 16.2
- The latest OpenShift Container Platform release supports both the latest RHOSP long-life release and intermediate release. For complete RHOSP release compatibility, see the OpenShift Container Platform on RHOSP support matrix.
- IBM Cloud VPC
- Red Hat Virtualization (RHV)
- VMware vSphere
- VMware Cloud (VMC) on AWS
- Alibaba Cloud
- Bare metal
For these clusters, all machines, including the computer that you run the installation process on, must have direct internet access to pull images for platform containers and provide telemetry data to Red Hat.
After installation, the following changes are not supported:
- Mixing cloud provider platforms
- Mixing cloud provider components, such as using a persistent storage framework from a differing platform than what the cluster is installed on
In OpenShift Container Platform 4.10, you can install a cluster that uses user-provisioned infrastructure on the following platforms:
- AWS
- Azure
- Azure Stack Hub
- GCP
- RHOSP versions 16.1 and 16.2
- RHV
- VMware vSphere
- VMware Cloud on AWS
- Bare metal
- IBM Z or LinuxONE
- IBM Power
Depending on the supported cases for the platform, installations on user-provisioned infrastructure allow you to run machines with full internet access, place your cluster behind a proxy, or perform a restricted network installation. In a restricted network installation, you can download the images that are required to install a cluster, place them in a mirror registry, and use that data to install your cluster. While you require internet access to pull images for platform containers, with a restricted network installation on vSphere or bare metal infrastructure, your cluster machines do not require direct internet access.
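As a sketch, a cluster behind a proxy declares it in install-config.yaml before the installation assets are generated; the hosts, ports, and credentials are placeholders:

# excerpt from install-config.yaml
proxy:
  httpProxy: http://<username>:<password>@<proxy_host>:<port>
  httpsProxy: https://<username>:<password>@<proxy_host>:<port>
  noProxy: .example.com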
The OpenShift Container Platform 4.x Tested Integrations page contains details about integration testing for different platforms.
Chapter 2. Selecting a cluster installation method and preparing it for users
Before you install OpenShift Container Platform, decide what kind of installation process to follow and make sure that you have all of the required resources to prepare the cluster for users.
2.1. Selecting a cluster installation type
Before you install an OpenShift Container Platform cluster, you need to select the best installation instructions to follow. Think about your answers to the following questions to select the best option.
2.1.1. Do you want to install and manage an OpenShift Container Platform cluster yourself?
If you want to install and manage OpenShift Container Platform yourself, you can install it on the following platforms:
- Alibaba Cloud
- Amazon Web Services (AWS) on 64-bit x86 instances
- Amazon Web Services (AWS) on 64-bit ARM instances
- Microsoft Azure
- Microsoft Azure Stack Hub
- Google Cloud Platform (GCP)
- Red Hat OpenStack Platform (RHOSP)
- Red Hat Virtualization (RHV)
- IBM Cloud VPC
- IBM Z and LinuxONE
- IBM Z and LinuxONE for Red Hat Enterprise Linux (RHEL) KVM
- IBM Power
- VMware vSphere
- VMware Cloud (VMC) on AWS
- Bare metal or other platform agnostic infrastructure
You can deploy an OpenShift Container Platform 4 cluster to both on-premise hardware and to cloud hosting services, but all of the machines in a cluster must be in the same datacenter or cloud hosting service.
If you want to use OpenShift Container Platform but do not want to manage the cluster yourself, you have several managed service options. If you want a cluster that is fully managed by Red Hat, you can use OpenShift Dedicated or OpenShift Online. You can also use OpenShift as a managed service on Azure, AWS, IBM Cloud VPC, or Google Cloud. For more information about managed services, see the OpenShift Products page. If you install an OpenShift Container Platform cluster with a cloud virtual machine as a virtual bare metal, the corresponding cloud-based storage is not supported.
2.1.2. Have you used OpenShift Container Platform 3 and want to use OpenShift Container Platform 4?
If you used OpenShift Container Platform 3 and want to try OpenShift Container Platform 4, you need to understand how different OpenShift Container Platform 4 is. OpenShift Container Platform 4 weaves the Operators that package, deploy, and manage Kubernetes applications and the operating system that the platform runs on, Red Hat Enterprise Linux CoreOS (RHCOS), together seamlessly. Instead of deploying machines and configuring their operating systems so that you can install OpenShift Container Platform on them, the RHCOS operating system is an integral part of the OpenShift Container Platform cluster. Deploying the operating system for the cluster machines is part of the installation process for OpenShift Container Platform. See Differences between OpenShift Container Platform 3 and 4.
Because you need to provision machines as part of the OpenShift Container Platform cluster installation process, you cannot upgrade an OpenShift Container Platform 3 cluster to OpenShift Container Platform 4. Instead, you must create a new OpenShift Container Platform 4 cluster and migrate your OpenShift Container Platform 3 workloads to it. For more information about migrating, see Migrating from OpenShift Container Platform 3 to 4 overview. Because you must migrate to OpenShift Container Platform 4, you can use any type of production cluster installation process to create your new cluster.
2.1.3. Do you want to use existing components in your cluster?
Because the operating system is integral to OpenShift Container Platform, it is easier to let the installation program for OpenShift Container Platform stand up all of the infrastructure. These are called installer-provisioned infrastructure installations. In this type of installation, you can provide some existing infrastructure to the cluster, but the installation program deploys all of the machines that your cluster initially needs.
You can deploy an installer-provisioned infrastructure cluster without specifying any customizations to the cluster or its underlying machines to Alibaba Cloud, AWS, Azure, Azure Stack Hub, GCP, or VMC on AWS. These installation methods are the fastest way to deploy a production-capable OpenShift Container Platform cluster.
If you need to perform basic configuration for your installer-provisioned infrastructure cluster, such as the instance type for the cluster machines, you can customize an installation for Alibaba Cloud, AWS, Azure, GCP, or VMC on AWS.
For installer-provisioned infrastructure installations, you can use an existing VPC in AWS, vNet in Azure, or VPC in GCP. You can also reuse part of your networking infrastructure so that your cluster in AWS, Azure, GCP, or VMC on AWS can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. If you have existing accounts and credentials on these clouds, you can re-use them, but you might need to modify the accounts to have the required permissions to install OpenShift Container Platform clusters on them.
You can use the installer-provisioned infrastructure method to create appropriate machine instances on your hardware for RHOSP, RHOSP with Kuryr, RHOSP on SR-IOV, RHV, vSphere, and bare metal. Additionally, for vSphere and VMC on AWS, you can also customize additional network parameters during installation.
If you want to reuse extensive cloud infrastructure, you can complete a user-provisioned infrastructure installation. With these installations, you manually deploy the machines that your cluster requires during the installation process. If you perform a user-provisioned infrastructure installation on AWS, Azure, Azure Stack Hub, GCP, or VMC on AWS, you can use the provided templates to help you stand up all of the required components. You can also reuse a shared VPC on GCP. Otherwise, you can use the provider-agnostic installation method to deploy a cluster into other clouds.
You can also complete a user-provisioned infrastructure installation on your existing hardware. If you use RHOSP, RHOSP on SR-IOV, RHV, IBM Z or LinuxONE, IBM Z or LinuxONE with RHEL KVM, IBM Power, or vSphere, use the specific installation instructions to deploy your cluster. If you use other supported hardware, follow the bare metal installation procedure. For some of these platforms, such as RHOSP, vSphere, VMC on AWS, and bare metal, you can also customize additional network parameters during installation.
2.1.4. Do you need extra security for your cluster?
If you use a user-provisioned installation method, you can configure a proxy for your cluster. The instructions are included in each installation procedure.
If you want to prevent your cluster on a public cloud from exposing endpoints externally, you can deploy a private cluster with installer-provisioned infrastructure on AWS, Azure, or GCP.
If you need to install a cluster that has limited access to the internet, such as a disconnected or restricted network cluster, you can mirror the installation packages and install the cluster from them. Follow detailed instructions for user-provisioned infrastructure installations into restricted networks for AWS, GCP, IBM Z or LinuxONE, IBM Z or LinuxONE with RHEL KVM, IBM Power, vSphere, VMC on AWS, or bare metal. You can also install a cluster into a restricted network using installer-provisioned infrastructure by following detailed instructions for AWS, GCP, VMC on AWS, RHOSP, RHV, and vSphere.
If you need to deploy your cluster to an AWS GovCloud region, AWS China region, or Azure government region, you can configure those custom regions during an installer-provisioned infrastructure installation.
You can also configure the cluster machines to use FIPS Validated / Modules in Process cryptographic libraries during installation.
The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.
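Enabling FIPS mode is controlled by a single field in install-config.yaml, set before you generate manifests or Ignition configs; a minimal excerpt:

# excerpt from install-config.yaml
apiVersion: v1
baseDomain: example.com
fips: true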
2.2. Preparing your cluster for users after installation
Some configuration is not required to install the cluster but is recommended before your users access the cluster. You can customize the cluster itself by customizing the Operators that make up your cluster and integrate your cluster with other required systems, such as an identity provider.
For a production cluster, you must configure the following integrations:
- Persistent storage
- An identity provider
- Monitoring core OpenShift Container Platform components
2.3. Preparing your cluster for workloads
Depending on your workload needs, you might need to take extra steps before you begin deploying applications. For example, after you prepare infrastructure to support your application build strategy, you might need to make provisions for low-latency workloads or to protect sensitive workloads. You can also configure monitoring for application workloads.
If you plan to run Windows workloads, you must enable hybrid networking with OVN-Kubernetes during the installation process; hybrid networking cannot be enabled after your cluster is installed.
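The following manifest sketch, based on the cluster network Operator configuration, shows the shape of a hybrid networking customization applied before the Ignition configs are generated; the CIDR and host prefix are example values that you must match to your own network plan:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      hybridOverlayConfig:
        hybridClusterNetwork:
        - cidr: 10.132.0.0/14
          hostPrefix: 23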
2.4. Supported installation methods for different platforms
You can perform different types of installations on different platforms.
Not all installation options are supported for all platforms, as shown in the following tables. A checkmark indicates that the option is supported and links to the relevant section.
Table 2.1. Supported installer-provisioned infrastructure options
| | Alibaba | AWS (64-bit x86) | AWS (64-bit ARM) | Azure | Azure Stack Hub | GCP | RHOSP | RHOSP on SR-IOV | RHV | Bare metal | vSphere | VMC | IBM Cloud VPC | IBM Z | IBM Power |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Default | |||||||||||||||
| Custom | |||||||||||||||
| Network customization | |||||||||||||||
| Restricted network | |||||||||||||||
| Private clusters | |||||||||||||||
| Existing virtual private networks | |||||||||||||||
| Government regions | |||||||||||||||
| Secret regions | |||||||||||||||
| China regions |
Table 2.2. Supported user-provisioned infrastructure options
| | Alibaba | AWS | Azure | Azure Stack Hub | GCP | RHOSP | RHOSP on SR-IOV | RHV | Bare metal (64-bit x86) | Bare metal (64-bit ARM) | vSphere | VMC | IBM Cloud VPC | IBM Z | IBM Z with RHEL KVM | IBM Power | Platform agnostic |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Custom | |||||||||||||||||
| Network customization | |||||||||||||||||
| Restricted network | |||||||||||||||||
| Shared VPC hosted outside of cluster project |
Chapter 3. Disconnected installation mirroring
3.1. About disconnected installation mirroring
You can use a mirror registry to ensure that your clusters only use container images that satisfy your organizational controls on external content. Before you install a cluster on infrastructure that you provision in a restricted network, you must mirror the required container images into that environment. To mirror container images, you must have a registry for mirroring.
3.1.1. Creating a mirror registry
If you already have a container image registry, such as Red Hat Quay, you can use it as your mirror registry. If you do not already have a registry, you can create a mirror registry using the mirror registry for Red Hat OpenShift.
3.1.2. Mirroring images for a disconnected installation
You can use one of the following procedures to mirror your OpenShift Container Platform image repository to your mirror registry:
- Mirroring images for a disconnected installation
- Mirroring images for a disconnected installation using the oc-mirror plugin
3.2. Creating a mirror registry with mirror registry for Red Hat OpenShift
The mirror registry for Red Hat OpenShift is a small and streamlined container registry that you can use as a target for mirroring the required container images of OpenShift Container Platform for disconnected installations.
If you already have a container image registry, such as Red Hat Quay, you can skip this section and go straight to Mirroring the OpenShift Container Platform image repository.
3.2.1. Prerequisites
- An OpenShift Container Platform subscription.
- Red Hat Enterprise Linux (RHEL) 8 and 9 with Podman 3.3 and OpenSSL installed.
- Fully qualified domain name for the Red Hat Quay service, which must resolve through a DNS server.
- Key-based SSH connectivity on the target host. SSH keys are automatically generated for local installs. For remote hosts, you must generate your own SSH keys.
- 2 or more vCPUs.
- 8 GB of RAM.
- About 12 GB for OpenShift Container Platform 4.10 release images, or about 358 GB for OpenShift Container Platform 4.10 release images and OpenShift Container Platform 4.10 Red Hat Operator images. Up to 1 TB per stream or more is suggested.

Important
These requirements are based on local testing results with only release images and Operator images. Storage requirements can vary based on your organization's needs. You might require more space, for example, when you mirror multiple z-streams. You can use standard Red Hat Quay functionality or the proper API callout to remove unnecessary images and free up space.
3.2.2. Mirror registry for Red Hat OpenShift introduction
For disconnected deployments of OpenShift Container Platform, a container registry is required to carry out the installation of the clusters. To run a production-grade registry service on such a cluster, you must create a separate registry deployment to install the first cluster. The mirror registry for Red Hat OpenShift addresses this need and is included in every OpenShift subscription. It is available for download on the OpenShift console Downloads page.
The mirror registry for Red Hat OpenShift allows users to install a small-scale version of Red Hat Quay and its required components using the mirror-registry command line interface (CLI) tool. The mirror registry for Red Hat OpenShift is deployed automatically with pre-configured local storage and a local database. It also includes auto-generated user credentials and access permissions with a single set of inputs and no additional configuration choices to get started.
The mirror registry for Red Hat OpenShift provides a pre-determined network configuration and reports deployed component credentials and access URLs upon success. A limited set of optional configuration inputs like fully qualified domain name (FQDN) services, superuser name and password, and custom TLS certificates are also provided. This provides users with a container registry so that they can easily create an offline mirror of all OpenShift Container Platform release content when running OpenShift Container Platform in restricted network environments.
The mirror registry for Red Hat OpenShift is limited to hosting images that are required to install a disconnected OpenShift Container Platform cluster, such as Release images or Red Hat Operator images. It uses local storage on your Red Hat Enterprise Linux (RHEL) machine, and storage supported by RHEL is supported by the mirror registry for Red Hat OpenShift. Content built by customers should not be hosted by the mirror registry for Red Hat OpenShift.
Unlike Red Hat Quay, the mirror registry for Red Hat OpenShift is not a highly-available registry and only local file system storage is supported. Using the mirror registry for Red Hat OpenShift with more than one cluster is discouraged, because multiple clusters can create a single point of failure when updating your cluster fleet. It is advised to leverage the mirror registry for Red Hat OpenShift to install a cluster that can host a production-grade, highly-available registry such as Red Hat Quay, which can serve OpenShift Container Platform content to other clusters.
Use of the mirror registry for Red Hat OpenShift is optional if another container registry is already available in the install environment.
3.2.3. Mirroring on a local host with mirror registry for Red Hat OpenShift
This procedure explains how to install the mirror registry for Red Hat OpenShift on a local host using the mirror-registry installer tool. By doing so, users can create a local host registry running on port 443 for the purpose of storing a mirror of OpenShift Container Platform images.
Installing the mirror registry for Red Hat OpenShift using the mirror-registry CLI tool makes several changes to your machine. After installation, a $HOME/quay-install directory is created, which has installation files, local storage, and the configuration bundle. Trusted SSH keys are generated in case the deployment target is the local host, and systemd files on the host machine are set up to ensure that container runtimes are persistent. Additionally, an initial user named init is created with an automatically generated password. All access credentials are printed at the end of the install routine.
Procedure

1. Download the mirror-registry.tar.gz package for the latest version of the mirror registry for Red Hat OpenShift found on the OpenShift console Downloads page.
2. Install the mirror registry for Red Hat OpenShift on your local host with your current user account by using the mirror-registry tool. For a full list of available flags, see "mirror registry for Red Hat OpenShift flags".

   $ ./mirror-registry install \
     --quayHostname <host_example_com> \
     --quayRoot <example_directory_name>

3. Use the user name and password generated during installation to log into the registry by running the following command:

   $ podman login -u init \
     -p <password> \
     <host_example_com>:8443 \
     --tls-verify=false

   You can avoid running --tls-verify=false by configuring your system to trust the generated rootCA certificates. See "Using SSL to protect connections to Red Hat Quay" and "Configuring the system to trust the certificate authority" for more information.

   Note
   You can also log in by accessing the UI at https://<host.example.com>:8443 after installation.

4. You can mirror OpenShift Container Platform images after logging in. Depending on your needs, see either the "Mirroring the OpenShift Container Platform image repository" or the "Mirroring Operator catalogs for use with disconnected clusters" sections of this document.

   Note
   If there are issues with images stored by the mirror registry for Red Hat OpenShift due to storage layer problems, you can remirror the OpenShift Container Platform images, or reinstall mirror registry on more stable storage.
3.2.4. Updating mirror registry for Red Hat OpenShift from a local host
This procedure explains how to update the mirror registry for Red Hat OpenShift from a local host using the upgrade command. Updating to the latest version ensures new features, bug fixes, and security vulnerability fixes.
When updating, there is intermittent downtime of your mirror registry, as it is restarted during the update process.
Prerequisites
- You have installed the mirror registry for Red Hat OpenShift on a local host.
Procedure
- If you are upgrading the mirror registry for Red Hat OpenShift from 1.2.z → 1.3.0, and your installation directory is the default at /etc/quay-install, you can enter the following command:

  $ sudo ./mirror-registry upgrade -v

  Note
  - The mirror registry for Red Hat OpenShift migrates Podman volumes for Quay storage, Postgres data, and /etc/quay-install data to the new $HOME/quay-install location. This allows you to use the mirror registry for Red Hat OpenShift without the --quayRoot flag during future upgrades.
  - Users who upgrade the mirror registry for Red Hat OpenShift with the ./mirror-registry upgrade -v flag must include the same credentials used when creating their mirror registry. For example, if you installed the mirror registry for Red Hat OpenShift with --quayHostname <host_example_com> and --quayRoot <example_directory_name>, you must include that string to properly upgrade the mirror registry.

- If you are upgrading the mirror registry for Red Hat OpenShift from 1.2.z → 1.3.0 and you used a specified directory in your 1.2.z deployment, you must pass in the new --pgStorage and --quayStorage flags. For example:

  $ sudo ./mirror-registry upgrade --quayHostname <host_example_com> --quayRoot <example_directory_name> --pgStorage <example_directory_name>/pg-data --quayStorage <example_directory_name>/quay-storage -v
3.2.5. Mirroring on a remote host with mirror registry for Red Hat OpenShift
This procedure explains how to install the mirror registry for Red Hat OpenShift on a remote host using the mirror-registry tool. By doing so, users can create a registry to hold a mirror of OpenShift Container Platform images.
Installing the mirror registry for Red Hat OpenShift using the mirror-registry CLI tool makes several changes to your machine. After installation, a $HOME/quay-install directory is created, which has installation files, local storage, and the configuration bundle. Trusted SSH keys are generated in case the deployment target is the local host, and systemd files on the host machine are set up to ensure that container runtimes are persistent. Additionally, an initial user named init is created with an automatically generated password. All access credentials are printed at the end of the install routine.
Procedure

1. Download the mirror-registry.tar.gz package for the latest version of the mirror registry for Red Hat OpenShift found on the OpenShift console Downloads page.
2. Install the mirror registry for Red Hat OpenShift on your remote host with your current user account by using the mirror-registry tool. For a full list of available flags, see "mirror registry for Red Hat OpenShift flags". For example, to install over SSH (the hostname, user, key path, and directory are placeholders):

   $ ./mirror-registry install -v \
     --targetHostname <host_example_com> \
     --targetUsername <example_user> \
     -k ~/.ssh/my_ssh_key \
     --quayHostname <host_example_com> \
     --quayRoot <example_directory_name>

3. Use the user name and password generated during installation to log into the mirror registry by running the following command:

   $ podman login -u init \
     -p <password> \
     <host_example_com>:8443 \
     --tls-verify=false

   You can avoid running --tls-verify=false by configuring your system to trust the generated rootCA certificates. See "Using SSL to protect connections to Red Hat Quay" and "Configuring the system to trust the certificate authority" for more information.

   Note
   You can also log in by accessing the UI at https://<host.example.com>:8443 after installation.

4. You can mirror OpenShift Container Platform images after logging in. Depending on your needs, see either the "Mirroring the OpenShift Container Platform image repository" or the "Mirroring Operator catalogs for use with disconnected clusters" sections of this document.

   Note
   If there are issues with images stored by the mirror registry for Red Hat OpenShift due to storage layer problems, you can remirror the OpenShift Container Platform images, or reinstall mirror registry on more stable storage.
3.2.6. Updating mirror registry for Red Hat OpenShift from a remote host
This procedure explains how to update the mirror registry for Red Hat OpenShift from a remote host using the upgrade command. Updating to the latest version ensures bug fixes and security vulnerability fixes.
When updating, there is intermittent downtime of your mirror registry, as it is restarted during the update process.
Prerequisites
- You have installed the mirror registry for Red Hat OpenShift on a remote host.
Procedure
To upgrade the mirror registry for Red Hat OpenShift from a remote host, enter the following command:
$ ./mirror-registry upgrade -v --targetHostname <remote_host_url> --targetUsername <user_name> -k ~/.ssh/my_ssh_key

Note
Users who upgrade the mirror registry for Red Hat OpenShift with the ./mirror-registry upgrade -v flag must include the same credentials used when creating their mirror registry. For example, if you installed the mirror registry for Red Hat OpenShift with --quayHostname <host_example_com> and --quayRoot <example_directory_name>, you must include that string to properly upgrade the mirror registry.
3.2.7. Uninstalling the mirror registry for Red Hat OpenShift
You can uninstall the mirror registry for Red Hat OpenShift from your local host by running the following command:

$ ./mirror-registry uninstall -v \
  --quayRoot <example_directory_name>

Note
- Deleting the mirror registry for Red Hat OpenShift will prompt the user before deletion. You can use --autoApprove to skip this prompt.
- Users who install the mirror registry for Red Hat OpenShift with the --quayRoot flag must include the --quayRoot flag when uninstalling. For example, if you installed the mirror registry for Red Hat OpenShift with --quayRoot example_directory_name, you must include that string to properly uninstall the mirror registry.
3.2.8. Mirror registry for Red Hat OpenShift flags
The following flags are available for the mirror registry for Red Hat OpenShift:
| Flags | Description |
|---|---|
| --autoApprove | A boolean value that disables interactive prompts. If set to true, the quayRoot directory is automatically deleted when uninstalling. Defaults to false if left unspecified. |
| --initPassword | The password of the init user created during Quay installation. Must be at least eight characters and contain no whitespace. |
| --initUser | Shows the username of the initial user. Defaults to init if left unspecified. |
| --no-color, -c | Allows users to disable color sequences and propagate that to Ansible when running install, uninstall, and upgrade commands. |
| --pgStorage | The folder where Postgres persistent storage data is saved. Defaults to the pg-storage Podman volume. Root privileges are required to uninstall. |
| --quayHostname | The fully-qualified domain name of the mirror registry that clients will use to contact the registry. Equivalent to SERVER_HOSTNAME in the Quay config.yaml. Must resolve by DNS. Defaults to <targetHostname>:8443 if left unspecified. |
| --quayStorage | The folder where Quay persistent storage data is saved. Defaults to the quay-storage Podman volume. Root privileges are required to uninstall. |
| --quayRoot, -r | The directory where container image layer and configuration data is saved, including rootCA.key, rootCA.pem, and rootCA.srl certificates. Defaults to $HOME/quay-install if left unspecified. |
| --ssh-key, -k | The path of your SSH identity key. Defaults to ~/.ssh/quay_installer if left unspecified. |
| --sslCert | The path to the SSL/TLS public key / certificate. Defaults to {quayRoot}/quay-config and is auto-generated if left unspecified. |
| --sslCheckSkip | Skips the check for the certificate hostname against the SERVER_HOSTNAME in the Quay config.yaml file. |
| --sslKey | The path to the SSL/TLS private key used for HTTPS communication. Defaults to {quayRoot}/quay-config and is auto-generated if left unspecified. |
| --targetHostname, -H | The hostname of the target you want to install Quay to. Defaults to $HOST, for example, a local host, if left unspecified. |
| --targetUsername, -u | The user on the target host which will be used for SSH. Defaults to $USER, for example, the current user, if left unspecified. |
| --verbose, -v | Shows debug logs and Ansible playbook outputs. |
| --version | Shows the version for the mirror registry for Red Hat OpenShift. |

Note
- --quayHostname must be modified if the public DNS name of your system is different from the local hostname. Additionally, the --quayHostname flag does not support installation with an IP address. Installation with a hostname is required.
- --sslCheckSkip is used in cases when the mirror registry is set behind a proxy and the exposed hostname is different from the internal Quay hostname. It can also be used when users do not want the certificates to be validated against the provided Quay hostname during installation.
3.2.9. Mirror registry for Red Hat OpenShift release notes
The mirror registry for Red Hat OpenShift is a small and streamlined container registry that you can use as a target for mirroring the required container images of OpenShift Container Platform for disconnected installations.
These release notes track the development of the mirror registry for Red Hat OpenShift in OpenShift Container Platform.
For an overview of the mirror registry for Red Hat OpenShift, see Creating a mirror registry with mirror registry for Red Hat OpenShift.
3.2.9.1. Mirror registry for Red Hat OpenShift 1.3.8
Issued: 2023-08-16
Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.11.
The following advisory is available for the mirror registry for Red Hat OpenShift:
3.2.9.2. Mirror registry for Red Hat OpenShift 1.3.7
Issued: 2023-07-19
Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.10.
The following advisory is available for the mirror registry for Red Hat OpenShift:
3.2.9.3. Mirror registry for Red Hat OpenShift 1.3.6
Issued: 2023-05-30
Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.8.
The following advisory is available for the mirror registry for Red Hat OpenShift:
3.2.9.4. Mirror registry for Red Hat OpenShift 1.3.5
Issued: 2023-05-18
Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.7.
The following advisory is available for the mirror registry for Red Hat OpenShift:
3.2.9.5. Mirror registry for Red Hat OpenShift 1.3.4
Issued: 2023-04-25
Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.6.
The following advisory is available for the mirror registry for Red Hat OpenShift:
3.2.9.6. Mirror registry for Red Hat OpenShift 1.3.3
Issued: 2023-04-05
Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.5.
The following advisory is available for the mirror registry for Red Hat OpenShift:
3.2.9.7. Mirror registry for Red Hat OpenShift 1.3.2
Issued: 2023-03-21
Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.4.
The following advisory is available for the mirror registry for Red Hat OpenShift:
3.2.9.8. Mirror registry for Red Hat OpenShift 1.3.1
Issued: 2023-03-07
Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.3.
The following advisory is available for the mirror registry for Red Hat OpenShift:
3.2.9.9. Mirror registry for Red Hat OpenShift 1.3.0
Issued: 2023-02-20
Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.1.
The following advisory is available for the mirror registry for Red Hat OpenShift:
3.2.9.9.1. New features
- Mirror registry for Red Hat OpenShift is now supported on Red Hat Enterprise Linux (RHEL) 9 installations.
- IPv6 support is now available on mirror registry for Red Hat OpenShift local host installations.

  Note
  IPv6 is currently unsupported on mirror registry for Red Hat OpenShift remote host installations.

- A new feature flag, --quayStorage, has been added. With this flag, users with root privileges can manually set the location of their Quay persistent storage.
- A new feature flag, --pgStorage, has been added. With this flag, users with root privileges can manually set the location of their Postgres persistent storage.
- Previously, users were required to have root privileges (sudo) to install mirror registry for Red Hat OpenShift. With this update, sudo is no longer required to install mirror registry for Red Hat OpenShift.

  When mirror registry for Red Hat OpenShift was installed with sudo, an /etc/quay-install directory that contained installation files, local storage, and the configuration bundle was created. With the removal of the sudo requirement, installation files and the configuration bundle are now installed to $HOME/quay-install. Local storage, for example Postgres and Quay, are now stored in named volumes automatically created by Podman.

  To override the default directories that these files are stored in, you can use the command line arguments for mirror registry for Red Hat OpenShift. For more information about mirror registry for Red Hat OpenShift command line arguments, see "Mirror registry for Red Hat OpenShift flags".
3.2.9.9.2. Bug fixes
- Previously, the following error could be returned when attempting to uninstall mirror registry for Red Hat OpenShift: ["Error: no container with name or ID \"quay-postgres\" found: no such container"], "stdout": "", "stdout_lines": []. With this update, the order in which mirror registry for Red Hat OpenShift services are stopped and uninstalled has been changed so that the error no longer occurs when uninstalling mirror registry for Red Hat OpenShift. For more information, see PROJQUAY-4629.
3.2.9.10. Mirror registry for Red Hat OpenShift 1.2.9
Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.10.
The following advisory is available for the mirror registry for Red Hat OpenShift:
3.2.9.11. Mirror registry for Red Hat OpenShift 1.2.8
Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.9.
The following advisory is available for the mirror registry for Red Hat OpenShift:
3.2.9.12. Mirror registry for Red Hat OpenShift 1.2.7
Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.8.
The following advisory is available for the mirror registry for Red Hat OpenShift:
3.2.9.12.1. Bug fixes
- Previously, getFQDN() relied on the fully-qualified domain name (FQDN) library to determine its FQDN, and the FQDN library tried to read the /etc/hosts file directly. Consequently, on some Red Hat Enterprise Linux CoreOS (RHCOS) installations with uncommon DNS configurations, the FQDN library would fail to install and abort the installation. With this update, mirror registry for Red Hat OpenShift uses hostname to determine the FQDN. As a result, the FQDN library does not fail to install. (PROJQUAY-4139)
3.2.9.13. Mirror registry for Red Hat OpenShift 1.2.6
Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.7.
The following advisory is available for the mirror registry for Red Hat OpenShift:
3.2.9.13.1. New features
A new feature flag, --no-color (-c), has been added. This feature flag allows users to disable color sequences and propagate that to Ansible when running install, uninstall, and upgrade commands.
3.2.9.14. Mirror registry for Red Hat OpenShift 1.2.5
Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.6.
The following advisory is available for the mirror registry for Red Hat OpenShift:
3.2.9.15. Mirror registry for Red Hat OpenShift 1.2.4
Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.5.
The following advisory is available for the mirror registry for Red Hat OpenShift:
3.2.9.16. Mirror registry for Red Hat OpenShift 1.2.3
Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.4.
The following advisory is available for the mirror registry for Red Hat OpenShift:
3.2.9.17. Mirror registry for Red Hat OpenShift 1.2.2
Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.3.
The following advisory is available for the mirror registry for Red Hat OpenShift:
3.2.9.18. Mirror registry for Red Hat OpenShift 1.2.1
Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.2.
The following advisory is available for the mirror registry for Red Hat OpenShift:
3.2.9.19. Mirror registry for Red Hat OpenShift 1.2.0
Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.1.
The following advisory is available for the mirror registry for Red Hat OpenShift:
3.2.9.19.1. Bug fixes
- Previously, all components and workers running inside of the Quay pod Operator had log levels set to DEBUG. As a result, large traffic logs were created that consumed unnecessary space. With this update, log levels are set to WARN by default, which reduces traffic information while emphasizing problem scenarios. (PROJQUAY-3504)
3.2.9.20. Mirror registry for Red Hat OpenShift 1.1.0
The following advisory is available for the mirror registry for Red Hat OpenShift:
3.2.9.20.1. New features
- A new command, mirror-registry upgrade, has been added. This command upgrades all container images without interfering with configurations or data.

  Note
  If quayRoot was previously set to something other than default, it must be passed into the upgrade command.
3.2.9.20.2. Bug fixes
- Previously, the absence of quayHostname or targetHostname did not default to the local hostname. With this update, quayHostname and targetHostname now default to the local hostname if they are missing. (PROJQUAY-3079)
- Previously, the command ./mirror-registry --version returned an unknown flag error. Now, running ./mirror-registry --version returns the current version of the mirror registry for Red Hat OpenShift. (PROJQUAY-3086)
- Previously, users could not set a password during installation, for example, when running ./mirror-registry install --initUser <user_name> --initPassword <password> --verbose. With this update, users can set a password during installation. (PROJQUAY-3149)
- Previously, the mirror registry for Red Hat OpenShift did not recreate pods if they were destroyed. Now, pods are recreated if they are destroyed. (PROJQUAY-3261)
3.2.10. Troubleshooting mirror registry for Red Hat OpenShift
To assist in troubleshooting mirror registry for Red Hat OpenShift, you can gather logs of systemd services installed by the mirror registry. The following services are installed:
- quay-app.service
- quay-postgres.service
- quay-redis.service
- quay-pod.service
Prerequisites
- You have installed mirror registry for Red Hat OpenShift.
Procedure
If you installed mirror registry for Red Hat OpenShift with root privileges, you can get the status information of its systemd services by entering the following command:
$ sudo systemctl status <service>

If you installed mirror registry for Red Hat OpenShift as a standard user, you can get the status information of its systemd services by entering the following command:

$ systemctl --user status <service>
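Beyond status checks, the service logs themselves are kept in the journal. A minimal sketch of collecting them, assuming the quay-app.service unit from the list above and an arbitrary one-hour window:

For a root installation:

$ sudo journalctl -u quay-app.service --since "1 hour ago" > quay-app.log

For a standard-user installation:

$ journalctl --user -u quay-app.service --since "1 hour ago" > quay-app.log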
3.3. Mirroring images for a disconnected installation
You can ensure your clusters only use container images that satisfy your organizational controls on external content. Before you install a cluster on infrastructure that you provision in a restricted network, you must mirror the required container images into that environment. To mirror container images, you must have a registry for mirroring.
You must have access to the internet to obtain the necessary container images. In this procedure, you place your mirror registry on a mirror host that has access to both your network and the internet. If you do not have access to a mirror host, use the Mirroring Operator catalogs for use with disconnected clusters procedure to copy images to a device that you can move across network boundaries.
3.3.1. Prerequisites
You must have a container image registry that supports Docker v2-2 in the location that will host the OpenShift Container Platform cluster, such as Red Hat Quay.
If you have an entitlement to Red Hat Quay, see the documentation on deploying Red Hat Quay for proof-of-concept purposes or by using the Quay Operator. If you need additional assistance selecting and installing a registry, contact your sales representative or Red Hat support.
- If you do not already have an existing solution for a container image registry, subscribers of OpenShift Container Platform are provided a mirror registry for Red Hat OpenShift. The mirror registry for Red Hat OpenShift is included with your subscription and is a small-scale container registry that can be used to mirror the required container images of OpenShift Container Platform in disconnected installations.
3.3.2. About the mirror registry
You can mirror the images that are required for OpenShift Container Platform installation and subsequent product updates to a container mirror registry such as Red Hat Quay, JFrog Artifactory, Sonatype Nexus Repository, or Harbor. If you do not have access to a large-scale container registry, you can use the mirror registry for Red Hat OpenShift, a small-scale container registry included with OpenShift Container Platform subscriptions.
You can use any container registry that supports Docker v2-2, such as Red Hat Quay, the mirror registry for Red Hat OpenShift, Artifactory, Sonatype Nexus Repository, or Harbor. Regardless of your chosen registry, the procedure to mirror content from Red Hat hosted sites on the internet to an isolated image registry is the same. After you mirror the content, you configure each cluster to retrieve this content from your mirror registry.
The OpenShift image registry cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process.
If choosing a container registry that is not the mirror registry for Red Hat OpenShift, it must be reachable by every machine in the clusters that you provision. If the registry is unreachable, installation, updating, or normal operations such as workload relocation might fail. For that reason, you must run mirror registries in a highly available way, and the mirror registries must at least match the production availability of your OpenShift Container Platform clusters.
When you populate your mirror registry with OpenShift Container Platform images, you can follow two scenarios. If you have a host that can access both the internet and your mirror registry, but not your cluster nodes, you can directly mirror the content from that machine. This process is referred to as connected mirroring. If you have no such host, you must mirror the images to a file system and then bring that host or removable media into your restricted environment. This process is referred to as disconnected mirroring.
For mirrored registries, to view the source of pulled images, you must review the Trying to access log entry in the CRI-O logs. Other methods to view the image pull source, such as using the crictl images command on a node, show the non-mirrored image name, even though the image is pulled from the mirrored location.
Red Hat does not test third party registries with OpenShift Container Platform.
Additional information
For information about viewing the CRI-O logs to view the image source, see Viewing the image pull source.
3.3.3. Preparing your mirror host
Before you perform the mirror procedure, you must prepare the host to retrieve content and push it to the remote location.
3.3.3.1. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.10. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 Linux Client entry and save the file.
Unpack the archive:
$ tar xvf <file>

Place the oc binary in a directory that is on your PATH.

To check your PATH, execute the following command:

$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
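On Linux, for example, the complete sequence might look like the following; the archive name is illustrative and varies by release:

$ tar xvf openshift-client-linux.tar.gz   # unpacks the oc and kubectl binaries
$ sudo mv oc /usr/local/bin/              # /usr/local/bin is typically on PATH
$ oc version --client                     # confirm the client runs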
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.

To check your PATH, open the command prompt and execute the following command:

C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 MacOSX Client entry and save the file.
- Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.

To check your PATH, open a terminal and execute the following command:

$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
3.3.4. Configuring credentials that allow images to be mirrored
Create a container image registry credentials file that allows mirroring images from Red Hat to your mirror.
Do not use this image registry credentials file as the pull secret when you install a cluster. If you provide this file when you install a cluster, all of the machines in the cluster will have write access to your mirror registry.
This process requires that you have write access to a container image registry on the mirror registry and adds the credentials to a registry pull secret.
Prerequisites
- You configured a mirror registry to use in your disconnected environment.
- You identified an image repository location on your mirror registry to mirror images into.
- You provisioned a mirror registry account that allows images to be uploaded to that image repository.
Procedure
Complete the following steps on the installation host:
- Download your registry.redhat.io pull secret from the Red Hat OpenShift Cluster Manager.
- Make a copy of your pull secret in JSON format:

$ cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json>

Specify the path to the folder to store the pull secret in and a name for the JSON file that you create.
The contents of the file resemble the following example:
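The exact entries vary by account; a minimal sketch with truncated placeholder auth strings (the real file includes an entry for each Red Hat registry, such as quay.io and registry.connect.redhat.com):

{
  "auths": {
    "cloud.openshift.com": {
      "auth": "b3BlbnNo...",
      "email": "you@example.com"
    },
    "registry.redhat.io": {
      "auth": "b3BlbnNo...",
      "email": "you@example.com"
    }
  }
}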
Generate the base64-encoded user name and password or token for your mirror registry:
$ echo -n '<user_name>:<password>' | base64 -w0
BGVtbYk3ZHAtqXs=

For <user_name> and <password>, specify the user name and password that you configured for your registry.
Edit the JSON file and add a section that describes your registry to it:
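The following is a sketch of the section to add; <mirror_registry> is your registry hostname (and port, if any) and <credentials> is the base64-encoded string generated in the previous step:

"auths": {
  "<mirror_registry>": {
    "auth": "<credentials>",
    "email": "you@example.com"
  }
},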
The file resembles the following example:
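The exact contents depend on your registries; a representative sketch, with placeholder auth values:

{
  "auths": {
    "mirror.example.com:5000": {
      "auth": "BGVtbYk3ZHAtqXs=",
      "email": "you@example.com"
    },
    "cloud.openshift.com": {
      "auth": "b3BlbnNo...",
      "email": "you@example.com"
    },
    "registry.redhat.io": {
      "auth": "b3BlbnNo...",
      "email": "you@example.com"
    }
  }
}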
3.3.5. Mirroring the OpenShift Container Platform image repository
Mirror the OpenShift Container Platform image repository to your registry to use during cluster installation or upgrade.
Prerequisites
- Your mirror host has access to the internet.
- You configured a mirror registry to use in your restricted network and can access the certificate and credentials that you configured.
- You downloaded the pull secret from the Red Hat OpenShift Cluster Manager and modified it to include authentication to your mirror repository.
- If you use self-signed certificates, you have specified a Subject Alternative Name in the certificates.
Procedure
Complete the following steps on the mirror host:
- Review the OpenShift Container Platform downloads page to determine the version of OpenShift Container Platform that you want to install and determine the corresponding tag on the Repository Tags page.
Set the required environment variables:
Export the release version:
$ OCP_RELEASE=<release_version>

For <release_version>, specify the tag that corresponds to the version of OpenShift Container Platform to install, such as 4.5.4.

Export the local registry name and host port:
$ LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'

For <local_registry_host_name>, specify the registry domain name for your mirror repository, and for <local_registry_host_port>, specify the port that it serves content on.

Export the local repository name:
$ LOCAL_REPOSITORY='<local_repository_name>'

For <local_repository_name>, specify the name of the repository to create in your registry, such as ocp4/openshift4.

Export the name of the repository to mirror:
$ PRODUCT_REPO='openshift-release-dev'

For a production release, you must specify openshift-release-dev.

Export the path to your registry pull secret:
$ LOCAL_SECRET_JSON='<path_to_pull_secret>'

For <path_to_pull_secret>, specify the absolute path to and file name of the pull secret for your mirror registry that you created.

Export the release mirror:
$ RELEASE_NAME="ocp-release"

For a production release, you must specify ocp-release.

Export the type of architecture for your server, such as x86_64:

$ ARCHITECTURE=<server_architecture>

Export the path to the directory to host the mirrored images:

$ REMOVABLE_MEDIA_PATH=<path>

Specify the full path, including the initial forward slash (/) character.
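Taken together, a filled-in set of exports might look like the following; every value is illustrative:

$ OCP_RELEASE=4.10.13
$ LOCAL_REGISTRY='mirror.example.com:5000'
$ LOCAL_REPOSITORY='ocp4/openshift4'
$ PRODUCT_REPO='openshift-release-dev'
$ LOCAL_SECRET_JSON='/home/user/pull-secret.json'
$ RELEASE_NAME='ocp-release'
$ ARCHITECTURE='x86_64'
$ REMOVABLE_MEDIA_PATH='/mnt/usb'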
Mirror the version images to the mirror registry:
If your mirror host does not have internet access, take the following actions:
- Connect the removable media to a system that is connected to the internet.
Review the images and configuration manifests to mirror:
$ oc adm release mirror -a ${LOCAL_SECRET_JSON} \
    --from=quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE} \
    --to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} \
    --to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE} \
    --dry-run
Record the entire
imageContentSourcessection from the output of the previous command. The information about your mirrors is unique to your mirrored repository, and you must add theimageContentSourcessection to theinstall-config.yamlfile during installation. Mirror the images to a directory on the removable media:
$ oc adm release mirror -a ${LOCAL_SECRET_JSON} --to-dir=${REMOVABLE_MEDIA_PATH}/mirror quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE}

Take the media to the restricted network environment and upload the images to the local container registry.
$ oc image mirror -a ${LOCAL_SECRET_JSON} --from-dir=${REMOVABLE_MEDIA_PATH}/mirror "file://openshift/release:${OCP_RELEASE}*" ${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}

For REMOVABLE_MEDIA_PATH, you must use the same path that you specified when you mirrored the images.
Important
Running oc image mirror might result in the following error: error: unable to retrieve source image. This error occurs when image indexes include references to images that no longer exist on the image registry. Image indexes might retain older references to allow users running those images an upgrade path to newer points on the upgrade graph. As a temporary workaround, you can use the --skip-missing option to bypass the error and continue downloading the image index. For more information, see Service Mesh Operator mirroring failed.
If the local container registry is connected to the mirror host, take the following actions:
Directly push the release images to the local registry by using the following command:

$ oc adm release mirror -a ${LOCAL_SECRET_JSON} \
    --from=quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE} \
    --to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} \
    --to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE}

This command pulls the release information as a digest, and its output includes the imageContentSources data that you require when you install your cluster.

Record the entire imageContentSources section from the output of the previous command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation.

Note
The image name gets patched to Quay.io during the mirroring process, and the podman images will show Quay.io in the registry on the bootstrap virtual machine.
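For reference, a recorded imageContentSources section typically takes the following shape in install-config.yaml; the registry host and repository shown are illustrative:

imageContentSources:
- mirrors:
  - mirror.example.com:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - mirror.example.com:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev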
To create the installation program that is based on the content that you mirrored, extract it and pin it to the release:
If your mirror host does not have internet access, run the following command:
$ oc adm release extract -a ${LOCAL_SECRET_JSON} --icsp-file=<file> \
    --command=openshift-install "${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}"

If the local container registry is connected to the mirror host, run the following command:
$ oc adm release extract -a ${LOCAL_SECRET_JSON} --command=openshift-install "${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE}"

Important
To ensure that you use the correct images for the version of OpenShift Container Platform that you selected, you must extract the installation program from the mirrored content.
You must perform this step on a machine with an active internet connection.
For clusters using installer-provisioned infrastructure, run the following command:
$ openshift-install
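A quick way to confirm that the extracted binary matches the pinned release (exact output varies by version):

$ ./openshift-install version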
3.3.6. The Cluster Samples Operator in a disconnected environment
In a disconnected environment, you must take additional steps after you install a cluster to configure the Cluster Samples Operator. Review the following information in preparation.
3.3.6.1. Cluster Samples Operator assistance for mirroring
During installation, OpenShift Container Platform creates a config map named imagestreamtag-to-image in the openshift-cluster-samples-operator namespace. The imagestreamtag-to-image config map contains an entry, the populating image, for each image stream tag.
The format of the key for each entry in the data field in the config map is <image_stream_name>_<image_stream_tag_name>.
During a disconnected installation of OpenShift Container Platform, the status of the Cluster Samples Operator is set to Removed. If you choose to change it to Managed, it installs samples.
The use of samples in a network-restricted or discontinued environment may require access to services external to your network. Some example services include: GitHub, Maven Central, npm, RubyGems, PyPI and others. There might be additional steps to take that allow the Cluster Samples Operator's objects to reach the services they require.
You can use this config map as a reference for which images need to be mirrored for your image streams to import.
- While the Cluster Samples Operator is set to Removed, you can create your mirrored registry, or determine which existing mirrored registry you want to use.
- Mirror the samples you want to the mirrored registry using the new config map as your guide.
- Add any of the image streams you did not mirror to the skippedImagestreams list of the Cluster Samples Operator configuration object.
- Set samplesRegistry of the Cluster Samples Operator configuration object to the mirrored registry.
- Then set the Cluster Samples Operator to Managed to install the image streams you have mirrored; a sketch of these last two steps as a single patch follows this list.
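The last two steps can be sketched as one patch command; the registry hostname is a placeholder, and cluster is the name of the Operator's singleton configuration object:

$ oc patch configs.samples.operator.openshift.io cluster --type merge \
    --patch '{"spec":{"samplesRegistry":"mirror.example.com:5000","managementState":"Managed"}}'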
3.3.7. Mirroring Operator catalogs for use with disconnected clusters
You can mirror the Operator contents of a Red Hat-provided catalog, or a custom catalog, into a container image registry using the oc adm catalog mirror command. The target registry must support Docker v2-2. For a cluster on a restricted network, this registry can be one that the cluster has network access to, such as a mirror registry created during a restricted network cluster installation.
- The OpenShift image registry cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process.
- Running oc adm catalog mirror might result in the following error: error: unable to retrieve source image. This error occurs when image indexes include references to images that no longer exist on the image registry. Image indexes might retain older references to allow users running those images an upgrade path to newer points on the upgrade graph. As a temporary workaround, you can use the --skip-missing option to bypass the error and continue downloading the image index. For more information, see Service Mesh Operator mirroring failed.
The oc adm catalog mirror command also automatically mirrors the index image that is specified during the mirroring process, whether it be a Red Hat-provided index image or your own custom-built index image, to the target registry. You can then use the mirrored index image to create a catalog source that allows Operator Lifecycle Manager (OLM) to load the mirrored catalog onto your OpenShift Container Platform cluster.
3.3.7.1. Prerequisites
Mirroring Operator catalogs for use with disconnected clusters has the following prerequisites:
- Workstation with unrestricted network access.
- podman version 1.9.3 or later.
- If you want to filter, or prune, the default catalog and selectively mirror only a subset of Operators, see the following sections:
- If you want to mirror a Red Hat-provided catalog, run the following command on your workstation with unrestricted network access to authenticate with registry.redhat.io:

$ podman login registry.redhat.io

- Access to a mirror registry that supports Docker v2-2.
- On your mirror registry, decide which repository, or namespace, to use for storing mirrored Operator content. For example, you might create an olm-mirror repository.
- If your mirror registry does not have internet access, connect removable media to your workstation with unrestricted network access.
- If you are working with private registries, including registry.redhat.io, set the REG_CREDS environment variable to the file path of your registry credentials for use in later steps. For example, for the podman CLI:

$ REG_CREDS=${XDG_RUNTIME_DIR}/containers/auth.json
3.3.7.2. Extracting and mirroring catalog contents
The oc adm catalog mirror command extracts the contents of an index image to generate the manifests required for mirroring. The default behavior of the command generates manifests, then automatically mirrors all of the image content from the index image, as well as the index image itself, to your mirror registry.
Alternatively, if your mirror registry is on a completely disconnected, or airgapped, host, you can first mirror the content to removable media, move the media to the disconnected environment, then mirror the content from the media to the registry.
3.3.7.2.1. Mirroring catalog contents to registries on the same network
If your mirror registry is co-located on the same network as your workstation with unrestricted network access, take the following actions on your workstation.
Procedure
If your mirror registry requires authentication, run the following command to log in to the registry:
$ podman login <mirror_registry>

Run the following command to extract and mirror the content to the mirror registry:
$ oc adm catalog mirror \
    <index_image> \
    <mirror_registry>:<port>[/<repository>] \
    [-a ${REG_CREDS}] \
    [--insecure] \
    [--index-filter-by-os='<platform>/<arch>'] \
    [--manifests-only]

where the arguments and options, in order, are:
1. Specify the index image for the catalog that you want to mirror. For example, this might be a pruned index image that you created previously, or one of the source index images for the default catalogs, such as registry.redhat.io/redhat/redhat-operator-index:v4.10.
2. Specify the fully qualified domain name (FQDN) for the target registry to mirror the Operator contents to. The mirror registry <repository> can be any existing repository, or namespace, on the registry, for example olm-mirror as outlined in the prerequisites. If there is an existing repository found during mirroring, the repository name is added to the resulting image name. If you do not want the image name to include the repository name, omit the <repository> value from this line, for example <mirror_registry>:<port>.
3. Optional: If required, specify the location of your registry credentials file. {REG_CREDS} is required for registry.redhat.io.
4. Optional: If you do not want to configure trust for the target registry, add the --insecure flag.
5. Optional: Specify which platform and architecture of the index image to select when multiple variants are available. Images are passed as '<platform>/<arch>[/<variant>]'. This does not apply to images referenced by the index. Valid values are linux/amd64, linux/ppc64le, linux/s390x, and .*.
6. Optional: Generate only the manifests required for mirroring, and do not actually mirror the image content to a registry. This option can be useful for reviewing what will be mirrored, and it allows you to make any changes to the mapping list if you require only a subset of packages. You can then use the mapping.txt file with the oc image mirror command to mirror the modified list of images in a later step. This flag is intended for only advanced selective mirroring of content from the catalog; the opm index prune command, if you used it previously to prune the index image, is suitable for most catalog management use cases.
Example output
src image has index label for database path: /database/index.db
using database path mapping: /database/index.db:/tmp/153048078
wrote database to /tmp/153048078
...
wrote mirroring manifests to manifests-redhat-operator-index-1614211642

Note
Red Hat Quay does not support nested repositories. As a result, running the oc adm catalog mirror command will fail with a 401 unauthorized error. As a workaround, you can use the --max-components=2 option when running the oc adm catalog mirror command to disable the creation of nested repositories. For more information on this workaround, see the Unauthorized error thrown while using catalog mirror command with Quay registry Knowledgebase Solution.
3.3.7.2.2. Mirroring catalog contents to airgapped registries
If your mirror registry is on a completely disconnected, or airgapped, host, take the following actions.
Procedure
Run the following command on your workstation with unrestricted network access to mirror the content to local files:
$ oc adm catalog mirror \
    <index_image> \
    file:///local/index \
    [-a ${REG_CREDS}] \
    [--insecure] \
    [--index-filter-by-os='<platform>/<arch>']

where the arguments and options, in order, are:
1. Specify the index image for the catalog that you want to mirror. For example, this might be a pruned index image that you created previously, or one of the source index images for the default catalogs, such as registry.redhat.io/redhat/redhat-operator-index:v4.10.
2. Specify the content to mirror to local files in your current directory.
3. Optional: If required, specify the location of your registry credentials file.
4. Optional: If you do not want to configure trust for the target registry, add the --insecure flag.
5. Optional: Specify which platform and architecture of the index image to select when multiple variants are available. Images are specified as '<platform>/<arch>[/<variant>]'. This does not apply to images referenced by the index. Valid values are linux/amd64, linux/ppc64le, linux/s390x, and .*.
This command creates a v2/ directory in your current directory.
- Copy the v2/ directory to removable media.
- Physically remove the media and attach it to a host in the disconnected environment that has access to the mirror registry.
If your mirror registry requires authentication, run the following command on your host in the disconnected environment to log in to the registry:
$ podman login <mirror_registry>

Run the following command from the parent directory containing the v2/ directory to upload the images from local files to the mirror registry:

$ oc adm catalog mirror \
    file://local/index/<repository>/<index_image>:<tag> \
    <mirror_registry>:<port>[/<repository>] \
    [-a ${REG_CREDS}] \
    [--insecure] \
    [--index-filter-by-os='<platform>/<arch>']

where the arguments and options, in order, are:
1. Specify the file:// path from the previous command output.
2. Specify the fully qualified domain name (FQDN) for the target registry to mirror the Operator contents to. The mirror registry <repository> can be any existing repository, or namespace, on the registry, for example olm-mirror as outlined in the prerequisites. If there is an existing repository found during mirroring, the repository name is added to the resulting image name. If you do not want the image name to include the repository name, omit the <repository> value from this line, for example <mirror_registry>:<port>.
3. Optional: If required, specify the location of your registry credentials file.
4. Optional: If you do not want to configure trust for the target registry, add the --insecure flag.
5. Optional: Specify which platform and architecture of the index image to select when multiple variants are available. Images are specified as '<platform>/<arch>[/<variant>]'. This does not apply to images referenced by the index. Valid values are linux/amd64, linux/ppc64le, linux/s390x, and .*.
Note
Red Hat Quay does not support nested repositories. As a result, running the oc adm catalog mirror command will fail with a 401 unauthorized error. As a workaround, you can use the --max-components=2 option when running the oc adm catalog mirror command to disable the creation of nested repositories. For more information on this workaround, see the Unauthorized error thrown while using catalog mirror command with Quay registry Knowledgebase Solution.

Run the oc adm catalog mirror command again. Use the newly mirrored index image as the source and the same mirror registry target used in the previous step:

$ oc adm catalog mirror \
    <mirror_registry>:<port>/<index_image> \
    <mirror_registry>:<port>[/<repository>] \
    --manifests-only \
    [-a ${REG_CREDS}] \
    [--insecure]

The --manifests-only flag is required for this step so that the command does not copy all of the mirrored content again.

Important
This step is required because the image mappings in the imageContentSourcePolicy.yaml file generated during the previous step must be updated from local paths to valid mirror locations. Failure to do so will cause errors when you create the ImageContentSourcePolicy object in a later step.
After you mirror the catalog, you can continue with the remainder of your cluster installation. After your cluster installation has finished successfully, you must specify the manifests directory from this procedure to create the ImageContentSourcePolicy and CatalogSource objects. These objects are required to enable installation of Operators from OperatorHub.
3.3.7.3. Generated manifests
After mirroring Operator catalog content to your mirror registry, a manifests directory is generated in your current directory.
If you mirrored content to a registry on the same network, the directory name takes the following pattern:
manifests-<index_image_name>-<random_number>
If you mirrored content to a registry on a disconnected host in the previous section, the directory name takes the following pattern:
manifests-index/<repository>/<index_image_name>-<random_number>
The manifests directory name is referenced in subsequent procedures.
The manifests directory contains the following files, some of which might require further modification:
- The catalogSource.yaml file is a basic definition for a CatalogSource object that is pre-populated with your index image tag and other relevant metadata. This file can be used as is or modified to add the catalog source to your cluster.

Important
If you mirrored the content to local files, you must modify your catalogSource.yaml file to remove any slash (/) characters from the metadata.name field. Otherwise, when you attempt to create the object, it fails with an "invalid resource name" error.

- The imageContentSourcePolicy.yaml file defines an ImageContentSourcePolicy object that can configure nodes to translate between the image references stored in Operator manifests and the mirrored registry.

Note
If your cluster uses an ImageContentSourcePolicy object to configure repository mirroring, you can use only global pull secrets for mirrored registries. You cannot add a pull secret to a project.

- The mapping.txt file contains all of the source images and where to map them in the target registry. This file is compatible with the oc image mirror command and can be used to further customize the mirroring configuration.

Important
If you used the --manifests-only flag during the mirroring process and want to further trim the subset of packages to mirror, see the steps in the Mirroring a package manifest format catalog image procedure of the OpenShift Container Platform 4.7 documentation about modifying your mapping.txt file and using the file with the oc image mirror command.
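For orientation, a generated catalogSource.yaml generally resembles the following; the name, namespace, and image reference shown here are illustrative:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-operator-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: mirror.example.com:5000/olm-mirror/redhat-operator-index:v4.10
  displayName: My Operator Catalog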
3.3.7.4. Post-installation requirements
After you mirror the catalog, you can continue with the remainder of your cluster installation. After your cluster installation has finished successfully, you must specify the manifests directory from this procedure to create the ImageContentSourcePolicy and CatalogSource objects. These objects are required to populate and enable installation of Operators from OperatorHub.
3.3.8. Next steps
- Install a cluster on infrastructure that you provision in your restricted network, such as on VMware vSphere, bare metal, or Amazon Web Services.
3.4. Mirroring images for a disconnected installation using the oc-mirror plugin
You can ensure your clusters only use container images that satisfy your organizational controls on external content. Before you install a cluster on infrastructure that you provision in a restricted network, you must mirror the required container images into that environment. To mirror container images, you must have a registry for mirroring.
You can use the oc-mirror OpenShift CLI (oc) plugin to mirror images to a mirror registry in your fully or partially disconnected environments.
Mirroring images for disconnected environments using the oc-mirror plugin is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The following steps outline the high-level workflow on how to use the oc-mirror plugin to mirror images to a mirror registry:
- Create an image set configuration file.
- Mirror the image set to the mirror registry.
- Configure your cluster to use the resources generated by the oc-mirror plugin.
- Repeat these steps to update your mirror registry as necessary.
3.4.1. About the oc-mirror plugin
You can use the oc-mirror OpenShift CLI (oc) plugin to mirror all required OpenShift Container Platform content and other images to your mirror registry by using a single tool. It provides the following features:
- Provides a centralized method to mirror OpenShift Container Platform releases, Operators, helm charts, and other images.
- Maintains update paths for OpenShift Container Platform and Operators.
- Uses a declarative image set configuration file to include only the OpenShift Container Platform releases, Operators, and images that your cluster needs.
- Performs incremental mirroring, which reduces the size of future image sets.
When using the oc-mirror plugin, you specify which content to mirror in an image set configuration file. In this YAML file, you can fine-tune the configuration to only include the OpenShift Container Platform releases and Operators that your cluster needs. This reduces the amount of data that you need to download and transfer. The oc-mirror plugin can also mirror arbitrary helm charts and additional container images to assist users in seamlessly synchronizing their workloads onto mirror registries.
The first time you run the oc-mirror plugin, it populates your mirror registry with the required content to perform your disconnected cluster installation. In order for your disconnected cluster to continue receiving updates, you must keep your mirror registry updated. To update your mirror registry, you run the oc-mirror plugin using the same configuration as the first time you ran it. The oc-mirror plugin references the metadata from the storage backend and only downloads what has been released since the last time you ran the tool. This provides update paths for OpenShift Container Platform and Operators and performs dependency resolution as required.
When using the oc-mirror CLI plugin to populate a mirror registry, any further updates to the mirror registry must be made using the oc-mirror tool.
3.4.2. About the mirror registry
You can mirror the images that are required for OpenShift Container Platform installation and subsequent product updates to a container mirror registry that supports Docker v2-2, such as Red Hat Quay. If you do not have access to a large-scale container registry, you can use the mirror registry for Red Hat OpenShift, which is a small-scale container registry included with OpenShift Container Platform subscriptions.
Regardless of your chosen registry, the procedure to mirror content from Red Hat hosted sites on the internet to an isolated image registry is the same. After you mirror the content, you configure each cluster to retrieve this content from your mirror registry.
The OpenShift image registry cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process.
If choosing a container registry that is not the mirror registry for Red Hat OpenShift, it must be reachable by every machine in the clusters that you provision. If the registry is unreachable, installation, updating, or normal operations such as workload relocation might fail. For that reason, you must run mirror registries in a highly available way, and the mirror registries must at least match the production availability of your OpenShift Container Platform clusters.
When you populate your mirror registry with OpenShift Container Platform images, you can follow two scenarios. If you have a host that can access both the internet and your mirror registry, but not your cluster nodes, you can directly mirror the content from that machine. This process is referred to as connected mirroring. If you have no such host, you must mirror the images to a file system and then bring that host or removable media into your restricted environment. This process is referred to as disconnected mirroring.
For mirrored registries, to view the source of pulled images, you must review the Trying to access log entry in the CRI-O logs. Other methods to view the image pull source, such as using the crictl images command on a node, show the non-mirrored image name, even though the image is pulled from the mirrored location.
Red Hat does not test third party registries with OpenShift Container Platform.
3.4.3. Prerequisites
You must have a container image registry that supports Docker v2-2 in the location that will host the OpenShift Container Platform cluster, such as Red Hat Quay.
Note
If you use Red Hat Quay, you must use version 3.6 or later with the oc-mirror plugin. If you have an entitlement to Red Hat Quay, see the documentation on deploying Red Hat Quay for proof-of-concept purposes or by using the Quay Operator. If you need additional assistance selecting and installing a registry, contact your sales representative or Red Hat Support.
If you do not already have an existing solution for a container image registry, subscribers of OpenShift Container Platform are provided a mirror registry for Red Hat OpenShift. The mirror registry for Red Hat OpenShift is included with your subscription and is a small-scale container registry that can be used to mirror the required container images of OpenShift Container Platform in disconnected installations.
3.4.4. Preparing your mirror hosts
Before you can use the oc-mirror plugin to mirror images, you must install the plugin and create a container image registry credentials file to allow the mirroring from Red Hat to your mirror.
3.4.4.1. Installing the oc-mirror OpenShift CLI plugin
To use the oc-mirror OpenShift CLI plugin to mirror registry images, you must install the plugin. If you are mirroring image sets in a fully disconnected environment, ensure that you install the oc-mirror plugin both on the host with internet access and on the host in the disconnected environment that has access to the mirror registry.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Download the oc-mirror CLI plugin.
- Navigate to the Downloads page of the OpenShift Cluster Manager.
- Under the OpenShift disconnected installation tools section, click Download for OpenShift Client (oc) mirror plugin and save the file.
Extract the archive:
$ tar xvzf oc-mirror.tar.gz

If necessary, update the plugin file to be executable:
$ chmod +x oc-mirror

Note
Do not rename the oc-mirror file.

Install the oc-mirror CLI plugin by placing the file in your PATH, for example, /usr/local/bin:

$ sudo mv oc-mirror /usr/local/bin/
Verification
Run oc mirror help to verify that the plugin was successfully installed:

$ oc mirror help
3.4.4.2. Configuring credentials that allow images to be mirrored
Create a container image registry credentials file that allows mirroring images from Red Hat to your mirror.
Do not use this image registry credentials file as the pull secret when you install a cluster. If you provide this file when you install a cluster, all of the machines in the cluster will have write access to your mirror registry.
This process requires that you have write access to a container image registry on the mirror registry and adds the credentials to a registry pull secret.
Prerequisites
- You configured a mirror registry to use in your disconnected environment.
- You identified an image repository location on your mirror registry to mirror images into.
- You provisioned a mirror registry account that allows images to be uploaded to that image repository.
Procedure
Complete the following steps on the installation host:
- Download your registry.redhat.io pull secret from the Red Hat OpenShift Cluster Manager.
- Make a copy of your pull secret in JSON format:

$ cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json>

Specify the path to the folder to store the pull secret in and a name for the JSON file that you create.
Save the file either as ~/.docker/config.json or $XDG_RUNTIME_DIR/containers/auth.json. The contents of the file resemble the example shown in Section 3.3.4, "Configuring credentials that allow images to be mirrored".

Generate the base64-encoded user name and password or token for your mirror registry:
$ echo -n '<user_name>:<password>' | base64 -w0
BGVtbYk3ZHAtqXs=

For <user_name> and <password>, specify the user name and password that you configured for your registry.
Edit the JSON file and add a section that describes your registry to it. The resulting file resembles the example shown in Section 3.3.4, "Configuring credentials that allow images to be mirrored".
3.4.5. Creating the image set configuration
Before you can use the oc-mirror plugin to mirror image sets, you must create an image set configuration file. This image set configuration file defines which OpenShift Container Platform releases, Operators, and other images to mirror, along with other configuration settings for the oc-mirror plugin.
You must specify a storage backend in the image set configuration file. This storage backend can be a local directory or a registry that supports Docker v2-2. The oc-mirror plugin stores metadata in this storage backend during image set creation.
Do not delete or modify the metadata that is generated by the oc-mirror plugin. You must use the same storage backend every time you run the oc-mirror plugin for the same mirror registry.
Procedure
Create an ImageSetConfiguration resource that specifies the necessary configuration details:

Example ImageSetConfiguration file
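A sketch consistent with the callouts that follow; the apiVersion and the exact nesting under mirror are assumptions based on the Technology Preview image set schema and may differ between plugin versions:

apiVersion: mirror.openshift.io/v1alpha1
kind: ImageSetConfiguration
archiveSize: 4                                                        # (1)
mirror:
  ocp:
    channels:
      - name: stable-4.9                                              # (2)
  operators:
    - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.9   # (3)
storageConfig:                                                        # (4)
  registry:
    imageURL: example.com/example/oc-mirror                           # (5)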
1. The maximum size, in GiB, of each file within the image set.
2. The channel to retrieve the OpenShift Container Platform images from.
3. The Operator catalog to retrieve the OpenShift Container Platform images from.
4. The back-end location to save the image set metadata to. This location can be a registry or local directory. It is required to specify storageConfig values.
5. The registry URL for the storage backend.

This example pulls images from the stable-4.9 channel for the registry.redhat.io/redhat/redhat-operator-index:v4.9 operator catalog and saves the image set metadata to the example.com/example/oc-mirror registry.

- Save the file as imageset-config.yaml. This file is required by the oc mirror command when mirroring content.
3.4.6. Mirroring an image set to a mirror registry
You can use the oc-mirror CLI plugin to mirror images to a mirror registry in a partially disconnected environment or in a fully disconnected environment.
These procedures assume that you already have your mirror registry set up.
3.4.6.1. Mirroring an image set in a partially disconnected environment
In a partially disconnected environment, you can mirror an image set directly to the target mirror registry.
3.4.6.1.1. Mirroring from mirror to mirror
You can use the oc-mirror plugin to mirror an image set directly to a target mirror registry that is accessible during image set creation.
Depending on the configuration specified in the image set configuration file, using oc-mirror to mirror images might download several hundred gigabytes of data to disk before mirroring to the destination mirror registry.
The initial image set download when you populate the mirror registry is often the largest. Because you only download the images that changed since the last time you ran the command, when you run the oc-mirror plugin again, the generated image set is often smaller.
You are required to specify a storage backend in the image set configuration file. This storage backend can be a local directory or a Docker v2 registry. The oc-mirror plugin stores metadata in this storage backend during image set creation.
Do not delete or modify the metadata that is generated by the oc-mirror plugin. You must use the same storage backend every time you run the oc-mirror plugin for the same mirror registry.
Prerequisites
- You have access to the internet to obtain the necessary container images.
- You have installed the OpenShift CLI (oc).
- You have installed the oc-mirror CLI plugin.
- You have created the image set configuration file.
Procedure
Run the oc mirror command to mirror the images from the specified image set configuration to a specified registry:

$ oc mirror --config=./imageset-config.yaml \
    docker://registry.example:5000

1. Pass in the image set configuration file that was created. This procedure assumes that it is named imageset-config.yaml.
2. Specify the registry to mirror the image set file to. The registry must start with docker://. If you specify a top-level namespace for the mirror registry, you must also use this same namespace on subsequent executions.
Verification
- Navigate into the oc-mirror-workspace/ directory that was generated.
- Navigate into the results directory, for example, results-1639608409/.
- Verify that YAML files are present for the ImageContentSourcePolicy and CatalogSource resources.
Next steps
- Configure your cluster to use the resources generated by oc-mirror.
3.4.6.2. Mirroring an image set in a fully disconnected environment
To mirror an image set in a fully disconnected environment, you must first mirror the image set to disk, then mirror the image set file on disk to a mirror.
3.4.6.2.1. Mirroring from mirror to disk
You can use the oc-mirror plugin to generate an image set and save the contents to disk. The generated image set can then be transferred to the disconnected environment and mirrored to the target registry.
Depending on the configuration specified in the image set configuration file, using oc-mirror to mirror images might download several hundred gigabytes of data to disk.
The initial image set download when you populate the mirror registry is often the largest. Because you only download the images that changed since the last time you ran the command, when you run the oc-mirror plugin again, the generated image set is often smaller.
You are required to specify a storage backend in the image set configuration file. This storage backend can be a local directory or a Docker v2 registry. The oc-mirror plugin stores metadata in this storage backend during image set creation.
Do not delete or modify the metadata that is generated by the oc-mirror plugin. You must use the same storage backend every time you run the oc-mirror plugin for the same mirror registry.
Prerequisites
- You have access to the internet to obtain the necessary container images.
- You have installed the OpenShift CLI (oc).
- You have installed the oc-mirror CLI plugin.
- You have created the image set configuration file.
Procedure
Run the oc mirror command to mirror the images from the specified image set configuration to disk:

$ oc mirror --config=./imageset-config.yaml \
    file://<path_to_output_directory>
Verification
Navigate to your output directory:
$ cd <path_to_output_directory>

Verify that an image set .tar file was created:

$ ls

Example output

mirror_seq1_000000.tar
Next steps
- Transfer the image set .tar file to the disconnected environment.
3.4.6.2.2. Mirroring from disk to mirror
You can use the oc-mirror plugin to mirror the contents of a generated image set to the target mirror registry.
Prerequisites
- You have installed the OpenShift CLI (oc) in the disconnected environment.
- You have installed the oc-mirror CLI plugin in the disconnected environment.
- You have generated the image set file by using the oc mirror command.
- You have transferred the image set file to the disconnected environment.
Procedure
Run the oc mirror command to process the image set file on disk and mirror the contents to a target mirror registry:

$ oc mirror --from=./mirror_seq1_000000.tar \
    docker://registry.example:5000

1. Pass in the image set .tar file to mirror, named mirror_seq1_000000.tar in this example. If an archiveSize value was specified in the image set configuration file, the image set might be broken up into multiple .tar files. In this situation, you can pass in a directory that contains the image set .tar files.
2. Specify the registry to mirror the image set file to. The registry must start with docker://. If you specify a top-level namespace for the mirror registry, you must also use this same namespace on subsequent executions.
This command updates the mirror registry with the image set and generates the ImageContentSourcePolicy and CatalogSource resources.
Verification
- Navigate into the oc-mirror-workspace/ directory that was generated.
- Navigate into the results directory, for example, results-1639608409/.
- Verify that YAML files are present for the ImageContentSourcePolicy and CatalogSource resources.
Next steps
- Configure your cluster to use the resources generated by oc-mirror.
3.4.7. Configuring your cluster to use the resources generated by oc-mirror
After you have mirrored your image set to the mirror registry, you must apply the generated ImageContentSourcePolicy, CatalogSource, and release image signature resources into the cluster.
The ImageContentSourcePolicy resource associates the mirror registry with the source registry and redirects image pull requests from the online registries to the mirror registry. The CatalogSource resource is used by Operator Lifecycle Manager (OLM) to retrieve information about the available Operators in the mirror registry. The release image signatures are used to verify the mirrored release images.
Prerequisites
- You have mirrored the image set to the registry mirror in the disconnected environment.
- You have access to the cluster as a user with the cluster-admin role.
Procedure
1. Log in to the OpenShift CLI as a user with the cluster-admin role.
2. Apply the YAML files from the results directory to the cluster by running the following command:
   $ oc apply -f ./oc-mirror-workspace/results-1639608409/
3. Apply the release image signatures to the cluster by running the following command:
   $ oc apply -f ./oc-mirror-workspace/results-1639608409/release-signatures/
Verification
1. Verify that the ImageContentSourcePolicy resources were successfully installed by running the following command:
   $ oc get imagecontentsourcepolicy --all-namespaces
2. Verify that the CatalogSource resources were successfully installed by running the following command:
   $ oc get catalogsource --all-namespaces
3.4.8. Updating your mirror registry
After you publish a full image set to the mirror registry, you can use the oc-mirror plugin to update the mirror registry with updated images.
When you run the oc-mirror plugin again, it generates an image set that only contains new and updated images since the previous execution.
You must use the same storage backend as the initial execution of oc-mirror for the same mirror registry. Do not delete or modify the metadata that is generated by the oc-mirror plugin.
Because it only pulls in the differences since the previous image set was created, the generated image set is often smaller and faster to process than the initial image set.
Generated image sets are sequential and must be synchronized to the target mirror registry in order.
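For example, if the first execution produced mirror_seq1_000000.tar, a subsequent differential execution against the same storage backend produces an archive with an incremented sequence number, such as mirror_seq2_000000.tar, which you must mirror to the target registry only after the first archive has been applied.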
Prerequisites
- You have used the oc-mirror plugin to mirror the initial image set to your mirror registry.
- You have access to the storage backend that was used for the initial execution of the oc-mirror plugin.
Procedure
Follow the same steps that you used to create the initial image set and mirror it to the mirror registry. For instructions, see Mirroring an image set in a partially disconnected environment or Mirroring an image set in a fully disconnected environment.
Important:
- You must provide the same storage backend so that only a differential image set is created and mirrored.
- If you specified a top-level namespace for the mirror registry during the initial image set creation, then you must use this same namespace every time you run the oc-mirror plugin for the same mirror registry.
- Configure your cluster to use the resources generated by oc-mirror.
3.4.9. Image set configuration parameters
The oc-mirror plugin requires an image set configuration file that defines what images to mirror. The following table lists the available parameters for the ImageSetConfiguration resource.
| Parameter | Description | Values |
|---|---|---|
| apiVersion | The API version for the ImageSetConfiguration content. | String. |
| archiveSize | The maximum size, in GiB, of each archive file within the image set. | Integer. |
| mirror | The configuration of the image set. | Object |
| mirror.additionalImages | The additional images configuration of the image set. | Array of objects. For example: additionalImages: - name: registry.redhat.io/ubi8/ubi:latest |
| mirror.additionalImages.name | The tag of the image to mirror. | String. For example: registry.redhat.io/ubi8/ubi:latest |
| mirror.helm | The helm configuration of the image set. Note that the oc-mirror plugin supports only helm charts that do not require user input when rendered. | Object |
| mirror.helm.local | The local helm charts to mirror. | Array of objects. For example: local: - name: podinfo path: /test/podinfo-5.0.0.tar.gz |
| mirror.helm.local.name | The name of the local helm chart to mirror. | String. For example: podinfo |
| mirror.helm.local.path | The path of the local helm chart to mirror. | String. For example: /test/podinfo-5.0.0.tar.gz |
| mirror.helm.repositories | The remote helm repositories to mirror from. | Array of objects. |
| mirror.helm.repositories.name | The name of the helm repository to mirror from. | String. |
| mirror.helm.repositories.url | The URL of the helm repository to mirror from. | String. |
| mirror.helm.repositories.charts | The remote helm charts to mirror. | Array of objects. |
| mirror.helm.repositories.charts.name | The name of the helm chart to mirror. | String. For example: podinfo |
| mirror.helm.repositories.charts.version | The version of the named helm chart to mirror. | String. For example: 5.0.0 |
| mirror.ocp | The platform configuration of the image set. | Object |
| mirror.ocp.channels | The platform channel configuration of the image set. | Array of objects. For example: channels: - name: stable-4.7 - name: stable-4.6 versions: - '4.6.36' |
| mirror.ocp.channels.name | The name of the release channel. | String. For example: stable-4.7 |
| mirror.ocp.channels.versions | The list of release versions within the named channel. | String. For example: '4.6.36' |
| mirror.operators | The Operators configuration of the image set. | Array of objects. |
| mirror.operators.catalog | The Operator catalog to include in the image set. | String. |
| mirror.operators.headsOnly | Toggles between downloading channel HEADs and full channels. | Boolean |
| mirror.operators.packages | The Operator packages configuration. | Array of objects. |
| mirror.operators.packages.name | The Operator package name to include in the image set. | String. |
| mirror.operators.packages.startingVersion | The starting version of the Operator package to mirror. All versions of the Operator are mirrored between the value of startingVersion and the channel head. | String. For example: 3.67.0 |
| mirror.operators.packages.channels | The Operator package channel configuration. | Object |
| mirror.operators.packages.channels.name | The Operator channel name, unique within a package, to include in the image set. | String. |
| mirror.operators.packages.channels.startingVersion | The starting version of the Operator channel to mirror. All versions of the Operator are mirrored between the value of startingVersion and the channel head. | String. |
| storageConfig | The back-end configuration of the image set. | Object |
| storageConfig.local | The local back-end configuration of the image set. | Object |
| storageConfig.local.path | The path of the directory to contain the image set metadata. | String. |
| storageConfig.registry | The registry back-end configuration of the image set. | Object |
| storageConfig.registry.imageURL | The back-end registry URI. Can optionally include a namespace reference in the URI. | String. |
| storageConfig.registry.skipTLS | Optionally skip TLS verification of the referenced back-end registry. | Boolean. The default value is false. |
3.4.10. Image set configuration examples
The following ImageSetConfiguration file examples show the configuration for various mirroring use cases.
Use case: Including arbitrary images and helm charts
The following ImageSetConfiguration file uses a registry storage backend and includes helm charts and an additional Red Hat Universal Base Image (UBI).
Example ImageSetConfiguration file
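A minimal sketch of such a configuration follows. The back-end registry location (example.com/mirror/oc-mirror-metadata) and the helm repository URL are hypothetical placeholders, and the apiVersion value might differ depending on your oc-mirror plugin version:
apiVersion: mirror.openshift.io/v1alpha2
kind: ImageSetConfiguration
storageConfig:
  registry:
    imageURL: example.com/mirror/oc-mirror-metadata   # hypothetical back-end registry
    skipTLS: false
mirror:
  additionalImages:
    - name: registry.redhat.io/ubi8/ubi:latest        # the additional UBI image
  helm:
    repositories:
      - name: podinfo                                 # hypothetical helm repository
        url: https://example.github.io/podinfo
        charts:
          - name: podinfo
            version: 5.0.0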
Use case: Including specific Operator versions
The following ImageSetConfiguration file uses a local storage backend and includes only the Red Hat Advanced Cluster Security for Kubernetes Operator, versions starting at 3.67.0 and later.
Example ImageSetConfiguration file
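A minimal sketch under the same caveats. The local metadata path is a hypothetical placeholder, and rhacs-operator is assumed to be the package name of the Red Hat Advanced Cluster Security for Kubernetes Operator:
apiVersion: mirror.openshift.io/v1alpha2
kind: ImageSetConfiguration
storageConfig:
  local:
    path: ./metadata                                  # hypothetical metadata directory
mirror:
  operators:
    - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10
      headsOnly: false                                # mirror full channels, not only channel HEADs
      packages:
        - name: rhacs-operator                        # assumed package name
          startingVersion: 3.67.0                     # mirror 3.67.0 and later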
3.4.11. Command reference for oc-mirror
The following tables describe the oc mirror subcommands and flags:
| Subcommand | Description |
|---|---|
| completion | Generate the autocompletion script for the specified shell. |
| describe | Output the contents of an image set. |
| help | Show help about any subcommand. |
| list | List available platform and Operator content and their version. |
| version | Output the oc-mirror version. |
| Flag | Description |
|---|---|
| --config <string> | Specify the path to an image set configuration file. |
| --continue-on-error | If any non image-pull related error occurs, continue and attempt to mirror as much as possible. |
| --dest-skip-tls | Disable TLS validation for the target registry. |
| --dest-use-http | Use plain HTTP for the target registry. |
| --dry-run | Print actions without mirroring images. |
| --from <string> | Specify the path to an image set archive that was generated by an execution of oc-mirror to load into a target registry. |
| -h, --help | Show the help. |
| --log-level <string> | Specify the number for the log level verbosity. |
| --manifests-only | Generate manifests for ImageContentSourcePolicy resources to configure a cluster to use the mirror registry, but do not actually mirror any images. |
| --skip-cleanup | Skip removal of artifact directories. |
| --skip-image-pin | Do not replace image tags with digest pins in Operator catalogs. |
| --skip-missing | If an image is not found, skip it instead of reporting an error and aborting execution. Does not apply to custom images explicitly specified in the image set configuration. |
| --skip-verification | Skip digest verification. |
| --source-skip-tls | Disable TLS validation for the source registry. |
| --source-use-http | Use plain HTTP for the source registry. |
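For example, the following invocation, with a hypothetical registry host, combines several of these flags to preview a mirror operation without transferring any images:
$ oc mirror --config=./imageset-config.yaml \
    docker://registry.example:5000 \
    --dry-run \
    --continue-on-error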
Chapter 4. Installing on Alibaba
4.1. Preparing to install on Alibaba Cloud
Alibaba Cloud on OpenShift Container Platform is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
4.1.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
4.1.2. Requirements for installing OpenShift Container Platform on Alibaba Cloud
Before installing OpenShift Container Platform on Alibaba Cloud, you must configure and register your domain, create a Resource Access Management (RAM) user for the installation, and review the supported Alibaba Cloud data center regions and zones for the installation.
4.1.3. Registering and configuring an Alibaba Cloud domain
To install OpenShift Container Platform, the Alibaba Cloud account you use must have a dedicated public hosted zone in your account. This zone must be authoritative for the domain. This service provides cluster DNS resolution and name lookup for external connections to the cluster.
Procedure
Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through Alibaba Cloud or another source.
Note: If you purchase a new domain through Alibaba Cloud, it takes time for the relevant DNS changes to propagate. For more information about purchasing domains through Alibaba Cloud, see Alibaba Cloud domains.
- If you are using an existing domain and registrar, migrate its DNS to Alibaba Cloud. See Domain name transfer in the Alibaba Cloud documentation.
Configure DNS for your domain. This includes:
- Registering a generic domain name.
- Completing real-name verification for your domain name.
- Applying for an Internet Content Provider (ICP) filing.
Enabling domain name resolution.
Use an appropriate root domain, such as openshiftcorp.com, or subdomain, such as clusters.openshiftcorp.com.
- If you are using a subdomain, follow the procedures of your company to add its delegation records to the parent domain.
4.1.4. Supported Alibaba regions
You can deploy an OpenShift Container Platform cluster to the regions listed in the Alibaba Regions and zones documentation.
4.1.5. Next steps
- Creating the required Alibaba Cloud resources
4.2. Creating the required Alibaba Cloud resources
Before you install OpenShift Container Platform, you must use the Alibaba Cloud console to create a Resource Access Management (RAM) user that has sufficient permissions to install OpenShift Container Platform into your Alibaba Cloud. This user must also have permissions to create new RAM users. You can also configure and use the ccoctl tool to create new credentials for the OpenShift Container Platform components with the permissions that they require.
Alibaba Cloud on OpenShift Container Platform is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
4.2.1. Creating the required RAM user
You must have an Alibaba Cloud Resource Access Management (RAM) user for the installation that has sufficient privileges. You can use the Alibaba Cloud Resource Access Management console to create a new user or modify an existing user. Later, you create credentials in OpenShift Container Platform based on this user's permissions.
When you configure the RAM user, be sure to consider the following requirements:
The user must have an Alibaba Cloud AccessKey ID and AccessKey secret pair.
- For a new user, you can select Open API Access for the Access Mode when creating the user. This mode generates the required AccessKey pair.
- For an existing user, you can add an AccessKey pair or you can obtain the AccessKey pair for that user.
Note: When created, the AccessKey secret is displayed only once. You must immediately save the AccessKey pair because the AccessKey pair is required for API calls.
Add the AccessKey ID and secret to the ~/.alibabacloud/credentials file on your local computer. Alibaba Cloud automatically creates this file when you log in to the console. The Cloud Credential Operator (CCO) utility, ccoctl, uses these credentials when processing CredentialsRequest objects.
For example:
[default]                             # Default client
type = access_key                     # Certification type: access_key
access_key_id = LTAI5t8cefXKmt        # Key 1
access_key_secret = wYx56mszAN4Uunfh  # Secret
1. Add your AccessKeyID and AccessKeySecret here.
The RAM user must have the AdministratorAccess policy to ensure that the account has sufficient permission to create the OpenShift Container Platform cluster. This policy grants permissions to manage all Alibaba Cloud resources.
When you attach the AdministratorAccess policy to a RAM user, you grant that user full access to all Alibaba Cloud services and resources. If you do not want to create a user with full access, create a custom policy with the following actions that you can add to your RAM user for installation. These actions are sufficient to install OpenShift Container Platform.
Tip: You can copy and paste the following JSON code into the Alibaba Cloud console to create a custom policy. For information on creating custom policies, see Create a custom policy in the Alibaba Cloud documentation.
Example 4.1. Example custom policy JSON file
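The full action list is long and is omitted here. Structurally, a custom Alibaba Cloud RAM policy is a JSON document of the following shape, where the Action entries shown are illustrative placeholders rather than the complete set required for installation:
{
  "Version": "1",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecs:DescribeInstances",
        "vpc:DescribeVpcs"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}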
For more information about creating a RAM user and granting permissions, see Create a RAM user and Grant permissions to a RAM user in the Alibaba Cloud documentation.
4.2.2. Configuring the Cloud Credential Operator utility
To assign RAM users and policies that provide long-lived RAM AccessKeys (AKs) for each in-cluster component, extract and prepare the Cloud Credential Operator (CCO) utility (ccoctl) binary.
The ccoctl utility is a Linux binary that must run in a Linux environment.
Prerequisites
- You have access to an OpenShift Container Platform account with cluster administrator access.
- You have installed the OpenShift CLI (oc).
Procedure
1. Obtain the OpenShift Container Platform release image:
   $ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
2. Get the CCO container image from the OpenShift Container Platform release image:
   $ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE)
   Note: Ensure that the architecture of the $RELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool.
3. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image:
   $ oc image extract $CCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret
4. Change the permissions to make ccoctl executable:
   $ chmod 775 ccoctl
Verification
To verify that ccoctl is ready to use, display the help file:
$ ccoctl --help
4.2.3. Next steps
Install a cluster on Alibaba Cloud infrastructure that is provisioned by the OpenShift Container Platform installation program, by using one of the following methods:
- Installing a cluster quickly on Alibaba Cloud: You can install a cluster quickly by using the default configuration options.
- Installing a customized cluster on Alibaba Cloud: The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation.
4.3. Installing a cluster quickly on Alibaba Cloud
In OpenShift Container Platform version 4.10, you can install a cluster on Alibaba Cloud that uses the default configuration options.
Alibaba Cloud on OpenShift Container Platform is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
4.3.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
- You registered your domain.
- If you use a firewall, you configured it to allow the sites that your cluster requires access to.
- You have created the required Alibaba Cloud resources.
- If the cloud Resource Access Management (RAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain Resource Access Management (RAM) credentials.
4.3.2. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.10, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
4.3.3. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
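For example, after installation you might connect to a node as follows; the node address 10.0.0.1 is a hypothetical placeholder:
$ ssh -i <path>/<file_name> core@10.0.0.1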
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging is required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
1. Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note: If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub public key:
$ cat ~/.ssh/id_ed25519.pub
./openshift-install gathercommand.NoteOn some distributions, default SSH private key identities such as
~/.ssh/id_rsaand~/.ssh/id_dsaare managed automatically.If the
ssh-agentprocess is not already running for your local user, start it as a background task:eval "$(ssh-agent -s)"
$ eval "$(ssh-agent -s)"Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Agent pid 31874
Agent pid 31874Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteIf your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name> 1
1. Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
4.3.4. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
- Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files.
Important: The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.
Important: Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz
- Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
4.3.5. Creating the installation configuration file
You can customize the OpenShift Container Platform cluster you install on Alibaba Cloud.
Prerequisites
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
- Obtain service principal permissions at the subscription level.
Procedure
1. Create the install-config.yaml file.
   Change to the directory that contains the installation program and run the following command:
   $ ./openshift-install create install-config --dir <installation_directory> 1
   1. For <installation_directory>, specify the directory name to store the files that the installation program creates.
   Important: Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
- Select alibabacloud as the platform to target.
- Select the region to deploy the cluster to.
- Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster.
- Provide a descriptive name for your cluster.
- Paste the pull secret from the Red Hat OpenShift Cluster Manager.
Installing the cluster into Alibaba Cloud requires that the Cloud Credential Operator (CCO) operate in manual mode. Modify the install-config.yaml file to set the credentialsMode parameter to Manual.
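An abbreviated sketch of the modified file follows; every value except the credentialsMode line is a placeholder:
apiVersion: v1
credentialsMode: Manual            # add this line to set the CCO to manual mode
baseDomain: example.com            # placeholder
metadata:
  name: test-cluster               # placeholder
platform:
  alibabacloud:
    region: us-east-1              # placeholder region
pullSecret: '{"auths": ...}'       # placeholder
sshKey: ssh-ed25519 AAAA...        # placeholder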
Back up the install-config.yaml file so that you can use it to install multiple clusters.
Important: The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
4.3.6. Generating the required installation manifests
You must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines.
Procedure
Generate the manifests by running the following command from the directory that contains the installation program:
$ openshift-install create manifests --dir <installation_directory>
where:
- <installation_directory> specifies the directory in which the installation program creates files.
4.3.7. Creating credentials for OpenShift Container Platform components with the ccoctl tool
You can use the OpenShift Container Platform Cloud Credential Operator (CCO) utility to automate the creation of Alibaba Cloud RAM users and policies for each in-cluster component.
By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory.
Prerequisites
You must have:
- Extracted and prepared the ccoctl binary.
- Created a RAM user with sufficient permission to create the OpenShift Container Platform cluster.
- Added the AccessKeyID (access_key_id) and AccessKeySecret (access_key_secret) of that RAM user into the ~/.alibabacloud/credentials file on your local computer.
Procedure
1. Set the $RELEASE_IMAGE variable by running the following command:
   $ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
2. Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command:
   $ oc adm release extract \
     --credentials-requests \
     --cloud=alibabacloud \
     --to=<path_to_directory_with_list_of_credentials_requests>/credrequests \ 1
     $RELEASE_IMAGE
   1. credrequests is the directory where the list of CredentialsRequest objects is stored. This command creates the directory if it does not exist.
   Note: This command can take a few moments to run.
If your cluster uses cluster capabilities to disable one or more optional components, delete the CredentialsRequest custom resources for any disabled components.
Example credrequests directory contents for OpenShift Container Platform 4.12 on Alibaba Cloud
0000_30_machine-api-operator_00_credentials-request.yaml
0000_50_cluster-image-registry-operator_01-registry-credentials-request-alibaba.yaml
0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml
0000_50_cluster-storage-operator_03_credentials_request_alibaba.yaml
ccoctltool to process allCredentialsRequestobjects in thecredrequestsdirectory:Run the following command to use the tool:
ccoctl alibabacloud create-ram-users \ --name <name> \ --region=<alibaba_region> \ --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests \ --output-dir=<path_to_ccoctl_output_dir>
$ ccoctl alibabacloud create-ram-users \ --name <name> \ --region=<alibaba_region> \ --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests \ --output-dir=<path_to_ccoctl_output_dir>Copy to Clipboard Copied! Toggle word wrap Toggle overflow where:
-
<name>is the name used to tag any cloud resources that are created for tracking. -
<alibaba_region>is the Alibaba Cloud region in which cloud resources will be created. -
<path_to_directory_with_list_of_credentials_requests>/credrequestsis the directory containing the files for the componentCredentialsRequestobjects. -
<path_to_ccoctl_output_dir>is the directory where the generated component credentials secrets will be placed.
NoteIf your cluster uses Technology Preview features that are enabled by the
TechPreviewNoUpgradefeature set, you must include the--enable-tech-previewparameter.Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteA RAM user can have up to two AccessKeys at the same time. If you run
ccoctl alibabacloud create-ram-usersmore than twice, the previous generated manifests secret becomes stale and you must reapply the newly generated secrets.-
4. Verify that the OpenShift Container Platform secrets are created:
   $ ls <path_to_ccoctl_output_dir>/manifests
   Example output
   openshift-cluster-csi-drivers-alibaba-disk-credentials-credentials.yaml
   openshift-image-registry-installer-cloud-credentials-credentials.yaml
   openshift-ingress-operator-cloud-credentials-credentials.yaml
   openshift-machine-api-alibabacloud-credentials-credentials.yaml
   You can verify that the RAM users and policies are created by querying Alibaba Cloud. For more information, refer to Alibaba Cloud documentation on listing RAM users and policies.
5. Copy the generated credential files to the target manifests directory:
   $ cp ./<path_to_ccoctl_output_dir>/manifests/*credentials.yaml ./<path_to_installation_dir>/manifests/
   where:
   - <path_to_ccoctl_output_dir> specifies the directory created by the ccoctl alibabacloud create-ram-users command.
   - <path_to_installation_dir> specifies the directory in which the installation program creates files.
4.3.8. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
- Configure an account with the cloud platform that hosts your cluster.
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ 1
    --log-level=info 2
1. For <installation_directory>, specify the location of your customized ./install-config.yaml file.
2. To view different installation details, specify warn, debug, or error instead of info.
Note: If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.
Note: The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.
Important:
- The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
Important: You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
4.3.9. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.10. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 Linux Client entry and save the file.
Unpack the archive:
$ tar xvf <file>
Place the oc binary in a directory that is on your PATH.
To check your PATH, execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:
C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 MacOSX Client entry and save the file.
- Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.
To check your PATH, open a terminal and execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
4.3.10. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the oc CLI.
Procedure
1. Export the kubeadmin credentials:
   $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
   1. For <installation_directory>, specify the path to the directory that you stored the installation files in.
2. Verify you can run oc commands successfully using the exported configuration:
   $ oc whoami
   Example output
   system:admin
4.3.11. Logging in to the cluster by using the web console
The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.
Prerequisites
- You have access to the installation host.
- You completed a cluster installation and all cluster Operators are available.
Procedure
1. Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:
   $ cat <installation_directory>/auth/kubeadmin-password
   Note: Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.
2. List the OpenShift Container Platform web console route:
   $ oc get routes -n openshift-console | grep 'console-openshift'
   Note: Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.
   Example output
   console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None
3. Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.
4.3.12. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
4.3.13. Next steps
- Validating an installation.
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.
4.4. Installing a cluster on Alibaba Cloud with customizations
In OpenShift Container Platform version 4.10, you can install a customized cluster on infrastructure that the installation program provisions on Alibaba Cloud. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.
The scope of the OpenShift Container Platform installation configurations is intentionally narrow. It is designed for simplicity and to ensure success. You can complete many more OpenShift Container Platform configuration tasks after an installation completes.
Alibaba Cloud on OpenShift Container Platform is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
4.4.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
- You registered your domain.
- If you use a firewall, you configured it to allow the sites that your cluster requires access to.
- If the cloud Resource Access Management (RAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain Resource Access Management (RAM) credentials.
4.4.2. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.10, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
4.4.3. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging is required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
1. Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note: If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub public key:
$ cat ~/.ssh/id_ed25519.pub
./openshift-install gathercommand.NoteOn some distributions, default SSH private key identities such as
~/.ssh/id_rsaand~/.ssh/id_dsaare managed automatically.If the
ssh-agentprocess is not already running for your local user, start it as a background task:eval "$(ssh-agent -s)"
$ eval "$(ssh-agent -s)"Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Agent pid 31874
Agent pid 31874Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteIf your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name> 1
1. Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
4.4.4. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
- Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files.
Important: The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.
Important: Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz
- Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
4.4.4.1. Creating the installation configuration file
You can customize the OpenShift Container Platform cluster you install on Alibaba Cloud.
Prerequisites
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
- Obtain service principal permissions at the subscription level.
Procedure
1. Create the install-config.yaml file.
   Change to the directory that contains the installation program and run the following command:
   $ ./openshift-install create install-config --dir <installation_directory> 1
   1. For <installation_directory>, specify the directory name to store the files that the installation program creates.
   Important: Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
- Select alibabacloud as the platform to target.
- Select the region to deploy the cluster to.
- Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster.
- Provide a descriptive name for your cluster.
- Paste the pull secret from the Red Hat OpenShift Cluster Manager.
Installing the cluster into Alibaba Cloud requires that the Cloud Credential Operator (CCO) operate in manual mode. Modify the install-config.yaml file to set the credentialsMode parameter to Manual.
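As in the quick-install procedure, an abbreviated sketch with placeholder values everywhere except the credentialsMode line:
apiVersion: v1
credentialsMode: Manual            # add this line to set the CCO to manual mode
baseDomain: example.com            # placeholder
metadata:
  name: test-cluster               # placeholder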
- Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.
- Back up the install-config.yaml file so that you can use it to install multiple clusters.
  Important: The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
4.4.4.2. Generating the required installation manifests
You must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines.
Procedure
Generate the manifests by running the following command from the directory that contains the installation program:
$ openshift-install create manifests --dir <installation_directory>
where:
- <installation_directory> specifies the directory in which the installation program creates files.
4.4.4.3. Creating credentials for OpenShift Container Platform components with the ccoctl tool
You can use the OpenShift Container Platform Cloud Credential Operator (CCO) utility to automate the creation of Alibaba Cloud RAM users and policies for each in-cluster component.
By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory.
Prerequisites
You must have:
- Extracted and prepared the ccoctl binary.
- Created a RAM user with sufficient permission to create the OpenShift Container Platform cluster.
- Added the AccessKeyID (access_key_id) and AccessKeySecret (access_key_secret) of that RAM user into the ~/.alibabacloud/credentials file on your local computer.
Procedure
Set the $RELEASE_IMAGE variable by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command:
$ oc adm release extract \
  --credentials-requests \
  --cloud=alibabacloud \
  --to=<path_to_directory_with_list_of_credentials_requests>/credrequests \ 1
  $RELEASE_IMAGE
- 1
- credrequests is the directory where the list of CredentialsRequest objects is stored. This command creates the directory if it does not exist.
Note
This command can take a few moments to run.
If your cluster uses cluster capabilities to disable one or more optional components, delete the CredentialsRequest custom resources for any disabled components.
Example credrequests directory contents for OpenShift Container Platform 4.10 on Alibaba Cloud
0000_30_machine-api-operator_00_credentials-request.yaml
0000_50_cluster-image-registry-operator_01-registry-credentials-request-alibaba.yaml
0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml
0000_50_cluster-storage-operator_03_credentials_request_alibaba.yaml
ccoctltool to process allCredentialsRequestobjects in thecredrequestsdirectory:Run the following command to use the tool:
ccoctl alibabacloud create-ram-users \ --name <name> \ --region=<alibaba_region> \ --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests \ --output-dir=<path_to_ccoctl_output_dir>
$ ccoctl alibabacloud create-ram-users \ --name <name> \ --region=<alibaba_region> \ --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests \ --output-dir=<path_to_ccoctl_output_dir>Copy to Clipboard Copied! Toggle word wrap Toggle overflow where:
- <name> is the name used to tag any cloud resources that are created for tracking.
- <alibaba_region> is the Alibaba Cloud region in which cloud resources will be created.
- <path_to_directory_with_list_of_credentials_requests>/credrequests is the directory containing the files for the component CredentialsRequest objects.
- <path_to_ccoctl_output_dir> is the directory where the generated component credentials secrets will be placed.
Note
If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter.
Note
A RAM user can have up to two AccessKeys at the same time. If you run ccoctl alibabacloud create-ram-users more than twice, the previously generated manifests become stale and you must reapply the newly generated secrets.
Verify that the OpenShift Container Platform secrets are created:
$ ls <path_to_ccoctl_output_dir>/manifests
Example output:
openshift-cluster-csi-drivers-alibaba-disk-credentials-credentials.yaml
openshift-image-registry-installer-cloud-credentials-credentials.yaml
openshift-ingress-operator-cloud-credentials-credentials.yaml
openshift-machine-api-alibabacloud-credentials-credentials.yaml
You can verify that the RAM users and policies are created by querying Alibaba Cloud. For more information, refer to Alibaba Cloud documentation on listing RAM users and policies.
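For example, if you have the Alibaba Cloud CLI installed and configured, you can list RAM users and policies with commands like the following; the invocations assume the aliyun CLI's RAM API names ListUsers and ListPolicies:
$ aliyun ram ListUsers
$ aliyun ram ListPolicies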
Copy the generated credential files to the target manifests directory:
$ cp ./<path_to_ccoctl_output_dir>/manifests/*credentials.yaml ./<path_to_installation_dir>/manifests/
where:
<path_to_ccoctl_output_dir>
- Specifies the directory created by the ccoctl alibabacloud create-ram-users command.
<path_to_installation_dir>
- Specifies the directory in which the installation program creates files.
4.4.4.4. Installation configuration parameters
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.
After installation, you cannot modify these parameters in the install-config.yaml file.
4.4.4.4.1. Required configuration parameters
Required installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| apiVersion | The API version for the install-config.yaml content. The current version is v1. The installation program may also support older API versions. | String |
| baseDomain | The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. | A fully-qualified domain or subdomain name, such as example.com. |
| metadata | Kubernetes resource ObjectMeta, from which only the name parameter is consumed. | Object |
| metadata.name | The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}. | String of lowercase letters, hyphens (-), and periods (.), such as dev. |
| platform | The configuration for the specific platform upon which to perform the installation: alibabacloud, aws, baremetal, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}. | Object |
| pullSecret | Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. | A pull secret in JSON format. |
4.4.4.4.2. Network configuration parameters
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.
Only IPv4 addresses are supported.
| Parameter | Description | Values |
|---|---|---|
| networking | The configuration for the cluster network. | Object. Note: You cannot modify parameters specified by the networking object after installation. |
| networking.networkType | The cluster network provider Container Network Interface (CNI) plugin to install. | Either OpenShiftSDN or OVNKubernetes. The default value is OVNKubernetes. |
| networking.clusterNetwork | The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 |
| networking.clusterNetwork.cidr | Required if you use networking.clusterNetwork. An IP address block. An IPv4 network. | An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32. |
| networking.clusterNetwork.hostPrefix | The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. | A subnet prefix. The default value is 23. |
| networking.serviceNetwork | The IP address block for services. The default value is 172.30.0.0/16. The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. | An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 |
| networking.machineNetwork | The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 |
| networking.machineNetwork.cidr | Required if you use networking.machineNetwork. An IP address block. | An IP network block in CIDR notation. For example, 10.0.0.0/16. Note: Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. |
4.4.4.4.3. Optional configuration parameters
Optional installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| additionalTrustBundle | A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. | String |
| cgroupsV2 | Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. | true |
| compute | The configuration for the machines that comprise the compute nodes. | Array of MachinePool objects. |
| compute.architecture | Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). | String |
| compute.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| compute.name | Required if you use compute. The name of the machine pool. | worker |
| compute.platform | Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. | alibabacloud, aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {} |
| compute.replicas | The number of compute machines, which are also known as worker machines, to provision. | A positive integer greater than or equal to 2. The default value is 3. |
| controlPlane | The configuration for the machines that comprise the control plane. | Array of MachinePool objects. |
| controlPlane.architecture | Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). | String |
| controlPlane.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| controlPlane.name | Required if you use controlPlane. The name of the machine pool. | master |
| controlPlane.platform | Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. | alibabacloud, aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {} |
| controlPlane.replicas | The number of control plane machines to provision. | The only supported value is 3, which is the default value. |
| credentialsMode | The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note: Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note: If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint, Passthrough, or Manual. | Mint, Passthrough, Manual, or an empty string (""). |
| fips | Enable or disable FIPS mode. The default is false (disabled). Important: To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note: If you are using Azure File storage, you cannot enable FIPS mode. | false or true |
| imageContentSources | Sources and repositories for the release-image content. | Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table. |
| imageContentSources.source | Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications. | String |
| imageContentSources.mirrors | Specify one or more repositories that may also contain the same images. | Array of strings |
| publish | How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. | Internal or External. The default value is External. Setting this field to Internal is not supported on non-cloud platforms. Important: If the value of the field is set to Internal, the cluster will become non-functional. |
| sshKey | The SSH key or keys to authenticate access to your cluster machines. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. | One or more keys. For example: sshKey: <key1> <key2> <key3> |
4.4.4.4.4. Additional Alibaba Cloud configuration parameters
Additional Alibaba Cloud configuration parameters are described in the following table. The alibabacloud parameters are the configuration used when installing on Alibaba Cloud. The defaultMachinePlatform parameters are the default configuration used when installing on Alibaba Cloud for machine pools that do not define their own platform configuration.
These parameters apply to both compute machines and control plane machines where specified.
If defined, the parameters compute.platform.alibabacloud and controlPlane.platform.alibabacloud will overwrite platform.alibabacloud.defaultMachinePlatform settings for compute machines and control plane machines respectively.
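For example, a machine pool entry that defines its own platform configuration takes precedence over defaultMachinePlatform. The following install-config.yaml fragment is a hedged sketch; the instance types are illustrative values:
platform:
  alibabacloud:
    defaultMachinePlatform:
      instanceType: ecs.g6.xlarge
      systemDiskCategory: cloud_essd
compute:
- name: worker
  platform:
    alibabacloud:
      instanceType: ecs.g6.large # overrides defaultMachinePlatform for compute machines only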
| Parameter | Description | Values |
|---|---|---|
| compute.platform.alibabacloud.imageID | The imageID used to create the ECS instance. ImageID must belong to the same region as the cluster. | String. |
| compute.platform.alibabacloud.instanceType | InstanceType defines the ECS instance type. Example: ecs.g6.large | String. |
| compute.platform.alibabacloud.systemDiskCategory | Defines the category of the system disk. Examples: cloud_efficiency, cloud_essd | String. |
| compute.platform.alibabacloud.systemDiskSize | Defines the size of the system disk in gibibytes (GiB). | Integer. |
| compute.platform.alibabacloud.zones | The list of availability zones that can be used. Examples: cn-hangzhou-h, cn-hangzhou-j | String list. |
| controlPlane.platform.alibabacloud.imageID | The imageID used to create the ECS instance. ImageID must belong to the same region as the cluster. | String. |
| controlPlane.platform.alibabacloud.instanceType | InstanceType defines the ECS instance type. Example: ecs.g6.xlarge | String. |
| controlPlane.platform.alibabacloud.systemDiskCategory | Defines the category of the system disk. Examples: cloud_efficiency, cloud_essd | String. |
| controlPlane.platform.alibabacloud.systemDiskSize | Defines the size of the system disk in gibibytes (GiB). | Integer. |
| controlPlane.platform.alibabacloud.zones | The list of availability zones that can be used. Examples: cn-hangzhou-h, cn-hangzhou-j | String list. |
| platform.alibabacloud.region | Required. The Alibaba Cloud region where the cluster will be created. | String. |
| platform.alibabacloud.resourceGroupID | The ID of an already existing resource group where the cluster will be installed. If empty, the installer will create a new resource group for the cluster. | String. |
| platform.alibabacloud.tags | Additional keys and values to apply to all Alibaba Cloud resources created for the cluster. | Object. |
| platform.alibabacloud.vpcID | The ID of an already existing VPC where the cluster should be installed. If empty, the installer will create a new VPC for the cluster. | String. |
| platform.alibabacloud.vswitchIDs | The ID list of already existing VSwitches where cluster resources will be created. The existing VSwitches can only be used when also using existing VPC. If empty, the installer will create new VSwitches for the cluster. | String list. |
| platform.alibabacloud.defaultMachinePlatform.imageID | For both compute machines and control plane machines, the image ID that should be used to create ECS instance. If set, the image ID should belong to the same region as the cluster. | String. |
| platform.alibabacloud.defaultMachinePlatform.instanceType | For both compute machines and control plane machines, the ECS instance type used to create the ECS instance. Example: ecs.g6.xlarge | String. |
| platform.alibabacloud.defaultMachinePlatform.systemDiskCategory | For both compute machines and control plane machines, the category of the system disk. Examples: cloud_efficiency, cloud_essd. | String, for example "", cloud_efficiency, cloud_essd. |
| platform.alibabacloud.defaultMachinePlatform.systemDiskSize | For both compute machines and control plane machines, the size of the system disk in gibibytes (GiB). The minimum is 120. | Integer. |
| platform.alibabacloud.defaultMachinePlatform.zones | For both compute machines and control plane machines, the list of availability zones that can be used. Examples: cn-hangzhou-h, cn-hangzhou-j | String list. |
| platform.alibabacloud.privateZoneID | The ID of an existing private zone into which to add DNS records for the cluster's internal API. An existing private zone can only be used when also using existing VPC. The private zone must be associated with the VPC containing the subnets. Leave the private zone unset to have the installer create the private zone on your behalf. | String. |
4.4.4.5. Sample customized install-config.yaml file for Alibaba Cloud
You can customize the installation configuration file (install-config.yaml) to specify more details about your cluster’s platform or modify the values of the required parameters.
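The sample file itself is not reproduced in this extract. The following sketch shows the expected shape of a customized install-config.yaml for Alibaba Cloud; the domain, IDs, and instance values are illustrative placeholders, and the numbered callouts are explained below:
apiVersion: v1
baseDomain: alicloud-dev.devcluster.openshift.com
credentialsMode: Manual
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform: {}
  replicas: 3
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  platform: {}
  replicas: 3
metadata:
  name: test-cluster 1
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes 2
  serviceNetwork:
  - 172.30.0.0/16
platform:
  alibabacloud:
    defaultMachinePlatform: 3
      instanceType: ecs.g6.xlarge
      systemDiskCategory: cloud_essd
      systemDiskSize: 200
    region: ap-southeast-1 4
    resourceGroupID: rg-acfnw6j3hyai 5
    vswitchIDs: 6
    - vsw-0xi8ycgwc8wv5rhviwdq5
    - vsw-0xiy6v3z2tedv009b4pz2
publish: External
pullSecret: '{"auths": {"cloud.openshift.com": {"auth": ...}}}' 7
sshKey: |
  ssh-rsa AAAA... 8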
- 1
- Required. The installation program prompts you for a cluster name.
- 2
- The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN. The default value is OVNKubernetes.
- 3
- Optional. Specify parameters for machine pools that do not define their own platform configuration.
- 4
- Required. The installation program prompts you for the region to deploy the cluster to.
- 5
- Optional. Specify an existing resource group where the cluster should be installed.
- 6
- Optional. These are example vswitchID values.
- 7
- Required. The installation program prompts you for the pull secret.
- 8
- Optional. The installation program prompts you for the SSH key value that you use to access the machines in your cluster.
4.4.4.6. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Prerequisites
- You have an existing install-config.yaml file.
- You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.
Note
The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:
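The example block did not survive extraction; a representative sketch of the proxy stanza, with placeholder values, follows:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
...
- 1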
- A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
- 2
- A proxy URL to use for creating HTTPS connections outside the cluster.
- 3
- A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
- 4
- If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
Note
The installation program does not support the proxy readinessEndpoints field.
Note
If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:
$ ./openshift-install wait-for install-complete --log-level debug
- Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
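After installation, you can review the resulting configuration; assuming the oc CLI is logged in to the cluster:
$ oc get proxy/cluster -o yaml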
4.4.5. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
- Configure an account with the cloud platform that hosts your cluster.
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \
  --log-level=info
Note
If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.
Note
The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.
Important
- The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
Important
You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
4.4.6. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.10. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 Linux Client entry and save the file.
Unpack the archive:
$ tar xvf <file>
Place the oc binary in a directory that is on your PATH.
To check your PATH, execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:
C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 MacOSX Client entry and save the file.
- Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.
To check your PATH, open a terminal and execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
4.4.7. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the oc CLI.
Procedure
Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
- 1
- For <installation_directory>, specify the path to the directory that you stored the installation files in.
Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
Example output
system:admin
4.4.8. Logging in to the cluster by using the web console
The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.
Prerequisites
- You have access to the installation host.
- You completed a cluster installation and all cluster Operators are available.
Procedure
Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:
$ cat <installation_directory>/auth/kubeadmin-password
Note
Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.
List the OpenShift Container Platform web console route:
$ oc get routes -n openshift-console | grep 'console-openshift'
Note
Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.
Example output
console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None
- Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.
4.4.9. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
4.4.10. Next steps
- Validating an installation.
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.
4.5. Installing a cluster on Alibaba Cloud with network customizations
In OpenShift Container Platform 4.10, you can install a cluster on Alibaba Cloud with customized network configuration options. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations.
You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster.
Alibaba Cloud on OpenShift Container Platform is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
4.5.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
- You registered your domain.
- If you use a firewall, you configured it to allow the sites that your cluster requires access to.
- If the cloud Resource Access Management (RAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain Resource Access Management (RAM) credentials.
4.5.2. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.10, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
4.5.3. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging is required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
- 1
- Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note
If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub public key:
$ cat ~/.ssh/id_ed25519.pub
./openshift-install gathercommand.NoteOn some distributions, default SSH private key identities such as
~/.ssh/id_rsaand~/.ssh/id_dsaare managed automatically.If the
ssh-agentprocess is not already running for your local user, start it as a background task:eval "$(ssh-agent -s)"
$ eval "$(ssh-agent -s)"Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Agent pid 31874
Note
If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name> 1
- 1
- Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
4.5.4. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
- Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files.
Important
The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.
Important
Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz
- Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
4.5.5. Network configuration phases
There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration.
- Phase 1
You can customize the following network-related fields in the install-config.yaml file before you create the manifest files:
- networking.networkType
- networking.clusterNetwork
- networking.serviceNetwork
- networking.machineNetwork
For more information on these fields, refer to Installation configuration parameters.
Note
Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.
Important
The CIDR range 172.17.0.0/16 is reserved by libVirt. You cannot use this range or any range that overlaps with this range for any networks in your cluster.
- Phase 2
After creating the manifest files by running openshift-install create manifests, you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify advanced network configuration.
You cannot override the values specified in phase 1 in the install-config.yaml file during phase 2. However, you can further customize the cluster network provider during phase 2.
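For example, a phase 2 customization is typically saved as a manifest such as <installation_directory>/manifests/cluster-network-03-config.yml; a minimal sketch that sets only the cluster network provider type:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes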
4.5.5.1. Creating the installation configuration file
You can customize the OpenShift Container Platform cluster you install on Alibaba Cloud.
Prerequisites
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
- Obtain service principal permissions at the subscription level.
Procedure
Create the install-config.yaml file.
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir <installation_directory> 1
- 1
- For <installation_directory>, specify the directory name to store the files that the installation program creates.
Important
Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
Note
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
- Enter a descriptive name for your cluster.
- Paste the pull secret from the Red Hat OpenShift Cluster Manager.
- Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.
- Back up the install-config.yaml file so that you can use it to install multiple clusters.
Important
The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
4.5.5.2. Generating the required installation manifests
You must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines.
Procedure
Generate the manifests by running the following command from the directory that contains the installation program:
$ openshift-install create manifests --dir <installation_directory>
where:
<installation_directory>
- Specifies the directory in which the installation program creates files.
Creating credentials for OpenShift Container Platform components with the ccoctl tool
You can use the OpenShift Container Platform Cloud Credential Operator (CCO) utility to automate the creation of Alibaba Cloud RAM users and policies for each in-cluster component.
By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory.
Prerequisites
You must have:
- Extracted and prepared the ccoctl binary.
Procedure
Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command (the command body was lost in extraction; this matches the equivalent step in the earlier Alibaba Cloud section):
$ oc adm release extract \
  --credentials-requests \
  --cloud=alibabacloud \
  --to=<path_to_directory_with_list_of_credentials_requests>/credrequests \ 1
  $RELEASE_IMAGE
- 1
- credrequests is the directory where the list of CredentialsRequest objects is stored. This command creates the directory if it does not exist.
Note
This command can take a few moments to run.
4.5.5.3. Installation configuration parameters
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.
After installation, you cannot modify these parameters in the install-config.yaml file.
4.5.5.3.1. Required configuration parameters
Required installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| apiVersion | The API version for the install-config.yaml content. The current version is v1. The installation program may also support older API versions. | String |
| baseDomain | The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. | A fully-qualified domain or subdomain name, such as example.com. |
| metadata | Kubernetes resource ObjectMeta, from which only the name parameter is consumed. | Object |
| metadata.name | The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}. | String of lowercase letters, hyphens (-), and periods (.), such as dev. |
| platform | The configuration for the specific platform upon which to perform the installation: alibabacloud, aws, baremetal, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}. | Object |
| pullSecret | Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. | A pull secret in JSON format. |
4.5.5.3.2. Network configuration parameters
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.
Only IPv4 addresses are supported.
| Parameter | Description | Values |
|---|---|---|
| networking | The configuration for the cluster network. | Object. Note: You cannot modify parameters specified by the networking object after installation. |
| networking.networkType | The cluster network provider Container Network Interface (CNI) plugin to install. | Either OpenShiftSDN or OVNKubernetes. The default value is OVNKubernetes. |
| networking.clusterNetwork | The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 |
| networking.clusterNetwork.cidr | Required if you use networking.clusterNetwork. An IP address block. An IPv4 network. | An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32. |
| networking.clusterNetwork.hostPrefix | The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. | A subnet prefix. The default value is 23. |
| networking.serviceNetwork | The IP address block for services. The default value is 172.30.0.0/16. The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. | An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 |
| networking.machineNetwork | The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 |
| networking.machineNetwork.cidr | Required if you use networking.machineNetwork. An IP address block. | An IP network block in CIDR notation. For example, 10.0.0.0/16. Note: Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. |
4.5.5.3.3. Optional configuration parameters
Optional installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| additionalTrustBundle | A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. | String |
| cgroupsV2 | Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. | true |
| compute | The configuration for the machines that comprise the compute nodes. | Array of MachinePool objects. |
| compute.architecture | Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). | String |
| compute.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| compute.name | Required if you use compute. The name of the machine pool. | worker |
| compute.platform | Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. | alibabacloud, aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {} |
| compute.replicas | The number of compute machines, which are also known as worker machines, to provision. | A positive integer greater than or equal to 2. The default value is 3. |
| controlPlane | The configuration for the machines that comprise the control plane. | Array of MachinePool objects. |
| controlPlane.architecture | Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). | String |
| controlPlane.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| controlPlane.name | Required if you use controlPlane. The name of the machine pool. | master |
| controlPlane.platform | Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. | alibabacloud, aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {} |
| controlPlane.replicas | The number of control plane machines to provision. | The only supported value is 3, which is the default value. |
| credentialsMode | The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note: Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note: If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint, Passthrough, or Manual. | Mint, Passthrough, Manual, or an empty string (""). |
| fips | Enable or disable FIPS mode. The default is false (disabled). Important: To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note: If you are using Azure File storage, you cannot enable FIPS mode. | false or true |
| imageContentSources | Sources and repositories for the release-image content. | Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table. |
| imageContentSources.source | Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications. | String |
| imageContentSources.mirrors | Specify one or more repositories that may also contain the same images. | Array of strings |
| publish | How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. | Internal or External. The default value is External. Setting this field to Internal is not supported on non-cloud platforms. Important: If the value of the field is set to Internal, the cluster will become non-functional. |
| sshKey | The SSH key or keys to authenticate access to your cluster machines. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. | One or more keys. For example: sshKey: <key1> <key2> <key3> |
4.5.5.4. Sample customized install-config.yaml file for Alibaba Cloud
You can customize the installation configuration file (install-config.yaml) to specify more details about your cluster’s platform or modify the values of the required parameters.
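The sample file has the same shape as the one in the earlier Alibaba Cloud section; a condensed sketch is repeated here, with illustrative placeholder values, so that the callouts below resolve:
apiVersion: v1
baseDomain: alicloud-dev.devcluster.openshift.com
credentialsMode: Manual
metadata:
  name: test-cluster 1
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes 2
  serviceNetwork:
  - 172.30.0.0/16
platform:
  alibabacloud:
    defaultMachinePlatform: 3
      instanceType: ecs.g6.xlarge
    region: ap-southeast-1 4
    resourceGroupID: rg-acfnw6j3hyai 5
    vswitchIDs: 6
    - vsw-0xi8ycgwc8wv5rhviwdq5
    - vsw-0xiy6v3z2tedv009b4pz2
pullSecret: '{"auths": ...}' 7
sshKey: ssh-rsa AAAA... 8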
- 1
- Required. The installation program prompts you for a cluster name.
- 2
- The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN. The default value is OVNKubernetes.
- 3
- Optional. Specify parameters for machine pools that do not define their own platform configuration.
- 4
- Required. The installation program prompts you for the region to deploy the cluster to.
- 5
- Optional. Specify an existing resource group where the cluster should be installed.
- 6
- Optional. These are example vswitchID values.
- 7
- Required. The installation program prompts you for the pull secret.
- 8
- Optional. The installation program prompts you for the SSH key value that you use to access the machines in your cluster.
4.5.5.5. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Prerequisites
- You have an existing install-config.yaml file.
- You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.
Note
The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:
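The example block did not survive extraction; a representative sketch of the proxy stanza, with placeholder values, follows:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
...
- 1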
- A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
- 2
- A proxy URL to use for creating HTTPS connections outside the cluster.
- 3
- A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
- 4
- If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
Note
The installation program does not support the proxy readinessEndpoints field.
Note
If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:
$ ./openshift-install wait-for install-complete --log-level debug
- Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
4.5.6. Cluster Network Operator configuration
The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster. The CR specifies the fields for the Network API in the operator.openshift.io API group.
The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed:
clusterNetwork
- IP address pools from which pod IP addresses are allocated.
serviceNetwork
- IP address pool for services.
defaultNetwork.type
- Cluster network provider, such as OpenShift SDN or OVN-Kubernetes.
You can specify the cluster network provider configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster.
4.5.6.1. Cluster Network Operator configuration object
The fields for the Cluster Network Operator (CNO) are described in the following table:
| Field | Type | Description |
|---|---|---|
| metadata.name | string | The name of the CNO object. This name is always cluster. |
| spec.clusterNetwork | array | A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. |
| spec.serviceNetwork | array | A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes Container Network Interface (CNI) network providers support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14. You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. |
| spec.defaultNetwork | object | Configures the Container Network Interface (CNI) cluster network provider for the cluster network. |
| spec.kubeProxyConfig | object | The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network provider, the kube-proxy configuration has no effect. |
defaultNetwork object configuration
The values for the defaultNetwork object are defined in the following table:
| Field | Type | Description |
|---|---|---|
| `type` | `string` | Either `OpenShiftSDN` or `OVNKubernetes`. The cluster network provider is selected during installation. This value cannot be changed after cluster installation. Note: OpenShift Container Platform uses the OpenShift SDN Container Network Interface (CNI) cluster network provider by default. |
| `openshiftSDNConfig` | `object` | This object is only valid for the OpenShift SDN cluster network provider. |
| `ovnKubernetesConfig` | `object` | This object is only valid for the OVN-Kubernetes cluster network provider. |
Configuration for the OpenShift SDN CNI cluster network provider
The following table describes the configuration fields for the OpenShift SDN Container Network Interface (CNI) cluster network provider.
| Field | Type | Description |
|---|---|---|
| `mode` | `string` | Configures the network isolation mode for OpenShift SDN. The default value is `NetworkPolicy`. The values `Multitenant` and `Subnet` are available for backwards compatibility with OpenShift Container Platform 3.x, but are not recommended. This value cannot be changed after cluster installation. |
| `mtu` | `integer` | The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001, and some have an MTU of 1500, you must set this value to 1450. This value cannot be changed after cluster installation. |
| `vxlanPort` | `integer` | The port to use for all VXLAN packets. The default value is `4789`. This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port `9000` and port `9999`. |
Example OpenShift SDN configuration
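A representative stanza, assuming the default values described in the table above and an auto-detected MTU of 1450:

defaultNetwork:
  type: OpenShiftSDN
  openshiftSDNConfig:
    mode: NetworkPolicy
    mtu: 1450
    vxlanPort: 4789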
Configuration for the OVN-Kubernetes CNI cluster network provider
The following table describes the configuration fields for the OVN-Kubernetes CNI cluster network provider.
| Field | Type | Description |
|---|---|---|
| `mtu` | `integer` | The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001, and some have an MTU of 1500, you must set this value to 1400. This value cannot be changed after cluster installation. |
| `genevePort` | `integer` | The port to use for all Geneve packets. The default value is `6081`. This value cannot be changed after cluster installation. |
| `ipsecConfig` | `object` | Specify an empty object to enable IPsec encryption. This value cannot be changed after cluster installation. |
| `policyAuditConfig` | `object` | Specify a configuration object for customizing network policy audit logging. If unset, the default audit log settings are used. |
| `gatewayConfig` | `object` | Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note: While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. |
| Field | Type | Description |
|---|---|---|
| `rateLimit` | integer | The maximum number of messages to generate every second per node. The default value is `20` messages per second. |
| `maxFileSize` | integer | The maximum size for the audit log in bytes. The default value is `50000000`, or 50 MB. |
| `destination` | string | One of the following additional audit log targets: `libc` (the libc `syslog()` function of the journald process on the host), `udp:<host>:<port>` (a syslog server), `unix:<file>` (a Unix Domain Socket file), or `null` (do not send the audit logs to any additional target). |
| `syslogFacility` | string | The syslog facility, such as `kern`, as defined by RFC5424. The default value is `local0`. |
| Field | Type | Description |
|---|---|---|
| `routingViaHost` | `boolean` | Set this field to `true` to send egress traffic from pods to the host networking stack. For highly specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. The default value is `false`. This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to `true`, you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. |
Example OVN-Kubernetes configuration with IPSec enabled
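A representative stanza, assuming the default Geneve port, an MTU of 1400, and an empty `ipsecConfig` object to enable IPsec:

defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    mtu: 1400
    genevePort: 6081
    ipsecConfig: {}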
kubeProxyConfig object configuration
The values for the kubeProxyConfig object are defined in the following table:
| Field | Type | Description |
|---|---|---|
| `iptablesSyncPeriod` | `string` | The refresh period for `iptables` rules. The default value is `30s`. Valid suffixes include `s`, `m`, and `h`, and are described in the Go time package documentation. Note: Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the `iptablesSyncPeriod` parameter is no longer necessary. |
| `proxyArguments.iptables-min-sync-period` | `array` | The minimum duration before refreshing `iptables` rules. This field ensures that the refresh does not happen too frequently. The default value is: `kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s` |
4.5.7. Specifying advanced network configuration
You can use advanced network configuration for your cluster network provider to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster.
Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported.
Prerequisites
- You have created the `install-config.yaml` file and completed any modifications to it.
Procedure
Change to the directory that contains the installation program and create the manifests:

$ ./openshift-install create manifests --dir <installation_directory>

`<installation_directory>` specifies the name of the directory that contains the `install-config.yaml` file for your cluster.
Create a stub manifest file for the advanced network configuration that is named `cluster-network-03-config.yml` in the `<installation_directory>/manifests/` directory:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:

Specify the advanced network configuration for your cluster in the `cluster-network-03-config.yml` file, such as in the following examples:

Specify a different VXLAN port for the OpenShift SDN network provider
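A sketch of the full manifest; the port value of 4800 is illustrative:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    openshiftSDNConfig:
      vxlanPort: 4800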
Enable IPsec for the OVN-Kubernetes network provider
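A sketch that enables IPsec by specifying an empty `ipsecConfig` object:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      ipsecConfig: {}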
- Optional: Back up the `manifests/cluster-network-03-config.yml` file. The installation program consumes the `manifests/` directory when you create the Ignition config files.
4.5.8. Configuring hybrid networking with OVN-Kubernetes
You can configure your cluster to use hybrid networking with OVN-Kubernetes. This allows a hybrid cluster that supports different node networking configurations. For example, this is necessary to run both Linux and Windows nodes in a cluster.
You must configure hybrid networking with OVN-Kubernetes during the installation of your cluster. You cannot switch to hybrid networking after the installation process.
Prerequisites
- You defined `OVNKubernetes` for the `networking.networkType` parameter in the `install-config.yaml` file. See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information.
Procedure
Change to the directory that contains the installation program and create the manifests:

$ ./openshift-install create manifests --dir <installation_directory>

where:

`<installation_directory>`
- Specifies the name of the directory that contains the `install-config.yaml` file for your cluster.
Create a stub manifest file for the advanced network configuration that is named `cluster-network-03-config.yml` in the `<installation_directory>/manifests/` directory:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:

where:

`<installation_directory>`
- Specifies the directory name that contains the `manifests/` directory for your cluster.
Open the `cluster-network-03-config.yml` file in an editor and configure OVN-Kubernetes with hybrid networking, such as in the following example:

Specify a hybrid networking configuration
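A sketch of the hybrid networking manifest; the CIDR and port values are illustrative:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      hybridOverlayConfig:
        hybridClusterNetwork: 1
        - cidr: 10.132.0.0/14
          hostPrefix: 23
        hybridOverlayVXLANPort: 9898 2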
- 1
- Specify the CIDR configuration used for nodes on the additional overlay network. The `hybridClusterNetwork` CIDR cannot overlap with the `clusterNetwork` CIDR.
- 2
- Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default `4789` port. For more information on this requirement, see the Microsoft documentation on Pod-to-pod connectivity between hosts is broken.
Note: Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 is not supported on clusters with a custom `hybridOverlayVXLANPort` value because this Windows server version does not support selecting a custom VXLAN port.

- Save the `cluster-network-03-config.yml` file and quit the text editor.
- Optional: Back up the `manifests/cluster-network-03-config.yml` file. The installation program deletes the `manifests/` directory when creating the cluster.
4.5.9. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
- Configure an account with the cloud platform that hosts your cluster.
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:

$ ./openshift-install create cluster --dir <installation_directory> \
    --log-level=info

For `<installation_directory>`, specify the location of your customized `install-config.yaml` file. To view different installation details, specify `warn`, `debug`, or `error` instead of `info`.

Note: If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the `kubeadmin` user, display in your terminal.

Note: The cluster access and credential information also outputs to `<installation_directory>/.openshift_install.log` when an installation succeeds.

Important:
- The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending `node-bootstrapper` certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
Important: You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
4.5.10. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.10. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 Linux Client entry and save the file.
Unpack the archive:
$ tar xvf <file>
ocbinary in a directory that is on yourPATH.To check your
PATH, execute the following command:echo $PATH
$ echo $PATHCopy to Clipboard Copied! Toggle word wrap Toggle overflow
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the `oc` binary to a directory that is on your `PATH`.

To check your `PATH`, open the command prompt and execute the following command:

C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 MacOSX Client entry and save the file.
- Unpack and unzip the archive.
Move the `oc` binary to a directory on your `PATH`.

To check your `PATH`, open a terminal and execute the following command:

$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
4.5.11. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the `oc` CLI.
Procedure
Export the `kubeadmin` credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig

For `<installation_directory>`, specify the path to the directory that you stored the installation files in.
Verify you can run `oc` commands successfully using the exported configuration:

$ oc whoami

Example output

system:admin
4.5.12. Logging in to the cluster by using the web console
The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.
Prerequisites
- You have access to the installation host.
- You completed a cluster installation and all cluster Operators are available.
Procedure
Obtain the password for the `kubeadmin` user from the `kubeadmin-password` file on the installation host:

$ cat <installation_directory>/auth/kubeadmin-password

Note: Alternatively, you can obtain the `kubeadmin` password from the `<installation_directory>/.openshift_install.log` log file on the installation host.

List the OpenShift Container Platform web console route:
$ oc get routes -n openshift-console | grep 'console-openshift'

Note: Alternatively, you can obtain the OpenShift Container Platform route from the `<installation_directory>/.openshift_install.log` log file on the installation host.

Example output
console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None
- Navigate to the route detailed in the output of the preceding command in a web browser and log in as the `kubeadmin` user.
4.5.13. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
4.5.14. Next steps
- Validate an installation.
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.
4.6. Installing a cluster on Alibaba Cloud into an existing VPC
In OpenShift Container Platform version 4.10, you can install a cluster into an existing Alibaba Virtual Private Cloud (VPC) on Alibaba Cloud Services. The installation program provisions the required infrastructure, which you can then customize. To customize the VPC installation, modify the parameters in the `install-config.yaml` file before you install the cluster.
The scope of the OpenShift Container Platform installation configurations is intentionally narrow. It is designed for simplicity and ensured success. You can complete many more OpenShift Container Platform configuration tasks after an installation completes.
Alibaba Cloud on OpenShift Container Platform is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
4.6.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
- You registered your domain.
- If you use a firewall, you configured it to allow the sites that your cluster requires access to.
- If the cloud Resource Access Management (RAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the `kube-system` namespace, you can manually create and maintain Resource Access Management (RAM) credentials.
4.6.2. Using a custom VPC
In OpenShift Container Platform 4.10, you can deploy a cluster into existing subnets in an existing Virtual Private Cloud (VPC) in the Alibaba Cloud Platform. By deploying OpenShift Container Platform into an existing Alibaba VPC, you can avoid limit constraints in new accounts and more easily adhere to your organization’s operational constraints. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. You must configure networking using vSwitches.
4.6.2.1. Requirements for using your VPC
The union of the VPC CIDR block and the machine network CIDR must be non-empty. The vSwitches must be within the machine network.
The installation program does not create the following components:
- VPC
- vSwitches
- Route table
- NAT gateway
The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail.
4.6.2.2. VPC validation
To ensure that the vSwitches you provide are suitable, the installation program confirms the following data:
- All the vSwitches that you specify must exist.
- You have provided one or more vSwitches for control plane machines and compute machines.
- The vSwitches' CIDRs belong to the machine CIDR that you specified.
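As an illustration of these constraints, a machine network of `10.0.0.0/16` that contains the CIDRs of two existing vSwitches might be expressed in `install-config.yaml` as follows; the region and resource IDs are placeholders:

networking:
  machineNetwork:
  - cidr: 10.0.0.0/16
platform:
  alibabacloud:
    region: ap-southeast-1
    vpcID: vpc-<existing_vpc_id>
    vswitchIDs:
    - vsw-<existing_vswitch_1>
    - vsw-<existing_vswitch_2>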
4.6.2.3. Division of permissions
Some individuals can create different resources in your cloud than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components, such as VPCs or vSwitches.
4.6.2.4. Isolation between clusters
If you deploy OpenShift Container Platform into an existing network, the isolation of cluster services is reduced in the following ways:
- You can install multiple OpenShift Container Platform clusters in the same VPC.
- ICMP ingress is allowed to the entire network.
- TCP 22 ingress (SSH) is allowed to the entire network.
- Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network.
- Control plane TCP 22623 ingress (MCS) is allowed to the entire network.
4.6.3. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.10, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
4.6.4. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging is required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name>

Specify the path and file name, such as `~/.ssh/id_ed25519`, of the new SSH key. If you have an existing key pair, ensure your public key is in your `~/.ssh` directory.
Note: If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the `x86_64` architecture, do not create a key that uses the `ed25519` algorithm. Instead, create a key that uses the `rsa` or `ecdsa` algorithm.

View the public SSH key:

$ cat <path>/<file_name>.pub

For example, run the following to view the `~/.ssh/id_ed25519.pub` public key:

$ cat ~/.ssh/id_ed25519.pub
./openshift-install gathercommand.NoteOn some distributions, default SSH private key identities such as
~/.ssh/id_rsaand~/.ssh/id_dsaare managed automatically.If the
ssh-agentprocess is not already running for your local user, start it as a background task:eval "$(ssh-agent -s)"
$ eval "$(ssh-agent -s)"Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Agent pid 31874
Agent pid 31874Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteIf your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the `ssh-agent`:

$ ssh-add <path>/<file_name>

Specify the path and file name for your SSH private key, such as `~/.ssh/id_ed25519`.

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
4.6.5. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
- Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files.
Important: The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

Important: Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz
4.6.5.1. Creating the installation configuration file
You can customize the OpenShift Container Platform cluster you install on Alibaba Cloud.
Prerequisites
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
- Obtain service principal permissions at the subscription level.
Procedure
Create the `install-config.yaml` file.

Change to the directory that contains the installation program and run the following command:

$ ./openshift-install create install-config --dir <installation_directory>

For `<installation_directory>`, specify the directory name to store the files that the installation program creates.

Important: Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
At the prompts, provide the configuration details for your cloud:
- Optional: Select an SSH key to use to access your cluster machines.

  Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your `ssh-agent` process uses.

- Select alibabacloud as the platform to target.
- Select the region to deploy the cluster to.
- Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster.
- Provide a descriptive name for your cluster.
- Paste the pull secret from the Red Hat OpenShift Cluster Manager.
Installing the cluster into Alibaba Cloud requires that the Cloud Credential Operator (CCO) operate in manual mode. Modify the `install-config.yaml` file to set the `credentialsMode` parameter to `Manual`:

Example install-config.yaml configuration file with `credentialsMode` set to `Manual`
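A minimal sketch of the relevant lines; the base domain, cluster name, and region are placeholders:

apiVersion: v1
credentialsMode: Manual 1
baseDomain: example.com
metadata:
  name: test-cluster
platform:
  alibabacloud:
    region: ap-southeast-1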
- 1
- Add this line to set the `credentialsMode` to `Manual`.
- Modify the `install-config.yaml` file. You can find more information about the available parameters in the "Installation configuration parameters" section.
- Back up the `install-config.yaml` file so that you can use it to install multiple clusters.

Important: The `install-config.yaml` file is consumed during the installation process. If you want to reuse the file, you must back it up now.
4.6.5.2. Installation configuration parameters
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.
After installation, you cannot modify these parameters in the install-config.yaml file.
4.6.5.2.1. Required configuration parameters
Required installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
|
|
The API version for the | String |
|
|
The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the |
A fully-qualified domain or subdomain name, such as |
|
|
Kubernetes resource | Object |
|
|
The name of the cluster. DNS records for the cluster are all subdomains of |
String of lowercase letters, hyphens ( |
|
|
The configuration for the specific platform upon which to perform the installation: | Object |
|
| Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. |
|
4.6.5.2.2. Network configuration parameters
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.
Only IPv4 addresses are supported.
| Parameter | Description | Values |
|---|---|---|
| `networking` | The configuration for the cluster network. | Object. Note: You cannot modify parameters specified by the `networking` object after installation. |
| `networking.networkType` | The cluster network provider Container Network Interface (CNI) plugin to install. | Either `OpenShiftSDN` or `OVNKubernetes`. |
| `networking.clusterNetwork` | The IP address blocks for pods. The default value is `10.128.0.0/14` with a host prefix of `/23`. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: `clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23` |
| `networking.clusterNetwork.cidr` | Required if you use `networking.clusterNetwork`. An IP address block. An IPv4 network. | An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between `0` and `32`. |
| `networking.clusterNetwork.hostPrefix` | The subnet prefix length to assign to each individual node. For example, if `hostPrefix` is set to `23` then each node is assigned a `/23` subnet out of the given `cidr`. A `hostPrefix` value of `23` provides 510 (2^(32 - 23) - 2) pod IP addresses. | A subnet prefix. The default value is `23`. |
| `networking.serviceNetwork` | The IP address block for services. The default value is `172.30.0.0/16`. The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. | An array with an IP address block in CIDR format. For example: `serviceNetwork: - 172.30.0.0/16` |
| `networking.machineNetwork` | The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: `machineNetwork: - cidr: 10.0.0.0/16` |
| `networking.machineNetwork.cidr` | Required if you use `networking.machineNetwork`. An IP address block. | An IP network block in CIDR notation. For example, `10.0.0.0/16`. Note: Set the `networking.machineNetwork` to match the CIDR that the preferred NIC resides in. |
4.6.5.2.3. Optional configuration parameters
Optional installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| `additionalTrustBundle` | A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. | String |
| `cgroupsV2` | Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. | `true` |
| `compute` | The configuration for the machines that comprise the compute nodes. | Array of `MachinePool` objects. |
| `compute.architecture` | Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are `amd64` (the default). | String |
| `compute.hyperthreading` | Whether to enable or disable simultaneous multithreading, or `hyperthreading`, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | `Enabled` or `Disabled` |
| `compute.name` | Required if you use `compute`. The name of the machine pool. | `worker` |
| `compute.platform` | Required if you use `compute`. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the `controlPlane.platform` parameter value. | `alibabacloud`, `aws`, `azure`, `gcp`, `ibmcloud`, `openstack`, `ovirt`, `vsphere`, or `{}` |
| `compute.replicas` | The number of compute machines, which are also known as worker machines, to provision. | A positive integer greater than or equal to `2`. The default value is `3`. |
| `controlPlane` | The configuration for the machines that comprise the control plane. | Array of `MachinePool` objects. |
| `controlPlane.architecture` | Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are `amd64` (the default). | String |
| `controlPlane.hyperthreading` | Whether to enable or disable simultaneous multithreading, or `hyperthreading`, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | `Enabled` or `Disabled` |
| `controlPlane.name` | Required if you use `controlPlane`. The name of the machine pool. | `master` |
| `controlPlane.platform` | Required if you use `controlPlane`. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the `compute.platform` parameter value. | `alibabacloud`, `aws`, `azure`, `gcp`, `ibmcloud`, `openstack`, `ovirt`, `vsphere`, or `{}` |
| `controlPlane.replicas` | The number of control plane machines to provision. | The only supported value is `3`, which is the default value. |
| `credentialsMode` | The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note: Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note: If your AWS account has service control policies (SCP) enabled, you must configure the `credentialsMode` parameter to `Mint`, `Passthrough`, or `Manual`. | `Mint`, `Passthrough`, `Manual`, or an empty string (`""`). |
| `fips` | Enable or disable FIPS mode. The default is `false` (disabled). Important: To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the `x86_64` architecture. Note: If you are using Azure File storage, you cannot enable FIPS mode. | `false` or `true` |
| `imageContentSources` | Sources and repositories for the release-image content. | Array of objects. Includes a `source` and, optionally, `mirrors`, as described in the following rows of this table. |
| `imageContentSources.source` | Required if you use `imageContentSources`. Specify the repository that users refer to, for example, in image pull specifications. | String |
| `imageContentSources.mirrors` | Specify one or more repositories that may also contain the same images. | Array of strings |
| `publish` | How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. | `Internal` or `External`. The default value is `External`. Setting this field to `Internal` is not supported on non-cloud platforms. Important: If the value of the field is set to `Internal`, the cluster will become non-functional. For more information, refer to BZ#1953035. |
| `sshKey` | The SSH key or keys to authenticate access to your cluster machines. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your `ssh-agent` process uses. | One or more keys. For example: `sshKey: <key1> <key2> <key3>` |
4.6.5.2.4. Additional Alibaba Cloud configuration parameters
Additional Alibaba Cloud configuration parameters are described in the following table. The alibabacloud parameters are the configuration used when installing on Alibaba Cloud. The defaultMachinePlatform parameters are the default configuration used when installing on Alibaba Cloud for machine pools that do not define their own platform configuration.
These parameters apply to both compute machines and control plane machines where specified.
If defined, the parameters `compute.platform.alibabacloud` and `controlPlane.platform.alibabacloud` overwrite `platform.alibabacloud.defaultMachinePlatform` settings for compute machines and control plane machines, respectively, as shown in the sketch that follows.
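For example, a sketch in which the default machine platform sets one instance type and the compute machine pool overrides it; the instance types are illustrative:

platform:
  alibabacloud:
    defaultMachinePlatform:
      instanceType: ecs.g6.large
compute:
- name: worker
  replicas: 3
  platform:
    alibabacloud:
      instanceType: ecs.g6.xlarge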
| Parameter | Description | Values |
|---|---|---|
| `compute.platform.alibabacloud.imageID` | The imageID used to create the ECS instance. ImageID must belong to the same region as the cluster. | String. |
| `compute.platform.alibabacloud.instanceType` | InstanceType defines the ECS instance type. Example: `ecs.g6.large` | String. |
| `compute.platform.alibabacloud.systemDiskCategory` | Defines the category of the system disk. Examples: `cloud_efficiency`, `cloud_essd` | String. |
| `compute.platform.alibabacloud.systemDiskSize` | Defines the size of the system disk in gibibytes (GiB). | Integer. |
| `compute.platform.alibabacloud.zones` | The list of availability zones that can be used. Examples: `cn-hangzhou-h`, `cn-hangzhou-j` | String list. |
| `controlPlane.platform.alibabacloud.imageID` | The imageID used to create the ECS instance. ImageID must belong to the same region as the cluster. | String. |
| `controlPlane.platform.alibabacloud.instanceType` | InstanceType defines the ECS instance type. Example: `ecs.g6.xlarge` | String. |
| `controlPlane.platform.alibabacloud.systemDiskCategory` | Defines the category of the system disk. Examples: `cloud_efficiency`, `cloud_essd` | String. |
| `controlPlane.platform.alibabacloud.systemDiskSize` | Defines the size of the system disk in gibibytes (GiB). | Integer. |
| `controlPlane.platform.alibabacloud.zones` | The list of availability zones that can be used. Examples: `cn-hangzhou-h`, `cn-hangzhou-j` | String list. |
| `platform.alibabacloud.region` | Required. The Alibaba Cloud region where the cluster will be created. | String. |
| `platform.alibabacloud.resourceGroupID` | The ID of an already existing resource group where the cluster will be installed. If empty, the installer will create a new resource group for the cluster. | String. |
| `platform.alibabacloud.tags` | Additional keys and values to apply to all Alibaba Cloud resources created for the cluster. | Object. |
| `platform.alibabacloud.vpcID` | The ID of an already existing VPC where the cluster should be installed. If empty, the installer will create a new VPC for the cluster. | String. |
| `platform.alibabacloud.vswitchIDs` | The ID list of already existing VSwitches where cluster resources will be created. The existing VSwitches can only be used when also using existing VPC. If empty, the installer will create new VSwitches for the cluster. | String list. |
| `platform.alibabacloud.defaultMachinePlatform.imageID` | For both compute machines and control plane machines, the image ID that should be used to create ECS instance. If set, the image ID should belong to the same region as the cluster. | String. |
| `platform.alibabacloud.defaultMachinePlatform.instanceType` | For both compute machines and control plane machines, the ECS instance type used to create the ECS instance. Example: `ecs.g6.xlarge` | String. |
| `platform.alibabacloud.defaultMachinePlatform.systemDiskCategory` | For both compute machines and control plane machines, the category of the system disk. Examples: `cloud_efficiency`, `cloud_essd`. | String, for example "", `cloud_efficiency`, `cloud_essd`. |
| `platform.alibabacloud.defaultMachinePlatform.systemDiskSize` | For both compute machines and control plane machines, the size of the system disk in gibibytes (GiB). The minimum is `120`. | Integer. |
| `platform.alibabacloud.defaultMachinePlatform.zones` | For both compute machines and control plane machines, the list of availability zones that can be used. Examples: `cn-hangzhou-h`, `cn-hangzhou-j` | String list. |
| `platform.alibabacloud.privateZoneID` | The ID of an existing private zone into which to add DNS records for the cluster's internal API. An existing private zone can only be used when also using existing VPC. The private zone must be associated with the VPC containing the subnets. Leave the private zone unset to have the installer create the private zone on your behalf. | String. |
4.6.5.3. Sample customized install-config.yaml file for Alibaba Cloud
You can customize the installation configuration file (install-config.yaml) to specify more details about your cluster’s platform or modify the values of the required parameters.
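A sketch of a customized `install-config.yaml` file; the domain, cluster name, resource IDs, pull secret, and SSH key are placeholders, and the numbered callouts are explained below:

apiVersion: v1
baseDomain: example.com
credentialsMode: Manual
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform: {}
  replicas: 3
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  platform: {}
  replicas: 3
metadata:
  name: test-cluster 1
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes 2
  serviceNetwork:
  - 172.30.0.0/16
platform:
  alibabacloud:
    defaultMachinePlatform: 3
      instanceType: ecs.g6.xlarge
      systemDiskCategory: cloud_efficiency
      systemDiskSize: 200
    region: ap-southeast-1 4
    resourceGroupID: rg-<existing_resource_group> 5
    vpcID: vpc-<existing_vpc>
    vswitchIDs: 6
    - vsw-<existing_vswitch_1>
    - vsw-<existing_vswitch_2>
publish: External
pullSecret: '{"auths": {...}}' 7
sshKey: |
  ssh-ed25519 AAAA... 8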
- 1
- Required. The installation program prompts you for a cluster name.
- 2
- The cluster network plugin to install. The supported values are `OVNKubernetes` and `OpenShiftSDN`. The default value is `OVNKubernetes`.
- 3
- Optional. Specify parameters for machine pools that do not define their own platform configuration.
- 4
- Required. The installation program prompts you for the region to deploy the cluster to.
- 5
- Optional. Specify an existing resource group where the cluster should be installed.
- 7
- Required. The installation program prompts you for the pull secret.
- 8
- Optional. The installation program prompts you for the SSH key value that you use to access the machines in your cluster.
- 6
- Optional. These are example vswitchID values.
4.6.5.4. Generating the required installation manifests
You must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines.
Procedure
Generate the manifests by running the following command from the directory that contains the installation program:
$ openshift-install create manifests --dir <installation_directory>

where:
`<installation_directory>`
- Specifies the directory in which the installation program creates files.
4.6.5.5. Configuring the Cloud Credential Operator utility
To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility (ccoctl) binary.
The ccoctl utility is a Linux binary that must run in a Linux environment.
Prerequisites
- You have access to an OpenShift Container Platform account with cluster administrator access.
- You have installed the OpenShift CLI (`oc`).
Procedure
Obtain the OpenShift Container Platform release image:

$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')

Get the CCO container image from the OpenShift Container Platform release image:

$ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE)

Note: Ensure that the architecture of the `$RELEASE_IMAGE` matches the architecture of the environment in which you will use the `ccoctl` tool.

Extract the `ccoctl` binary from the CCO container image within the OpenShift Container Platform release image:

$ oc image extract $CCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret

Change the permissions to make `ccoctl` executable:

$ chmod 775 ccoctl
Verification
To verify that `ccoctl` is ready to use, display the help file:

$ ccoctl --help
4.6.5.6. Creating credentials for OpenShift Container Platform components with the ccoctl tool
You can use the OpenShift Container Platform Cloud Credential Operator (CCO) utility to automate the creation of Alibaba Cloud RAM users and policies for each in-cluster component.
By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory.
Prerequisites
You must have:
- Extracted and prepared the `ccoctl` binary.
- Created a RAM user with sufficient permission to create the OpenShift Container Platform cluster.
- Added the AccessKeyID (`access_key_id`) and AccessKeySecret (`access_key_secret`) of that RAM user into the `~/.alibabacloud/credentials` file on your local computer.
Procedure
Set the `$RELEASE_IMAGE` variable by running the following command:

$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
CredentialsRequestobjects from the OpenShift Container Platform release image by running the following command:oc adm release extract \ --credentials-requests \ --cloud=alibabacloud \ --to=<path_to_directory_with_list_of_credentials_requests>/credrequests \ $RELEASE_IMAGE
$ oc adm release extract \ --credentials-requests \ --cloud=alibabacloud \ --to=<path_to_directory_with_list_of_credentials_requests>/credrequests \1 $RELEASE_IMAGECopy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
credrequestsis the directory where the list ofCredentialsRequestobjects is stored. This command creates the directory if it does not exist.
Note: This command can take a few moments to run.
If your cluster uses cluster capabilities to disable one or more optional components, delete the `CredentialsRequest` custom resources for any disabled components.

Example `credrequests` directory contents for OpenShift Container Platform 4.10 on Alibaba Cloud:

0000_30_machine-api-operator_00_credentials-request.yaml
0000_50_cluster-image-registry-operator_01-registry-credentials-request-alibaba.yaml
0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml
0000_50_cluster-storage-operator_03_credentials_request_alibaba.yaml
ccoctltool to process allCredentialsRequestobjects in thecredrequestsdirectory:Run the following command to use the tool:
ccoctl alibabacloud create-ram-users \ --name <name> \ --region=<alibaba_region> \ --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests \ --output-dir=<path_to_ccoctl_output_dir>
$ ccoctl alibabacloud create-ram-users \ --name <name> \ --region=<alibaba_region> \ --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests \ --output-dir=<path_to_ccoctl_output_dir>Copy to Clipboard Copied! Toggle word wrap Toggle overflow where:
-
<name>is the name used to tag any cloud resources that are created for tracking. -
<alibaba_region>is the Alibaba Cloud region in which cloud resources will be created. -
<path_to_directory_with_list_of_credentials_requests>/credrequestsis the directory containing the files for the componentCredentialsRequestobjects. -
<path_to_ccoctl_output_dir>is the directory where the generated component credentials secrets will be placed.
Note: If your cluster uses Technology Preview features that are enabled by the `TechPreviewNoUpgrade` feature set, you must include the `--enable-tech-preview` parameter.

Note: A RAM user can have up to two AccessKeys at the same time. If you run `ccoctl alibabacloud create-ram-users` more than twice, the previously generated manifests secret becomes stale and you must reapply the newly generated secrets.
Verify that the OpenShift Container Platform secrets are created:

$ ls <path_to_ccoctl_output_dir>/manifests

Example output:

openshift-cluster-csi-drivers-alibaba-disk-credentials-credentials.yaml
openshift-image-registry-installer-cloud-credentials-credentials.yaml
openshift-ingress-operator-cloud-credentials-credentials.yaml
openshift-machine-api-alibabacloud-credentials-credentials.yaml

You can verify that the RAM users and policies are created by querying Alibaba Cloud. For more information, refer to Alibaba Cloud documentation on listing RAM users and policies.
Copy the generated credential files to the target manifests directory:
$ cp ./<path_to_ccoctl_output_dir>/manifests/*credentials.yaml ./<path_to_installation_dir>/manifests/

where:

`<path_to_ccoctl_output_dir>`
- Specifies the directory created by the `ccoctl alibabacloud create-ram-users` command.

`<path_to_installation_dir>`
- Specifies the directory in which the installation program creates files.
4.6.6. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
- Configure an account with the cloud platform that hosts your cluster.
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:

$ ./openshift-install create cluster --dir <installation_directory> \
    --log-level=info

For `<installation_directory>`, specify the location of your customized `install-config.yaml` file. To view different installation details, specify `warn`, `debug`, or `error` instead of `info`.

Note: If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the `kubeadmin` user, display in your terminal.

Note: The cluster access and credential information also outputs to `<installation_directory>/.openshift_install.log` when an installation succeeds.

Important:
- The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending `node-bootstrapper` certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
Important: You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
4.6.7. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.10. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 Linux Client entry and save the file.
Unpack the archive:
$ tar xvf <file>
ocbinary in a directory that is on yourPATH.To check your
PATH, execute the following command:echo $PATH
$ echo $PATHCopy to Clipboard Copied! Toggle word wrap Toggle overflow
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH. To check your PATH, open the command prompt and execute the following command:

C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 MacOSX Client entry and save the file.
- Unpack and unzip the archive.
Move the oc binary to a directory on your PATH. To check your PATH, open a terminal and execute the following command:

$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
4.6.8. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the oc CLI.
Procedure
Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig

For <installation_directory>, specify the path to the directory that you stored the installation files in.
Verify you can run oc commands successfully using the exported configuration:

$ oc whoami

Example output

system:admin
4.6.9. Logging in to the cluster by using the web console
The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.
Prerequisites
- You have access to the installation host.
- You completed a cluster installation and all cluster Operators are available.
Procedure
Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

$ cat <installation_directory>/auth/kubeadmin-password

Note: Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

List the OpenShift Container Platform web console route:
$ oc get routes -n openshift-console | grep 'console-openshift'

Note: Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

Example output

console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None
Navigate to the route detailed in the output of the preceding command in a web browser and log in as the
kubeadminuser.
4.6.10. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
4.6.11. Next steps
- Validating an installation.
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.
4.7. Uninstalling a cluster on Alibaba Cloud
You can remove a cluster that you deployed to Alibaba Cloud.
4.7.1. Removing a cluster that uses installer-provisioned infrastructure
You can remove a cluster that uses installer-provisioned infrastructure from your cloud.
After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access.
Prerequisites
- Have a copy of the installation program that you used to deploy the cluster.
- Have the files that the installation program generated when you created your cluster.
Procedure
From the directory that contains the installation program on the computer that you used to install the cluster, run the following command:
$ ./openshift-install destroy cluster --dir <installation_directory> --log-level info

Note: You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster.
- Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program.
Chapter 5. Installing on AWS
5.1. Preparing to install on AWS
5.1.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
5.1.2. Requirements for installing OpenShift Container Platform on AWS
Before installing OpenShift Container Platform on Amazon Web Services (AWS), you must create an AWS account. See Configuring an AWS account for details about configuring an account, account limits, account permissions, IAM user setup, and supported AWS regions.
If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, see Manually creating IAM for AWS for other options, including configuring the Cloud Credential Operator (CCO) to use the Amazon Web Services Security Token Service (AWS STS).
5.1.3. Choosing a method to install OpenShift Container Platform on AWS
You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself.
See Installation process for more information about installer-provisioned and user-provisioned installation processes.
5.1.3.1. Installing a cluster on installer-provisioned infrastructure
You can install a cluster on AWS infrastructure that is provisioned by the OpenShift Container Platform installation program, by using one of the following methods:
- Installing a cluster quickly on AWS: You can install OpenShift Container Platform on AWS infrastructure that is provisioned by the OpenShift Container Platform installation program. You can install a cluster quickly by using the default configuration options.
- Installing a customized cluster on AWS: You can install a customized cluster on AWS infrastructure that the installation program provisions. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation.
- Installing a cluster on AWS with network customizations: You can customize your OpenShift Container Platform network configuration during installation, so that your cluster can coexist with your existing IP address allocations and adhere to your network requirements.
- Installing a cluster on AWS in a restricted network: You can install OpenShift Container Platform on AWS on installer-provisioned infrastructure by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components.
- Installing a cluster on an existing Virtual Private Cloud: You can install OpenShift Container Platform on an existing AWS Virtual Private Cloud (VPC). You can use this installation method if you have constraints set by the guidelines of your company, such as limits when creating new accounts or infrastructure.
- Installing a private cluster on an existing VPC: You can install a private cluster on an existing AWS VPC. You can use this method to deploy OpenShift Container Platform on an internal network that is not visible to the internet.
- Installing a cluster on AWS into a government or secret region: OpenShift Container Platform can be deployed into AWS regions that are specifically designed for US government agencies at the federal, state, and local level, as well as contractors, educational institutions, and other US customers that must run sensitive workloads in the cloud.
5.1.3.2. Installing a cluster on user-provisioned infrastructure
You can install a cluster on AWS infrastructure that you provision, by using one of the following methods:
- Installing a cluster on AWS infrastructure that you provide: You can install OpenShift Container Platform on AWS infrastructure that you provide. You can use the provided CloudFormation templates to create stacks of AWS resources that represent each of the components required for an OpenShift Container Platform installation.
- Installing a cluster on AWS in a restricted network with user-provisioned infrastructure: You can install OpenShift Container Platform on AWS infrastructure that you provide by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content. While you can install OpenShift Container Platform by using the mirrored content, your cluster still requires internet access to use the AWS APIs.
5.1.4. Next steps
5.2. Configuring an AWS account
Before you can install OpenShift Container Platform, you must configure an Amazon Web Services (AWS) account.
5.2.1. Configuring Route 53
To install OpenShift Container Platform, the Amazon Web Services (AWS) account you use must have a dedicated public hosted zone in your Route 53 service. This zone must be authoritative for the domain. The Route 53 service provides cluster DNS resolution and name lookup for external connections to the cluster.
Procedure
Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through AWS or another source.
Note: If you purchase a new domain through AWS, it takes time for the relevant DNS changes to propagate. For more information about purchasing domains through AWS, see Registering Domain Names Using Amazon Route 53 in the AWS documentation.
- If you are using an existing domain and registrar, migrate its DNS to AWS. See Making Amazon Route 53 the DNS Service for an Existing Domain in the AWS documentation.
Create a public hosted zone for your domain or subdomain. See Creating a Public Hosted Zone in the AWS documentation.
Use an appropriate root domain, such as openshiftcorp.com, or subdomain, such as clusters.openshiftcorp.com.

- Extract the new authoritative name servers from the hosted zone records. See Getting the Name Servers for a Public Hosted Zone in the AWS documentation.
- Update the registrar records for the AWS Route 53 name servers that your domain uses. For example, if you registered your domain to a Route 53 service in a different account, see the following topic in the AWS documentation: Adding or Changing Name Servers or Glue Records.
- If you are using a subdomain, add its delegation records to the parent domain. This gives Amazon Route 53 responsibility for the subdomain. Follow the delegation procedure outlined by the DNS provider of the parent domain. See Creating a subdomain that uses Amazon Route 53 as the DNS service without migrating the parent domain in the AWS documentation for an example high level procedure.
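If you prefer to script these steps, the same setup can be sketched with standard AWS CLI commands. This is a minimal sketch, assuming an already configured aws CLI and a placeholder subdomain; the zone ID in the second command comes from the output of the first:

# Create the public hosted zone. The caller reference must be unique per request.
$ aws route53 create-hosted-zone \
    --name clusters.openshiftcorp.com \
    --caller-reference "$(date +%s)"

# Extract the authoritative name servers from the zone's delegation set.
$ aws route53 get-hosted-zone --id <hosted_zone_id>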
5.2.1.1. Ingress Operator endpoint configuration for AWS Route 53
If you install in either the Amazon Web Services (AWS) GovCloud (US) US-West or US-East region, the Ingress Operator uses the us-gov-west-1 region for Route 53 and tagging API clients.
The Ingress Operator uses https://tagging.us-gov-west-1.amazonaws.com as the tagging API endpoint if a tagging custom endpoint is configured that includes the string 'us-gov-east-1'.
For more information on AWS GovCloud (US) endpoints, see the Service Endpoints in the AWS documentation about GovCloud (US).
Private, disconnected installations are not supported for AWS GovCloud when you install in the us-gov-east-1 region.
Example Route 53 configuration
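The configuration is set in the platform.aws section of the install-config.yaml file. A minimal sketch for a GovCloud (US) West deployment follows; the serviceEndpoints entries shown are illustrative assumptions, and the numbered comments correspond to the callouts below:

platform:
  aws:
    region: us-gov-west-1
    serviceEndpoints:
    - name: route53
      url: https://route53.us-gov.amazonaws.com  # 1
    - name: tagging
      url: https://tagging.us-gov-west-1.amazonaws.com  # 2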
1. Route 53 defaults to https://route53.us-gov.amazonaws.com for both AWS GovCloud (US) regions.
2. Only the US-West region has endpoints for tagging. Omit this parameter if your cluster is in another region.
5.2.2. AWS account limits
The OpenShift Container Platform cluster uses a number of Amazon Web Services (AWS) components, and the default Service Limits affect your ability to install OpenShift Container Platform clusters. If you use certain cluster configurations, deploy your cluster in certain AWS regions, or run multiple clusters from your account, you might need to request additional resources for your AWS account.
The following table summarizes the AWS components whose limits can impact your ability to install and run OpenShift Container Platform clusters.
| Component | Number of clusters available by default | Default AWS limit | Description |
|---|---|---|---|
| Instance Limits | Varies | Varies | By default, each cluster creates one bootstrap machine, which is removed after installation, three control plane machines, and three worker machines. These instance type counts are within a new account's default limit. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, review your account limits to ensure that your cluster can deploy the machines that you need. |
| Elastic IPs (EIPs) | 0 to 1 | 5 EIPs per account | To provision the cluster in a highly available configuration, the installation program creates a public and private subnet for each availability zone within a region. Each private subnet requires a NAT gateway, and each NAT gateway requires a separate elastic IP. Review the AWS region map to determine how many availability zones are in each region. To take advantage of the default high availability, install the cluster in a region with at least three availability zones. To install a cluster in a region with more than five availability zones, you must increase the EIP limit. |
| Virtual Private Clouds (VPCs) | 5 | 5 VPCs per region | Each cluster creates its own VPC. |
| Elastic Load Balancing (ELB/NLB) | 3 | 20 per region | By default, each cluster creates internal and external network load balancers for the master API server and a single Classic Load Balancer for the router. Deploying more Kubernetes LoadBalancer Service objects creates additional load balancers. |
| NAT Gateways | 5 | 5 per availability zone | The cluster deploys one NAT gateway in each availability zone. |
| Elastic Network Interfaces (ENIs) | At least 12 | 350 per region | The default installation creates 21 ENIs and an ENI for each availability zone in your region. Additional ENIs are created for additional machines and ELB load balancers that are created by cluster usage and deployed workloads. |
| VPC Gateway | 20 | 20 per account | Each cluster creates a single VPC Gateway for S3 access. |
| S3 buckets | 99 | 100 buckets per account | Because the installation process creates a temporary bucket and the registry component in each cluster creates a bucket, you can create only 99 OpenShift Container Platform clusters per AWS account. |
| Security Groups | 250 | 2,500 per account | Each cluster creates 10 distinct security groups. |
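To review the current quotas for your account before installing, you can query the Service Quotas API. A hedged sketch with the AWS CLI; the region is a placeholder, and the ec2 and vpc service codes are the standard namespaces for the limits in the preceding table:

# List the EC2 quotas, such as instance and EIP limits, for the target region.
$ aws service-quotas list-service-quotas --service-code ec2 --region <region>

# List the VPC quotas, such as VPC and NAT gateway limits, for the target region.
$ aws service-quotas list-service-quotas --service-code vpc --region <region>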
5.2.3. Required AWS permissions for the IAM user
Your IAM user must have the permission tag:GetResources in the region us-east-1 to delete the base cluster resources. As part of the AWS API requirement, the OpenShift Container Platform installation program performs various actions in this region.
When you attach the AdministratorAccess policy to the IAM user that you create in Amazon Web Services (AWS), you grant that user all of the required permissions. To deploy all components of an OpenShift Container Platform cluster, the IAM user requires the following permissions:
Example 5.1. Required EC2 permissions for installation
- ec2:AuthorizeSecurityGroupEgress
- ec2:AuthorizeSecurityGroupIngress
- ec2:CopyImage
- ec2:CreateNetworkInterface
- ec2:AttachNetworkInterface
- ec2:CreateSecurityGroup
- ec2:CreateTags
- ec2:CreateVolume
- ec2:DeleteSecurityGroup
- ec2:DeleteSnapshot
- ec2:DeleteTags
- ec2:DeregisterImage
- ec2:DescribeAccountAttributes
- ec2:DescribeAddresses
- ec2:DescribeAvailabilityZones
- ec2:DescribeDhcpOptions
- ec2:DescribeImages
- ec2:DescribeInstanceAttribute
- ec2:DescribeInstanceCreditSpecifications
- ec2:DescribeInstances
- ec2:DescribeInstanceTypes
- ec2:DescribeInternetGateways
- ec2:DescribeKeyPairs
- ec2:DescribeNatGateways
- ec2:DescribeNetworkAcls
- ec2:DescribeNetworkInterfaces
- ec2:DescribePrefixLists
- ec2:DescribeRegions
- ec2:DescribeRouteTables
- ec2:DescribeSecurityGroups
- ec2:DescribeSubnets
- ec2:DescribeTags
- ec2:DescribeVolumes
- ec2:DescribeVpcAttribute
- ec2:DescribeVpcClassicLink
- ec2:DescribeVpcClassicLinkDnsSupport
- ec2:DescribeVpcEndpoints
- ec2:DescribeVpcs
- ec2:GetEbsDefaultKmsKeyId
- ec2:ModifyInstanceAttribute
- ec2:ModifyNetworkInterfaceAttribute
- ec2:RevokeSecurityGroupEgress
- ec2:RevokeSecurityGroupIngress
- ec2:RunInstances
- ec2:TerminateInstances
Example 5.2. Required permissions for creating network resources during installation
- ec2:AllocateAddress
- ec2:AssociateAddress
- ec2:AssociateDhcpOptions
- ec2:AssociateRouteTable
- ec2:AttachInternetGateway
- ec2:CreateDhcpOptions
- ec2:CreateInternetGateway
- ec2:CreateNatGateway
- ec2:CreateRoute
- ec2:CreateRouteTable
- ec2:CreateSubnet
- ec2:CreateVpc
- ec2:CreateVpcEndpoint
- ec2:ModifySubnetAttribute
- ec2:ModifyVpcAttribute
If you use an existing VPC, your account does not require these permissions for creating network resources.
Example 5.3. Required Elastic Load Balancing permissions (ELB) for installation
- elasticloadbalancing:AddTags
- elasticloadbalancing:ApplySecurityGroupsToLoadBalancer
- elasticloadbalancing:AttachLoadBalancerToSubnets
- elasticloadbalancing:ConfigureHealthCheck
- elasticloadbalancing:CreateLoadBalancer
- elasticloadbalancing:CreateLoadBalancerListeners
- elasticloadbalancing:DeleteLoadBalancer
- elasticloadbalancing:DeregisterInstancesFromLoadBalancer
- elasticloadbalancing:DescribeInstanceHealth
- elasticloadbalancing:DescribeLoadBalancerAttributes
- elasticloadbalancing:DescribeLoadBalancers
- elasticloadbalancing:DescribeTags
- elasticloadbalancing:ModifyLoadBalancerAttributes
- elasticloadbalancing:RegisterInstancesWithLoadBalancer
- elasticloadbalancing:SetLoadBalancerPoliciesOfListener
Example 5.4. Required Elastic Load Balancing permissions (ELBv2) for installation
- elasticloadbalancing:AddTags
- elasticloadbalancing:CreateListener
- elasticloadbalancing:CreateLoadBalancer
- elasticloadbalancing:CreateTargetGroup
- elasticloadbalancing:DeleteLoadBalancer
- elasticloadbalancing:DeregisterTargets
- elasticloadbalancing:DescribeListeners
- elasticloadbalancing:DescribeLoadBalancerAttributes
- elasticloadbalancing:DescribeLoadBalancers
- elasticloadbalancing:DescribeTargetGroupAttributes
- elasticloadbalancing:DescribeTargetHealth
- elasticloadbalancing:ModifyLoadBalancerAttributes
- elasticloadbalancing:ModifyTargetGroup
- elasticloadbalancing:ModifyTargetGroupAttributes
- elasticloadbalancing:RegisterTargets
Example 5.5. Required IAM permissions for installation
- iam:AddRoleToInstanceProfile
- iam:CreateInstanceProfile
- iam:CreateRole
- iam:DeleteInstanceProfile
- iam:DeleteRole
- iam:DeleteRolePolicy
- iam:GetInstanceProfile
- iam:GetRole
- iam:GetRolePolicy
- iam:GetUser
- iam:ListInstanceProfilesForRole
- iam:ListRoles
- iam:ListUsers
- iam:PassRole
- iam:PutRolePolicy
- iam:RemoveRoleFromInstanceProfile
- iam:SimulatePrincipalPolicy
- iam:TagRole
If you have not created an elastic load balancer (ELB) in your AWS account, the IAM user also requires the iam:CreateServiceLinkedRole permission.
Example 5.6. Required Route 53 permissions for installation
- route53:ChangeResourceRecordSets
- route53:ChangeTagsForResource
- route53:CreateHostedZone
- route53:DeleteHostedZone
- route53:GetChange
- route53:GetHostedZone
- route53:ListHostedZones
- route53:ListHostedZonesByName
- route53:ListResourceRecordSets
- route53:ListTagsForResource
- route53:UpdateHostedZoneComment
Example 5.7. Required S3 permissions for installation
- s3:CreateBucket
- s3:DeleteBucket
- s3:GetAccelerateConfiguration
- s3:GetBucketAcl
- s3:GetBucketCors
- s3:GetBucketLocation
- s3:GetBucketLogging
- s3:GetBucketObjectLockConfiguration
- s3:GetBucketReplication
- s3:GetBucketRequestPayment
- s3:GetBucketTagging
- s3:GetBucketVersioning
- s3:GetBucketWebsite
- s3:GetEncryptionConfiguration
- s3:GetLifecycleConfiguration
- s3:GetReplicationConfiguration
- s3:ListBucket
- s3:PutBucketAcl
- s3:PutBucketTagging
- s3:PutEncryptionConfiguration
Example 5.8. S3 permissions that cluster Operators require
- s3:DeleteObject
- s3:GetObject
- s3:GetObjectAcl
- s3:GetObjectTagging
- s3:GetObjectVersion
- s3:PutObject
- s3:PutObjectAcl
- s3:PutObjectTagging
Example 5.9. Required permissions to delete base cluster resources
- autoscaling:DescribeAutoScalingGroups
- ec2:DeleteNetworkInterface
- ec2:DeleteVolume
- elasticloadbalancing:DeleteTargetGroup
- elasticloadbalancing:DescribeTargetGroups
- iam:DeleteAccessKey
- iam:DeleteUser
- iam:ListAttachedRolePolicies
- iam:ListInstanceProfiles
- iam:ListRolePolicies
- iam:ListUserPolicies
- s3:DeleteObject
- s3:ListBucketVersions
- tag:GetResources
Example 5.10. Required permissions to delete network resources
- ec2:DeleteDhcpOptions
- ec2:DeleteInternetGateway
- ec2:DeleteNatGateway
- ec2:DeleteRoute
- ec2:DeleteRouteTable
- ec2:DeleteSubnet
- ec2:DeleteVpc
- ec2:DeleteVpcEndpoints
- ec2:DetachInternetGateway
- ec2:DisassociateRouteTable
- ec2:ReleaseAddress
- ec2:ReplaceRouteTableAssociation
If you use an existing VPC, your account does not require these permissions to delete network resources. Instead, your account only requires the tag:UntagResources permission to delete network resources.
Example 5.11. Required permissions to delete a cluster with shared instance roles
- iam:UntagRole
Example 5.12. Additional IAM and S3 permissions that are required to create manifests
- iam:DeleteAccessKey
- iam:DeleteUser
- iam:DeleteUserPolicy
- iam:GetUserPolicy
- iam:ListAccessKeys
- iam:PutUserPolicy
- iam:TagUser
- s3:PutBucketPublicAccessBlock
- s3:GetBucketPublicAccessBlock
- s3:PutLifecycleConfiguration
- s3:HeadBucket
- s3:ListBucketMultipartUploads
- s3:AbortMultipartUpload
If you are managing your cloud provider credentials with mint mode, the IAM user also requires the iam:CreateAccessKey and iam:CreateUser permissions.
Example 5.13. Optional permissions for instance and quota checks for installation
- ec2:DescribeInstanceTypeOfferings
- servicequotas:ListAWSDefaultServiceQuotas
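If you assemble the permissions from the preceding examples into a custom policy instead of attaching AdministratorAccess (the documentation notes below that this is not the preferred option), the policy uses the standard IAM JSON document format. The following fragment is an illustrative sketch with a handful of the listed actions, not a complete installation policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:RunInstances",
        "ec2:TerminateInstances",
        "elasticloadbalancing:CreateLoadBalancer",
        "route53:ChangeResourceRecordSets",
        "s3:CreateBucket"
      ],
      "Resource": "*"
    }
  ]
}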
5.2.4. Creating an IAM user
Each Amazon Web Services (AWS) account contains a root user account that is based on the email address you used to create the account. This is a highly privileged account, and it is recommended that you use it only for initial account and billing configuration, creating an initial set of users, and securing the account.
Before you install OpenShift Container Platform, create a secondary IAM administrative user. As you complete the Creating an IAM User in Your AWS Account procedure in the AWS documentation, set the following options:
Procedure
- Specify the IAM user name and select Programmatic access.
- Attach the AdministratorAccess policy to ensure that the account has sufficient permission to create the cluster. This policy provides the cluster with the ability to grant credentials to each OpenShift Container Platform component. The cluster grants the components only the credentials that they require.

  Note: While it is possible to create a policy that grants all of the required AWS permissions and attach it to the user, this is not the preferred option. The cluster will not have the ability to grant additional credentials to individual components, so the same credentials are used by all components.

- Optional: Add metadata to the user by attaching tags.
- Confirm that the user name that you specified is granted the AdministratorAccess policy.
- Record the access key ID and secret access key values. You must use these values when you configure your local machine to run the installation program.
Important: You cannot use a temporary session token that you generated while using a multi-factor authentication device to authenticate to AWS when you deploy a cluster. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials.
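The same user setup can be scripted with the AWS CLI. A minimal sketch; the user name ocp-installer is a hypothetical example:

# Create the IAM user for the installation.
$ aws iam create-user --user-name ocp-installer

# Attach the AdministratorAccess managed policy.
$ aws iam attach-user-policy --user-name ocp-installer \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

# Create an access key, and record the AccessKeyId and SecretAccessKey values.
$ aws iam create-access-key --user-name ocp-installer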
5.2.5. IAM Policies and AWS authentication
By default, the installation program creates instance profiles for the bootstrap, control plane, and compute instances with the necessary permissions for the cluster to operate.
However, you can create your own IAM roles and specify them as part of the installation process. You might need to specify your own roles to deploy the cluster or to manage the cluster after installation. For example:
- Your organization’s security policies require that you use a more restrictive set of permissions to install the cluster.
- After the installation, the cluster is configured with an Operator that requires access to additional services.
If you choose to specify your own IAM roles, you can take the following steps:
- Begin with the default policies and adapt as required. For more information, see "Default permissions for IAM instance profiles".
- Use the AWS Identity and Access Management Access Analyzer (IAM Access Analyzer) to create a policy template that is based on the cluster's activity. For more information, see "Using AWS IAM Analyzer to create policy templates".
5.2.5.1. Default permissions for IAM instance profiles
By default, the installation program creates IAM instance profiles for the bootstrap, control plane, and worker instances with the necessary permissions for the cluster to operate.
The following lists specify the default permissions for control plane and compute machines:
Example 5.14. Default IAM role permissions for control plane instance profiles
- ec2:AttachVolume
- ec2:AuthorizeSecurityGroupIngress
- ec2:CreateSecurityGroup
- ec2:CreateTags
- ec2:CreateVolume
- ec2:DeleteSecurityGroup
- ec2:DeleteVolume
- ec2:Describe*
- ec2:DetachVolume
- ec2:ModifyInstanceAttribute
- ec2:ModifyVolume
- ec2:RevokeSecurityGroupIngress
- elasticloadbalancing:AddTags
- elasticloadbalancing:AttachLoadBalancerToSubnets
- elasticloadbalancing:ApplySecurityGroupsToLoadBalancer
- elasticloadbalancing:CreateListener
- elasticloadbalancing:CreateLoadBalancer
- elasticloadbalancing:CreateLoadBalancerPolicy
- elasticloadbalancing:CreateLoadBalancerListeners
- elasticloadbalancing:CreateTargetGroup
- elasticloadbalancing:ConfigureHealthCheck
- elasticloadbalancing:DeleteListener
- elasticloadbalancing:DeleteLoadBalancer
- elasticloadbalancing:DeleteLoadBalancerListeners
- elasticloadbalancing:DeleteTargetGroup
- elasticloadbalancing:DeregisterInstancesFromLoadBalancer
- elasticloadbalancing:DeregisterTargets
- elasticloadbalancing:Describe*
- elasticloadbalancing:DetachLoadBalancerFromSubnets
- elasticloadbalancing:ModifyListener
- elasticloadbalancing:ModifyLoadBalancerAttributes
- elasticloadbalancing:ModifyTargetGroup
- elasticloadbalancing:ModifyTargetGroupAttributes
- elasticloadbalancing:RegisterInstancesWithLoadBalancer
- elasticloadbalancing:RegisterTargets
- elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer
- elasticloadbalancing:SetLoadBalancerPoliciesOfListener
- kms:DescribeKey
Example 5.15. Default IAM role permissions for compute instance profiles
- ec2:DescribeInstances
- ec2:DescribeRegions
5.2.5.2. Specifying an existing IAM role
Instead of allowing the installation program to create IAM instance profiles with the default permissions, you can use the install-config.yaml file to specify an existing IAM role for control plane and compute instances.
Prerequisites
- You have an existing install-config.yaml file.
Procedure
Update compute.platform.aws.iamRole with an existing role for the compute machines.

Update controlPlane.platform.aws.iamRole with an existing role for the control plane machines. A combined sketch of both settings is shown after this procedure.

- Save the file and reference it when installing the OpenShift Container Platform cluster.
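A combined sketch of both settings, assuming a pre-existing role named ExampleRole; all other fields of the install-config.yaml file are omitted:

compute:
- hyperthreading: Enabled
  name: worker
  platform:
    aws:
      iamRole: ExampleRole
controlPlane:
  hyperthreading: Enabled
  name: master
  platform:
    aws:
      iamRole: ExampleRole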
5.2.5.3. Using AWS IAM Analyzer to create policy templates
The minimal set of permissions that the control plane and compute instance profiles require depends on how the cluster is configured for its daily operation.
One way to determine which permissions the cluster instances require is to use the AWS Identity and Access Management Access Analyzer (IAM Access Analyzer) to create a policy template:
- A policy template contains the permissions the cluster has used over a specified period of time.
- You can then use the template to create policies with fine-grained permissions.
Procedure
The overall process could be as follows:
- Ensure that CloudTrail is enabled. CloudTrail records all of the actions and events in your AWS account, including the API calls that are required to create a policy template. For more information, see the AWS documentation for working with CloudTrail.
- Create an instance profile for control plane instances and an instance profile for compute instances. Be sure to assign each role a permissive policy, such as PowerUserAccess. For more information, see the AWS documentation for creating instance profile roles.
- Install the cluster in a development environment and configure it as required. Be sure to deploy all of the applications that the cluster will host in a production environment.
- Test the cluster thoroughly. Testing the cluster ensures that all of the required API calls are logged.
- Use the IAM Access Analyzer to create a policy template for each instance profile. For more information, see the AWS documentation for generating policies based on the CloudTrail logs.
- Create and add a fine-grained policy to each instance profile.
- Remove the permissive policy from each instance profile.
- Deploy a production cluster using the existing instance profiles with the new policies.
You can add IAM Conditions to your policy to make it more restrictive and compliant with your organization's security requirements.
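For example, the following statement sketch restricts an action to a single region by using the standard aws:RequestedRegion condition key; the action and region values are illustrative:

{
  "Effect": "Allow",
  "Action": "ec2:RunInstances",
  "Resource": "*",
  "Condition": {
    "StringEquals": {
      "aws:RequestedRegion": "us-east-2"
    }
  }
}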
5.2.6. Supported AWS Marketplace regions
Installing an OpenShift Container Platform cluster using an AWS Marketplace image is available to customers who purchase the offer in North America.
While the offer must be purchased in North America, you can deploy the cluster to any of the following supported partitions:
- Public
- GovCloud
Deploying an OpenShift Container Platform cluster using an AWS Marketplace image is not supported for the AWS secret regions or China regions.
5.2.7. Supported AWS regions
You can deploy an OpenShift Container Platform cluster to the following regions.
Your IAM user must have the permission tag:GetResources in the region us-east-1 to delete the base cluster resources. As part of the AWS API requirement, the OpenShift Container Platform installation program performs various actions in this region.
5.2.7.1. AWS public regions
The following AWS public regions are supported:
- af-south-1 (Cape Town)
- ap-east-1 (Hong Kong)
- ap-northeast-1 (Tokyo)
- ap-northeast-2 (Seoul)
- ap-northeast-3 (Osaka)
- ap-south-1 (Mumbai)
- ap-southeast-1 (Singapore)
- ap-southeast-2 (Sydney)
- ca-central-1 (Central)
- eu-central-1 (Frankfurt)
- eu-north-1 (Stockholm)
- eu-south-1 (Milan)
- eu-west-1 (Ireland)
- eu-west-2 (London)
- eu-west-3 (Paris)
- me-south-1 (Bahrain)
- sa-east-1 (São Paulo)
- us-east-1 (N. Virginia)
- us-east-2 (Ohio)
- us-west-1 (N. California)
- us-west-2 (Oregon)
5.2.7.2. AWS GovCloud regions
The following AWS GovCloud regions are supported:
- us-gov-west-1
- us-gov-east-1
5.2.7.3. AWS C2S Secret region
The us-iso-east-1 region is supported.
5.2.7.4. AWS China regions
The following AWS China regions are supported:
- cn-north-1 (Beijing)
- cn-northwest-1 (Ningxia)
5.2.8. Next steps
Install an OpenShift Container Platform cluster:
- Quickly install a cluster with default options on installer-provisioned infrastructure
- Install a cluster with cloud customizations on installer-provisioned infrastructure
- Install a cluster with network customizations on installer-provisioned infrastructure
- Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates
5.3. Manually creating IAM for AWS
In environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace, you can put the Cloud Credential Operator (CCO) into manual mode before you install the cluster.
5.3.1. Alternatives to storing administrator-level secrets in the kube-system project
The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). You can configure the CCO to suit the security requirements of your organization by setting different values for the credentialsMode parameter in the install-config.yaml file.
If you prefer not to store an administrator-level credential secret in the cluster kube-system project, you can choose one of the following options when installing OpenShift Container Platform:
Use the Amazon Web Services Security Token Service:
You can use the CCO utility (ccoctl) to configure the cluster to use the Amazon Web Services Security Token Service (AWS STS). When the CCO utility is used to configure the cluster for STS, it assigns IAM roles that provide short-term, limited-privilege security credentials to components.

Note: This credentials strategy is supported for only new OpenShift Container Platform clusters and must be configured during installation. You cannot reconfigure an existing cluster that uses a different credentials strategy to use this feature.
Manage cloud credentials manually:
You can set the credentialsMode parameter for the CCO to Manual to manage cloud credentials manually. Using manual mode allows each cluster component to have only the permissions it requires, without storing an administrator-level credential in the cluster. You can also use this mode if your environment does not have connectivity to the cloud provider public IAM endpoint. However, you must manually reconcile permissions with new release images for every upgrade. You must also manually supply credentials for every component that requests them.

Remove the administrator-level credential secret after installing OpenShift Container Platform with mint mode:

If you are using the CCO with the credentialsMode parameter set to Mint, you can remove or rotate the administrator-level credential after installing OpenShift Container Platform. Mint mode is the default configuration for the CCO. This option requires the presence of the administrator-level credential during an installation. The administrator-level credential is used during the installation to mint other credentials with some permissions granted. The original credential secret is not stored in the cluster permanently.
Prior to a non z-stream upgrade, you must reinstate the credential secret with the administrator-level credential. If the credential is not present, the upgrade might be blocked.
- To learn how to rotate or remove the administrator-level credential secret after installing OpenShift Container Platform, see Rotating or removing cloud provider credentials.
- For a detailed description of all available CCO credential modes and their supported platforms, see About the Cloud Credential Operator.
5.3.2. Manually create IAM
The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace.
Procedure
Change to the directory that contains the installation program and create the install-config.yaml file by running the following command:

$ openshift-install create install-config --dir <installation_directory>

where <installation_directory> is the directory in which the installation program creates files.

Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual, as in the sketch that follows.
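A minimal sketch of the beginning of the edited file; the base domain and the remaining fields are placeholders:

apiVersion: v1
baseDomain: cluster1.example.com
credentialsMode: Manual  # this line is added to set the credentialsMode parameter to Manual
compute:
- architecture: amd64
  hyperthreading: Enabled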
Generate the manifests by running the following command from the directory that contains the installation program:

$ openshift-install create manifests --dir <installation_directory>

where <installation_directory> is the directory in which the installation program creates files.

From the directory that contains the installation program, obtain details of the OpenShift Container Platform release image that your openshift-install binary is built to use by running the following command:

$ openshift-install version

Example output

release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64

Locate all CredentialsRequest objects in this release image that target the cloud you are deploying on by running the following command:
$ oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 \
  --credentials-requests \
  --cloud=aws

This command creates a YAML file for each CredentialsRequest object.

Sample CredentialsRequest object
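A minimal sketch of the shape of such an object for AWS; the object name and the requested permissions are illustrative:

apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component_credentials_request>
  namespace: openshift-cloud-credential-operator
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AWSProviderSpec
    statementEntries:
    - effect: Allow
      action:
      - ec2:DescribeInstances
      resource: "*"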
Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object, as in the sketches that follow this step.
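Minimal sketches of the two objects follow; the names, namespaces, and key values are placeholders, and the requested permissions are illustrative.

Sample CredentialsRequest object with secrets

apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component_credentials_request>
  namespace: openshift-cloud-credential-operator
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AWSProviderSpec
    statementEntries:
    - effect: Allow
      action:
      - ec2:DescribeInstances
      resource: "*"
  secretRef:
    name: <component_secret>
    namespace: <component_namespace>

Sample Secret object

apiVersion: v1
kind: Secret
metadata:
  name: <component_secret>
  namespace: <component_namespace>
data:
  aws_access_key_id: <base64_encoded_aws_access_key_id>
  aws_secret_access_key: <base64_encoded_aws_secret_access_key>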
Important: The release image includes CredentialsRequest objects for Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set. You can identify these objects by their use of the release.openshift.io/feature-gate: TechPreviewNoUpgrade annotation.

- If you are not using any of these features, do not create secrets for these objects. Creating secrets for Technology Preview features that you are not using can cause the installation to fail.
- If you are using any of these features, you must create secrets for the corresponding objects.
To find CredentialsRequest objects with the TechPreviewNoUpgrade annotation, run the following command:

$ grep "release.openshift.io/feature-gate" *

Example output

0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-gate: TechPreviewNoUpgrade
From the directory that contains the installation program, proceed with your cluster creation:
$ openshift-install create cluster --dir <installation_directory>

Important: Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state.
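One hedged way to verify that state before an upgrade is to inspect the status conditions, including Upgradeable, of the cloud-credential cluster Operator:

$ oc get clusteroperator cloud-credential -o yaml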
5.3.3. Mint mode
Mint mode is the default Cloud Credential Operator (CCO) credentials mode for OpenShift Container Platform on platforms that support it. In this mode, the CCO uses the provided administrator-level cloud credential to run the cluster. Mint mode is supported for AWS and GCP.
In mint mode, the admin credential is stored in the kube-system namespace and then used by the CCO to process the CredentialsRequest objects in the cluster and create users for each with specific permissions.
The benefits of mint mode include:
- Each cluster component has only the permissions it requires
- Automatic, ongoing reconciliation for cloud credentials, including additional credentials or permissions that might be required for upgrades
One drawback is that mint mode requires admin credential storage in a cluster kube-system secret.
5.3.4. Mint mode with removal or rotation of the administrator-level credential
Currently, this mode is only supported on AWS and GCP.
In this mode, a user installs OpenShift Container Platform with an administrator-level credential just like the normal mint mode. However, this process removes the administrator-level credential secret from the cluster post-installation.
The administrator can have the Cloud Credential Operator make its own request for a read-only credential that allows it to verify whether all CredentialsRequest objects have their required permissions. As a result, the administrator-level credential is not required unless something needs to be changed. After the associated credential is removed, it can be deleted or deactivated on the underlying cloud, if desired.
Prior to a non z-stream upgrade, you must reinstate the credential secret with the administrator-level credential. If the credential is not present, the upgrade might be blocked.
The administrator-level credential is not stored in the cluster permanently.
Following these steps still requires the administrator-level credential in the cluster for brief periods of time. It also requires manually re-instating the secret with administrator-level credentials for each upgrade.
5.3.5. Next steps
Install an OpenShift Container Platform cluster:
- Installing a cluster quickly on AWS with default options on installer-provisioned infrastructure
- Install a cluster with cloud customizations on installer-provisioned infrastructure
- Install a cluster with network customizations on installer-provisioned infrastructure
- Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates
5.4. Installing a cluster quickly on AWS
In OpenShift Container Platform version 4.10, you can install a cluster on Amazon Web Services (AWS) that uses the default configuration options.
5.4.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
You configured an AWS account to host the cluster.
Important: If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.
- If you use a firewall, you configured it to allow the sites that your cluster requires access to.
- If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.
5.4.2. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.10, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
5.4.3. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging is required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name>

Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory.
Note: If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

View the public SSH key:

$ cat <path>/<file_name>.pub

For example, run the following command to view the ~/.ssh/id_ed25519.pub public key:

$ cat ~/.ssh/id_ed25519.pub

Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

Note: On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

If the ssh-agent process is not already running for your local user, start it as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

Note: If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name>

Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
5.4.4. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
- Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files.
Important: The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

Important: Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz

- Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
5.4.5. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
- Configure an account with the cloud platform that hosts your cluster.
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> --log-level=info

Important: Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
Provide values at the prompts:
Optional: Select an SSH key to use to access your cluster machines.
Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

- Select aws as the platform to target.
If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.
Note: The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. The installation program prompts you for the credentials if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file.
- Select the AWS region to deploy the cluster to.
- Select the base domain for the Route 53 service that you configured for your cluster.
- Enter a descriptive name for your cluster.
- Paste the pull secret from the Red Hat OpenShift Cluster Manager.
Note: If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.
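Example output
A successful run ends with lines like the following sketch; the exact messages, console URL, password, and timing vary by release and cluster:
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=<installation_directory>/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.<cluster_name>.<base_domain>
INFO Login to the console with user: "kubeadmin", and password: "<password>"
INFO Time elapsed: 36m22s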
Note: The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.
Important:
- The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
Important: You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.
Note: The elevated permissions provided by the AdministratorAccess policy are required only during installation.
5.4.6. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.10. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 Linux Client entry and save the file.
Unpack the archive:
$ tar xvf <file>
Place the oc binary in a directory that is on your PATH.
To check your PATH, execute the following command:
$ echo $PATH
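For example, one common approach on Linux, assuming /usr/local/bin is in your PATH, is:
$ chmod +x oc
$ sudo mv oc /usr/local/bin/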
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:
C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 MacOSX Client entry and save the file.
- Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.
To check your PATH, open a terminal and execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
5.4.7. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the oc CLI.
Procedure
Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
- 1 For <installation_directory>, specify the path to the directory that you stored the installation files in.
Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
Example output
system:admin
5.4.8. Logging in to the cluster by using the web console
The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.
Prerequisites
- You have access to the installation host.
- You completed a cluster installation and all cluster Operators are available.
Procedure
Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:
$ cat <installation_directory>/auth/kubeadmin-password
Note: Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.
List the OpenShift Container Platform web console route:
$ oc get routes -n openshift-console | grep 'console-openshift'
Note: Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.
Example output
console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None
- Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.
5.4.9. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
5.4.10. Next steps
- Validating an installation.
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.
- If necessary, you can remove cloud provider credentials.
5.5. Installing a cluster on AWS with customizations
In OpenShift Container Platform version 4.10, you can install a customized cluster on infrastructure that the installation program provisions on Amazon Web Services (AWS). To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.
The scope of the OpenShift Container Platform installation configurations is intentionally narrow. It is designed for simplicity and to ensure success. You can complete many more OpenShift Container Platform configuration tasks after an installation completes.
5.5.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
You configured an AWS account to host the cluster.
Important: If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.
- If you use a firewall, you configured it to allow the sites that your cluster requires access to.
- If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials.
5.5.2. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.10, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
5.5.3. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
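For example, with the private key loaded in your SSH agent, you connect as the core user (the node address is a placeholder):
$ ssh core@<node_ip_or_dns_name>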
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging are required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
- 1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note: If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.
Note: On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Example output
Agent pid 31874
Note: If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name> 1
- 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
5.5.4. Obtaining an AWS Marketplace image
If you are deploying an OpenShift Container Platform cluster using an AWS Marketplace image, you must first subscribe through AWS. Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy worker nodes.
Prerequisites
- You have an AWS account to purchase the offer. This account does not have to be the same account that is used to install the cluster.
Procedure
- Complete the OpenShift Container Platform subscription from the AWS Marketplace.
- Record the AMI ID for your specific region. As part of the installation process, you must update the install-config.yaml file with this value before deploying the cluster.
Sample install-config.yaml file with AWS Marketplace worker nodes
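A minimal sketch of such a file follows; the AMI ID, region, cluster name, and instance type are placeholders that you replace with the values from your Marketplace subscription and environment:
apiVersion: v1
baseDomain: example.com
compute:
- hyperthreading: Enabled
  name: worker
  platform:
    aws:
      amiID: ami-0123456789abcdef0
      type: m5.4xlarge
  replicas: 3
metadata:
  name: test-cluster
platform:
  aws:
    region: us-east-2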
5.5.5. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
- Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files.
Important: The installation program creates several files on the computer that you use to install your cluster. You must keep both the installation program and the files that the installation program creates after you finish installing the cluster. Both are required to delete the cluster.
Important: Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz
- Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
5.5.6. Creating the installation configuration file
You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS).
Prerequisites
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Create the install-config.yaml file.
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir <installation_directory> 1
- 1 For <installation_directory>, specify the directory name to store the files that the installation program creates.
Important: Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
- Select AWS as the platform to target.
- If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.
- Select the AWS region to deploy the cluster to.
- Select the base domain for the Route 53 service that you configured for your cluster.
- Enter a descriptive name for your cluster.
- Paste the pull secret from the Red Hat OpenShift Cluster Manager.
- Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.
- Back up the install-config.yaml file so that you can use it to install multiple clusters.
Important: The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
5.5.6.1. Installation configuration parameters
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.
After installation, you cannot modify these parameters in the install-config.yaml file.
5.5.6.1.1. Required configuration parameters
Required installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| apiVersion | The API version for the install-config.yaml content. The current version is v1. The installation program may also support older API versions. | String |
| baseDomain | The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values. | A fully-qualified domain or subdomain name, such as example.com. |
| metadata | Kubernetes resource ObjectMeta, from which only the name parameter is consumed. | Object |
| metadata.name | The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}. | String of lowercase letters, hyphens (-), and periods (.), such as dev. |
| platform | The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}. | Object |
| pullSecret | Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. | A pull secret in JSON format, for example: {"auths":{"cloud.openshift.com":{"auth":"b3Blb=","email":"you@example.com"}}} |
5.5.6.1.2. Network configuration parameters
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.
Only IPv4 addresses are supported.
| Parameter | Description | Values |
|---|---|---|
| networking | The configuration for the cluster network. | Object. Note: You cannot modify parameters specified by the networking object after installation. |
| networking.networkType | The cluster network provider Container Network Interface (CNI) plugin to install. | Either OpenShiftSDN or OVNKubernetes. The default value is OpenShiftSDN. |
| networking.clusterNetwork | The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 |
| networking.clusterNetwork.cidr | Required if you use networking.clusterNetwork. An IPv4 network. | An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32. |
| networking.clusterNetwork.hostPrefix | The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. | A subnet prefix. The default value is 23. |
| networking.serviceNetwork | The IP address block for services. The default value is 172.30.0.0/16. The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. | An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 |
| networking.machineNetwork | The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 |
| networking.machineNetwork.cidr | Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. | An IP network block in CIDR notation. For example, 10.0.0.0/16. Note: Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. |
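Assembled from the defaults listed in the table above, a complete networking stanza looks like this sketch:
networking:
  networkType: OpenShiftSDN
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 10.0.0.0/16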
5.5.6.1.3. Optional configuration parameters
Optional installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| additionalTrustBundle | A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. | String |
| cgroupsV2 | Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. | true |
| compute | The configuration for the machines that comprise the compute nodes. | Array of MachinePool objects. |
| compute.architecture | Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64. | String |
| compute.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| compute.name | Required if you use compute. The name of the machine pool. | worker |
| compute.platform | Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. | aws, azure, gcp, openstack, ovirt, vsphere, or {} |
| compute.replicas | The number of compute machines, which are also known as worker machines, to provision. | A positive integer greater than or equal to 2. The default value is 3. |
| controlPlane | The configuration for the machines that comprise the control plane. | Array of MachinePool objects. |
| controlPlane.architecture | Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64. | String |
| controlPlane.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| controlPlane.name | Required if you use controlPlane. The name of the machine pool. | master |
| controlPlane.platform | Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. | aws, azure, gcp, openstack, ovirt, vsphere, or {} |
| controlPlane.replicas | The number of control plane machines to provision. | The only supported value is 3, which is the default value. |
| credentialsMode | The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note: Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note: If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint, Passthrough, or Manual. | Mint, Passthrough, Manual, or an empty string (""). |
| fips | Enable or disable FIPS mode. The default is false (not enabled). Important: To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note: If you are using Azure File storage, you cannot enable FIPS mode. | false or true |
| imageContentSources | Sources and repositories for the release-image content. | Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table. |
| imageContentSources.source | Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications. | String |
| imageContentSources.mirrors | Specify one or more repositories that may also contain the same images. | Array of strings |
| publish | How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. | Internal or External. The default value is External. |
| sshKey | The SSH key or keys to authenticate access to your cluster machines. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. | One or more keys. For example: sshKey: <key1> <key2> <key3> |
5.5.6.1.4. Optional AWS configuration parameters
Optional AWS configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| compute.platform.aws.amiID | The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. |
| compute.platform.aws.iamRole | A pre-existing AWS IAM role applied to the compute machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. | The name of a valid AWS IAM role. |
| compute.platform.aws.rootVolume.iops | The Input/Output Operations Per Second (IOPS) that is reserved for the root volume. | Integer, for example 4000. |
| compute.platform.aws.rootVolume.size | The size in GiB of the root volume. | Integer, for example 500. |
| compute.platform.aws.rootVolume.type | The type of the root volume. | Valid AWS EBS volume type, such as io1. |
| compute.platform.aws.rootVolume.kmsKeyARN | The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt OS volumes of worker nodes with a specific KMS key. | Valid key ID or the key ARN. |
| compute.platform.aws.type | The EC2 instance type for the compute machines. | Valid AWS instance type, such as m4.2xlarge. |
| compute.platform.aws.zones | The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone. | A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence. |
| compute.aws.region | The AWS region that the installation program creates compute resources in. | Any valid AWS region, such as us-east-1. Important: When running on ARM based AWS instances, ensure that you enter a region where AWS Graviton processors are available. See Global availability map in the AWS documentation. |
| controlPlane.platform.aws.amiID | The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. |
| controlPlane.platform.aws.iamRole | A pre-existing AWS IAM role applied to the control plane machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. | The name of a valid AWS IAM role. |
| controlPlane.platform.aws.rootVolume.kmsKeyARN | The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt OS volumes of control plane nodes with a specific KMS key. | Valid key ID or the key ARN. |
| controlPlane.platform.aws.type | The EC2 instance type for the control plane machines. | Valid AWS instance type, such as m6i.xlarge. |
| controlPlane.platform.aws.zones | The availability zones where the installation program creates machines for the control plane machine pool. | A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence. |
| controlPlane.aws.region | The AWS region that the installation program creates control plane resources in. | Valid AWS region, such as us-east-1. |
| platform.aws.amiID | The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. |
| platform.aws.hostedZone | An existing Route 53 private hosted zone for the cluster. You can only use a pre-existing hosted zone when also supplying your own VPC. The hosted zone must already be associated with the user-provided VPC before installation. Also, the domain of the hosted zone must be the cluster domain or a parent of the cluster domain. If undefined, the installation program creates a new hosted zone. | String, for example Z3URY6TWQ91KVV. |
| platform.aws.serviceEndpoints.name | The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services. | Valid AWS service endpoint name. |
| platform.aws.serviceEndpoints.url | The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate. | Valid AWS service endpoint URL. |
| platform.aws.userTags | A map of keys and values that the installation program adds as tags to all resources that it creates. | Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation. |
| platform.aws.subnets | If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. | Valid subnet IDs. |
5.5.6.2. Minimum resource requirements for cluster installation
Each cluster machine must meet the following minimum requirements:
| Machine | Operating System | vCPU [1] | Virtual RAM | Storage | IOPS [2] |
|---|---|---|---|---|---|
| Bootstrap | RHCOS | 4 | 16 GB | 100 GB | 300 |
| Control plane | RHCOS | 4 | 16 GB | 100 GB | 300 |
| Compute | RHCOS, RHEL 8.4, or RHEL 8.5 [3] | 2 | 8 GB | 100 GB | 300 |
- One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core × cores) × sockets = vCPUs. For example, an instance with two threads per core, two cores, and one socket provides (2 × 2) × 1 = 4 vCPUs.
- OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.
- As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later.
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform.
5.5.6.3. Tested instance types for AWS
The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform.
Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation".
Example 5.16. Machine types based on 64-bit x86 architecture
- c4.*
- c5.*
- c5a.*
- i3.*
- m4.*
- m5.*
- m5a.*
- m6i.*
- r4.*
- r5.*
- r5a.*
- r6i.*
- t3.*
- t3a.*
5.5.6.4. Tested instance types for AWS on 64-bit ARM infrastructures
The following Amazon Web Services (AWS) ARM64 instance types have been tested with OpenShift Container Platform.
Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation".
Example 5.17. Machine types based on 64-bit ARM architecture
- c6g.*
- m6g.*
5.5.6.5. Sample customized install-config.yaml file for AWS
You can customize the installation configuration file (install-config.yaml) to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
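The following sketch shows the general shape of such a file; the regions, zones, instance types, AMI ID, endpoint URL, and tags are illustrative placeholders, and the numbered callouts map to the notes that follow:
apiVersion: v1
baseDomain: example.com 1
credentialsMode: Mint 2
controlPlane: 3 4
  hyperthreading: Enabled 5
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
      - us-west-2b
      rootVolume:
        iops: 4000
        size: 500
        type: io1 6
      type: m6i.xlarge
  replicas: 3
compute: 7
- hyperthreading: Enabled 8
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 9
      type: c5.4xlarge
      zones:
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster 10
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 11
    userTags:
      adminContact: jdoe
      costCenter: 7536
    amiID: ami-0123456789abcdef0 12
    serviceEndpoints: 13
      - name: ec2
        url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
fips: false 14
sshKey: ssh-ed25519 AAAA... 15
pullSecret: '{"auths": ...}' 16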
- 1 10 11 16 Required. The installation program prompts you for this value.
- 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.
- 3 7 If you do not provide these parameters and values, the installation program provides the default value.
- 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
- 5 8 Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.
- 6 9 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.
- 12 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.
- 13 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate.
- 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
Important: To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.
- 15 You can optionally provide the sshKey value that you use to access the machines in your cluster.
Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
5.5.6.6. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Prerequisites
- You have an existing install-config.yaml file.
- You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.
Note: The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
- You have added the ec2.<region>.amazonaws.com, elasticloadbalancing.<region>.amazonaws.com, and s3.<region>.amazonaws.com endpoints to your VPC endpoint. These endpoints are required to complete requests from the nodes to the AWS EC2 API. Because the proxy works on the container level, not the node level, you must route these requests to the AWS EC2 API through the AWS private network. Adding the public IP address of the EC2 API to your allowlist in your proxy server is not sufficient.
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:
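A proxy stanza generally has the following shape (host names, ports, credentials, and the certificate are placeholders; the numbered callouts map to the notes that follow):
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<password>@<ip>:<port> 1
  httpsProxy: https://<username>:<password>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----
...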
- 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
- 2 A proxy URL to use for creating HTTPS connections outside the cluster.
- 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
- 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
Note: The installation program does not support the proxy readinessEndpoints field.
Note: If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:
$ ./openshift-install wait-for install-complete --log-level debug
- Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
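After installation, you can inspect the resulting object with a standard query:
$ oc get proxy cluster -o yaml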
5.5.7. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
- Configure an account with the cloud platform that hosts your cluster.
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \
    --log-level=info
Note: If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.
Example output
Note: The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.
Important:
- The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
Important: You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.
Note: The elevated permissions provided by the AdministratorAccess policy are required only during installation.
5.5.8. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.10. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 Linux Client entry and save the file.
Unpack the archive:
$ tar xvf <file>
Place the oc binary in a directory that is on your PATH.
To check your PATH, execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:
C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 MacOSX Client entry and save the file.
- Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.
To check your PATH, open a terminal and execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
5.5.9. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the oc CLI.
Procedure
Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
- 1 For <installation_directory>, specify the path to the directory that you stored the installation files in.
Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
Example output
system:admin
5.5.10. Logging in to the cluster by using the web console
The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.
Prerequisites
- You have access to the installation host.
- You completed a cluster installation and all cluster Operators are available.
Procedure
Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:
$ cat <installation_directory>/auth/kubeadmin-password
Note: Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.
List the OpenShift Container Platform web console route:
$ oc get routes -n openshift-console | grep 'console-openshift'
Note: Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.
Example output
console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None
- Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.
5.5.11. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
5.5.12. Next steps
- Validating an installation.
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.
- If necessary, you can remove cloud provider credentials.
5.6. Installing a cluster on AWS with network customizations
In OpenShift Container Platform version 4.10, you can install a cluster on Amazon Web Services (AWS) with customized network configuration options. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations.
You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster.
5.6.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
You configured an AWS account to host the cluster.
Important: If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.
- If you use a firewall, you configured it to allow the sites that your cluster requires access to.
- If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials.
5.6.2. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.10, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
5.6.3. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging are required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
- 1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note: If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.
Note: On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Example output
Agent pid 31874
Note: If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the
ssh-agent:ssh-add <path>/<file_name>
$ ssh-add <path>/<file_name>1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specify the path and file name for your SSH private key, such as
~/.ssh/id_ed25519
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
5.6.4. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
- Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider.
- Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files.

  Important: The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

  Important: Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

- Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

  $ tar -xvf openshift-install-linux.tar.gz

- Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
5.6.5. Network configuration phases
There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration.
- Phase 1

  You can customize the following network-related fields in the install-config.yaml file before you create the manifest files:

  - networking.networkType
  - networking.clusterNetwork
  - networking.serviceNetwork
  - networking.machineNetwork

  For more information on these fields, refer to Installation configuration parameters.

  Note: Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.

  Important: The CIDR range 172.17.0.0/16 is reserved by libVirt. You cannot use this range or any range that overlaps with this range for any networks in your cluster.

- Phase 2

  After creating the manifest files by running openshift-install create manifests, you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify advanced network configuration.
You cannot override the values specified in phase 1 in the install-config.yaml file during phase 2. However, you can further customize the cluster network provider during phase 2.
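For example, a phase 1 customization in the install-config.yaml file might look like the following sketch; the CIDR values shown are the documented defaults, not recommendations for your network:

networking:
  networkType: OpenShiftSDN
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 10.0.0.0/16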
5.6.6. Creating the installation configuration file
You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS).
Prerequisites
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Create the install-config.yaml file.

- Change to the directory that contains the installation program and run the following command:

  $ ./openshift-install create install-config --dir <installation_directory>

  For <installation_directory>, specify the directory name to store the files that the installation program creates.

  Important: Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
- At the prompts, provide the configuration details for your cloud:

  - Optional: Select an SSH key to use to access your cluster machines.

    Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

  - Select AWS as the platform to target.
  - If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.
  - Select the AWS region to deploy the cluster to.
  - Select the base domain for the Route 53 service that you configured for your cluster.
  - Enter a descriptive name for your cluster.
  - Paste the pull secret from the Red Hat OpenShift Cluster Manager.

- Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.

- Back up the install-config.yaml file so that you can use it to install multiple clusters.

  Important: The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
5.6.6.1. Installation configuration parameters
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.
After installation, you cannot modify these parameters in the install-config.yaml file.
5.6.6.1.1. Required configuration parameters
Required installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| apiVersion | The API version for the install-config.yaml content. The current version is v1. The installation program may also support older API versions. | String |
| baseDomain | The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. | A fully-qualified domain or subdomain name, such as example.com. |
| metadata | Kubernetes resource ObjectMeta, from which only the name parameter is consumed. | Object |
| metadata.name | The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}. | String of lowercase letters, hyphens (-), and periods (.), such as dev. |
| platform | The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}. | Object |
| pullSecret | Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. | A pull secret in JSON format, for example: {"auths": {"cloud.openshift.com": {"auth": "b3Blb=", "email": "you@example.com"}}} |
5.6.6.1.2. Network configuration parameters
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.
Only IPv4 addresses are supported.
| Parameter | Description | Values |
|---|---|---|
| networking | The configuration for the cluster network. Note: You cannot modify parameters specified by the networking object after installation. | Object |
| networking.networkType | The cluster network provider Container Network Interface (CNI) plugin to install. | Either OpenShiftSDN or OVNKubernetes. The default value is OpenShiftSDN. |
| networking.clusterNetwork | The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 |
| networking.clusterNetwork.cidr | Required if you use networking.clusterNetwork. An IP address block. An IPv4 network. | An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32. |
| networking.clusterNetwork.hostPrefix | The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. | A subnet prefix. The default value is 23. |
| networking.serviceNetwork | The IP address block for services. The default value is 172.30.0.0/16. The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. | An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 |
| networking.machineNetwork | The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 |
| networking.machineNetwork.cidr | Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. | An IP network block in CIDR notation. For example, 10.0.0.0/16. Note: Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. |
5.6.6.1.3. Optional configuration parameters
Optional installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| additionalTrustBundle | A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. | String |
| cgroupsV2 | Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. | true |
| compute | The configuration for the machines that comprise the compute nodes. | Array of MachinePool objects. |
| compute.architecture | Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64. | String |
| compute.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| compute.name | Required if you use compute. The name of the machine pool. | worker |
| compute.platform | Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. | aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {} |
| compute.replicas | The number of compute machines, which are also known as worker machines, to provision. | A positive integer greater than or equal to 2. The default value is 3. |
| controlPlane | The configuration for the machines that comprise the control plane. | Array of MachinePool objects. |
| controlPlane.architecture | Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64. | String |
| controlPlane.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| controlPlane.name | Required if you use controlPlane. The name of the machine pool. | master |
| controlPlane.platform | Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. | aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {} |
| controlPlane.replicas | The number of control plane machines to provision. | The only supported value is 3, which is the default value. |
| credentialsMode | The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note: Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note: If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint, Passthrough, or Manual. | Mint, Passthrough, Manual, or an empty string (""). |
| fips | Enable or disable FIPS mode. The default is false (disabled). Important: To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note: If you are using Azure File storage, you cannot enable FIPS mode. | false or true |
| imageContentSources | Sources and repositories for the release-image content. | Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table. |
| imageContentSources.source | Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications. | String |
| imageContentSources.mirrors | Specify one or more repositories that may also contain the same images. | Array of strings |
| publish | How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. | Internal or External. The default value is External. |
| sshKey | The SSH key or keys to authenticate access to your cluster machines. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. | One or more keys. For example: sshKey: <key1> <key2> <key3> |
5.6.6.1.4. Optional AWS configuration parameters
Optional AWS configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| compute.platform.aws.amiID | The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. |
| compute.platform.aws.iamRole | A pre-existing AWS IAM role applied to the compute machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. | The name of a valid AWS IAM role. |
| compute.platform.aws.rootVolume.iops | The Input/Output Operations Per Second (IOPS) that is reserved for the root volume. | Integer, for example 4000. |
| compute.platform.aws.rootVolume.size | The size in GiB of the root volume. | Integer, for example 500. |
| compute.platform.aws.rootVolume.type | The type of the root volume. | Valid AWS EBS volume type, such as io1. |
| compute.platform.aws.rootVolume.kmsKeyARN | The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt OS volumes of worker nodes with a specific KMS key. | Valid key ID or the key ARN. |
| compute.platform.aws.type | The EC2 instance type for the compute machines. | Valid AWS instance type, such as m6i.xlarge. See the Tested instance types tables in this section. |
| compute.platform.aws.zones | The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone. | A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence. |
| compute.aws.region | The AWS region that the installation program creates compute resources in. | Any valid AWS region, such as us-east-1. Important: When running on ARM based AWS instances, ensure that you enter a region where AWS Graviton processors are available. See Global availability map in the AWS documentation. |
| controlPlane.platform.aws.amiID | The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. |
| controlPlane.platform.aws.iamRole | A pre-existing AWS IAM role applied to the control plane machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. | The name of a valid AWS IAM role. |
| controlPlane.platform.aws.rootVolume.kmsKeyARN | The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt OS volumes of control plane nodes with a specific KMS key. | Valid key ID and the key ARN. |
| controlPlane.platform.aws.type | The EC2 instance type for the control plane machines. | Valid AWS instance type, such as m6i.xlarge. |
| controlPlane.platform.aws.zones | The availability zones where the installation program creates machines for the control plane machine pool. | A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence. |
| controlPlane.aws.region | The AWS region that the installation program creates control plane resources in. | Valid AWS region, such as us-east-1. |
| platform.aws.amiID | The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. |
| platform.aws.hostedZone | An existing Route 53 private hosted zone for the cluster. You can only use a pre-existing hosted zone when also supplying your own VPC. The hosted zone must already be associated with the user-provided VPC before installation. Also, the domain of the hosted zone must be the cluster domain or a parent of the cluster domain. If undefined, the installation program creates a new hosted zone. | String, for example Z3URY6TWQ91KVV. |
| platform.aws.serviceEndpoints.name | The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services. | Valid AWS service endpoint name. |
| platform.aws.serviceEndpoints.url | The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate. | Valid AWS service endpoint URL. |
| platform.aws.userTags | A map of keys and values that the installation program adds as tags to all resources that it creates. | Any valid YAML map, such as key value pairs in the <key>: <value> format. |
| platform.aws.subnets | If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone. | Valid subnet IDs. |
5.6.6.2. Minimum resource requirements for cluster installation
Each cluster machine must meet the following minimum requirements:
| Machine | Operating System | vCPU [1] | Virtual RAM | Storage | IOPS [2] |
|---|---|---|---|---|---|
| Bootstrap | RHCOS | 4 | 16 GB | 100 GB | 300 |
| Control plane | RHCOS | 4 | 16 GB | 100 GB | 300 |
| Compute | RHCOS, RHEL 8.4, or RHEL 8.5 [3] | 2 | 8 GB | 100 GB | 300 |
1. One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core × cores) × sockets = vCPUs.
2. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes, which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.
3. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later.
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform.
5.6.6.3. Tested instance types for AWS
The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform.
Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation".
Example 5.18. Machine types based on 64-bit x86 architecture
- c4.*
- c5.*
- c5a.*
- i3.*
- m4.*
- m5.*
- m5a.*
- m6i.*
- r4.*
- r5.*
- r5a.*
- r6i.*
- t3.*
- t3a.*
5.6.6.4. Tested instance types for AWS on 64-bit ARM infrastructures
The following Amazon Web Services (AWS) ARM64 instance types have been tested with OpenShift Container Platform.
Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation".
Example 5.19. Machine types based on 64-bit ARM architecture
- c6g.*
- m6g.*
5.6.6.5. Sample customized install-config.yaml file for AWS
You can customize the installation configuration file (install-config.yaml) to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
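A representative example of such a customized file follows. It is a sketch: values such as the base domain, region, instance types, AMI ID, and endpoint URL are placeholders, and the numbered callouts correspond to the notes after the file:

apiVersion: v1
baseDomain: example.com 1
credentialsMode: Mint 2
controlPlane: 3 4
  hyperthreading: Enabled 5
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
      - us-west-2b
      rootVolume:
        iops: 4000
        size: 500
        type: io1 6
      type: m6i.xlarge
  replicas: 3
compute: 7
- hyperthreading: Enabled 8
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 9
      type: c5.4xlarge
      zones:
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster 10
networking: 11
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 12
    userTags:
      adminContact: jdoe
      costCenter: 7536
    amiID: ami-96c6f8f7 13
    serviceEndpoints: 14
    - name: ec2
      url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
fips: false 15
sshKey: ssh-ed25519 AAAA... 16
pullSecret: '{"auths": ...}' 17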
- 1 10 12 17: Required. The installation program prompts you for this value.
- 2: Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.
- 3 7 11: If you do not provide these parameters and values, the installation program provides the default value.
- 4: The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
- 5 8: Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.

  Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.
- 6 9: To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.
- 13: The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.
- 14: The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate.
- 15: Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

  Important: To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.
- 16: You can optionally provide the sshKey value that you use to access the machines in your cluster.

  Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
5.6.6.6. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Prerequisites
- You have an existing install-config.yaml file.
- You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.

  Note: The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

  For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
- You have added the ec2.<region>.amazonaws.com, elasticloadbalancing.<region>.amazonaws.com, and s3.<region>.amazonaws.com endpoints to your VPC endpoint. These endpoints are required to complete requests from the nodes to the AWS EC2 API. Because the proxy works on the container level, not the node level, you must route these requests to the AWS EC2 API through the AWS private network. Adding the public IP address of the EC2 API to your allowlist in your proxy server is not sufficient.
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:
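The proxy stanza takes the following shape; the user names, passwords, hosts, ports, and certificate shown are placeholders, and the numbered callouts correspond to the notes that follow:

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----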
- 1: A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
- 2: A proxy URL to use for creating HTTPS connections outside the cluster.
- 3: A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
- 4: If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
Note: The installation program does not support the proxy readinessEndpoints field.

Note: If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:

$ ./openshift-install wait-for install-complete --log-level debug

- Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
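After installation, you can verify the resulting configuration with a standard oc query; this check is a convenience and is not part of the documented procedure:

$ oc get proxy/cluster -o yaml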
5.6.7. Cluster Network Operator configuration
The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster. The CR specifies the fields for the Network API in the operator.openshift.io API group.
The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed:
- clusterNetwork: IP address pools from which pod IP addresses are allocated.
- serviceNetwork: IP address pool for services.
- defaultNetwork.type: Cluster network provider, such as OpenShift SDN or OVN-Kubernetes.
You can specify the cluster network provider configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster.
5.6.7.1. Cluster Network Operator configuration object
The fields for the Cluster Network Operator (CNO) are described in the following table:
| Field | Type | Description |
|---|---|---|
| metadata.name | string | The name of the CNO object. This name is always cluster. |
| spec.clusterNetwork | array | A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23. You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. |
| spec.serviceNetwork | array | A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes Container Network Interface (CNI) network providers support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14. You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. |
| spec.defaultNetwork | object | Configures the Container Network Interface (CNI) cluster network provider for the cluster network. |
| spec.kubeProxyConfig | object | The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network provider, the kube-proxy configuration has no effect. |
defaultNetwork object configuration
The values for the defaultNetwork object are defined in the following table:
| Field | Type | Description |
|---|---|---|
| type | string | Either OpenShiftSDN or OVNKubernetes. The cluster network provider is selected during installation. This value cannot be changed after cluster installation. Note: OpenShift Container Platform uses the OpenShift SDN Container Network Interface (CNI) cluster network provider by default. |
| openshiftSDNConfig | object | This object is only valid for the OpenShift SDN cluster network provider. |
| ovnKubernetesConfig | object | This object is only valid for the OVN-Kubernetes cluster network provider. |
Configuration for the OpenShift SDN CNI cluster network provider
The following table describes the configuration fields for the OpenShift SDN Container Network Interface (CNI) cluster network provider.
| Field | Type | Description |
|---|---|---|
| mode | string | Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy. The values Multitenant and Subnet are available for backwards compatibility, but are not recommended. This value cannot be changed after cluster installation. |
| mtu | integer | The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. This value cannot be changed after cluster installation. |
| vxlanPort | integer | The port to use for all VXLAN packets. The default value is 4789. This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999. |
Example OpenShift SDN configuration
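A minimal sketch of an openshiftSDNConfig stanza, using the default values described in the preceding table:

defaultNetwork:
  type: OpenShiftSDN
  openshiftSDNConfig:
    mode: NetworkPolicy
    mtu: 1450
    vxlanPort: 4789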
Configuration for the OVN-Kubernetes CNI cluster network provider
The following table describes the configuration fields for the OVN-Kubernetes CNI cluster network provider.
| Field | Type | Description |
|---|---|---|
| mtu | integer | The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. This value cannot be changed after cluster installation. |
| genevePort | integer | The port to use for all Geneve packets. The default value is 6081. This value cannot be changed after cluster installation. |
| ipsecConfig | object | Specify an empty object to enable IPsec encryption. This value cannot be changed after cluster installation. |
| policyAuditConfig | object | Specify a configuration object for customizing network policy audit logging. If unset, the default audit log settings are used. |
| gatewayConfig | object | Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note: While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. |
policyAuditConfig object configuration

| Field | Type | Description |
|---|---|---|
| rateLimit | integer | The maximum number of messages to generate every second per node. The default value is 20 messages per second. |
| maxFileSize | integer | The maximum size for the audit log in bytes. The default value is 50000000, or 50 MB. |
| destination | string | One of the following additional audit log targets: libc (the libc syslog() function of the journald process on the host), udp:<host>:<port> (a syslog server), unix:<file> (a Unix Domain Socket file), or null (do not send the audit logs to any additional target). |
| syslogFacility | string | The syslog facility, such as kern, as defined by RFC5424. The default value is local0. |
gatewayConfig object configuration

| Field | Type | Description |
|---|---|---|
| routingViaHost | boolean | Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false. This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true, you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. |
Example OVN-Kubernetes configuration with IPSec enabled
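A minimal sketch of an ovnKubernetesConfig stanza with IPsec enabled through an empty ipsecConfig object; the mtu and genevePort values shown are the documented defaults:

defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    mtu: 1400
    genevePort: 6081
    ipsecConfig: {}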
kubeProxyConfig object configuration
The values for the kubeProxyConfig object are defined in the following table:
| Field | Type | Description |
|---|---|---|
| iptablesSyncPeriod | string | The refresh period for iptables rules. The default value is 30s. Valid suffixes include s, m, and h. Note: Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. |
| proxyArguments.iptables-min-sync-period | array | The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s, m, and h. The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s |
5.6.8. Specifying advanced network configuration
You can use advanced network configuration for your cluster network provider to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster.
Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported.
Prerequisites
- You have created the install-config.yaml file and completed any modifications to it.
Procedure
- Change to the directory that contains the installation program and create the manifests:

  $ ./openshift-install create manifests --dir <installation_directory>

  <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster.
- Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:

  apiVersion: operator.openshift.io/v1
  kind: Network
  metadata:
    name: cluster
  spec:

- Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following examples:

  Specify a different VXLAN port for the OpenShift SDN network provider
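  A minimal example; the port value 4800 is illustrative:

  apiVersion: operator.openshift.io/v1
  kind: Network
  metadata:
    name: cluster
  spec:
    defaultNetwork:
      openshiftSDNConfig:
        vxlanPort: 4800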
  Enable IPsec for the OVN-Kubernetes network provider
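  A minimal example; enabling IPsec requires only an empty ipsecConfig object:

  apiVersion: operator.openshift.io/v1
  kind: Network
  metadata:
    name: cluster
  spec:
    defaultNetwork:
      ovnKubernetesConfig:
        ipsecConfig: {}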
Optional: Back up the
manifests/cluster-network-03-config.ymlfile. The installation program consumes themanifests/directory when you create the Ignition config files.
For more information on using a Network Load Balancer (NLB) on AWS, see Configuring Ingress cluster traffic on AWS using a Network Load Balancer.
5.6.9. Configuring an Ingress Controller Network Load Balancer on a new AWS cluster
You can create an Ingress Controller backed by an AWS Network Load Balancer (NLB) on a new cluster.
Prerequisites
- Create the install-config.yaml file and complete any modifications to it.
Procedure
Create an Ingress Controller backed by an AWS NLB on a new cluster.
- Change to the directory that contains the installation program and create the manifests:

  $ ./openshift-install create manifests --dir <installation_directory>

  For <installation_directory>, specify the name of the directory that contains the install-config.yaml file for your cluster.
- Create a file that is named cluster-ingress-default-ingresscontroller.yaml in the <installation_directory>/manifests/ directory:

  $ touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml

  For <installation_directory>, specify the directory name that contains the manifests/ directory for your cluster.
- After creating the file, several network configuration files are in the manifests/ directory, as shown:

  $ ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml

  Example output:

  cluster-ingress-default-ingresscontroller.yaml

- Open the cluster-ingress-default-ingresscontroller.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration you want:
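  A sketch of such a CR for an NLB-backed default Ingress Controller, based on the IngressController API; verify the fields against your cluster version before use:

  apiVersion: operator.openshift.io/v1
  kind: IngressController
  metadata:
    name: default
    namespace: openshift-ingress-operator
  spec:
    endpointPublishingStrategy:
      loadBalancer:
        scope: External
        providerParameters:
          type: AWS
          aws:
            type: NLB
      type: LoadBalancerService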
Save the
cluster-ingress-default-ingresscontroller.yamlfile and quit the text editor. -
Optional: Back up the
manifests/cluster-ingress-default-ingresscontroller.yamlfile. The installation program deletes themanifests/directory when creating the cluster.
5.6.10. Configuring hybrid networking with OVN-Kubernetes
You can configure your cluster to use hybrid networking with OVN-Kubernetes. This allows a hybrid cluster that supports different node networking configurations. For example, this is necessary to run both Linux and Windows nodes in a cluster.
You must configure hybrid networking with OVN-Kubernetes during the installation of your cluster. You cannot switch to hybrid networking after the installation process.
Prerequisites
- You defined OVNKubernetes for the networking.networkType parameter in the install-config.yaml file. See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information.
Procedure
- Change to the directory that contains the installation program and create the manifests:

  $ ./openshift-install create manifests --dir <installation_directory>

  where:

  <installation_directory>: Specifies the name of the directory that contains the install-config.yaml file for your cluster.
- Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:

  apiVersion: operator.openshift.io/v1
  kind: Network
  metadata:
    name: cluster
  spec:

  where:

  <installation_directory>: Specifies the directory name that contains the manifests/ directory for your cluster.
- Open the cluster-network-03-config.yml file in an editor and configure OVN-Kubernetes with hybrid networking, such as in the following example:

  Specify a hybrid networking configuration
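  A sketch of a hybrid networking configuration; the CIDR and port values are illustrative, and the numbered callouts correspond to the notes that follow:

  apiVersion: operator.openshift.io/v1
  kind: Network
  metadata:
    name: cluster
  spec:
    defaultNetwork:
      ovnKubernetesConfig:
        hybridOverlayConfig:
          hybridClusterNetwork: 1
          - cidr: 10.132.0.0/14
            hostPrefix: 23
          hybridOverlayVXLANPort: 9898 2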
- Specify the CIDR configuration used for nodes on the additional overlay network. The
hybridClusterNetworkCIDR cannot overlap with theclusterNetworkCIDR. - 2
- Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default
4789port. For more information on this requirement, see the Microsoft documentation on Pod-to-pod connectivity between hosts is broken.
Note: Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 is not supported on clusters with a custom hybridOverlayVXLANPort value because this Windows Server version does not support selecting a custom VXLAN port.
Save the
cluster-network-03-config.ymlfile and quit the text editor. -
Optional: Back up the
manifests/cluster-network-03-config.ymlfile. The installation program deletes themanifests/directory when creating the cluster.
For more information on using Linux and Windows nodes in the same cluster, see Understanding Windows container workloads.
5.6.11. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
- Configure an account with the cloud platform that hosts your cluster.
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:

$ ./openshift-install create cluster --dir <installation_directory> \
    --log-level=info

For <installation_directory>, specify the location of your customized ./install-config.yaml file. To view different installation details, specify warn, debug, or error instead of info.

Note: If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

Note: The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

Important:
- The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
Important: You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
- Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.

  Note: The elevated permissions provided by the AdministratorAccess policy are required only during installation.
5.6.12. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.10. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 Linux Client entry and save the file.
- Unpack the archive:

  $ tar xvf <file>

- Place the oc binary in a directory that is on your PATH. To check your PATH, execute the following command:

  $ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
- Move the oc binary to a directory that is on your PATH. To check your PATH, open the command prompt and execute the following command:

  C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 MacOSX Client entry and save the file.
- Unpack and unzip the archive.
- Move the oc binary to a directory on your PATH. To check your PATH, open a terminal and execute the following command:

  $ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
5.6.13. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the oc CLI.
Procedure
- Export the kubeadmin credentials:

  $ export KUBECONFIG=<installation_directory>/auth/kubeconfig

  For <installation_directory>, specify the path to the directory that you stored the installation files in.
- Verify you can run oc commands successfully using the exported configuration:

  $ oc whoami

  Example output:

  system:admin
5.6.14. Logging in to the cluster by using the web console
The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.
Prerequisites
- You have access to the installation host.
- You completed a cluster installation and all cluster Operators are available.
Procedure
- Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

  $ cat <installation_directory>/auth/kubeadmin-password

  Note: Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

- List the OpenShift Container Platform web console route:

  $ oc get routes -n openshift-console | grep 'console-openshift'

  Note: Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

  Example output:

  console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None
Navigate to the route detailed in the output of the preceding command in a web browser and log in as the
kubeadminuser.
5.6.15. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
5.6.16. Next steps
- Validating an installation.
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.
- If necessary, you can remove cloud provider credentials.
5.7. Installing a cluster on AWS in a restricted network
In OpenShift Container Platform version 4.10, you can install a cluster on Amazon Web Services (AWS) in a restricted network by creating an internal mirror of the installation release content on an existing Amazon Virtual Private Cloud (VPC).
5.7.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
- You mirrored the images for a disconnected installation to your registry and obtained the imageContentSources data for your version of OpenShift Container Platform.

  Important: Because the installation media is on the mirror host, you can use that computer to complete all installation steps.
You have an existing VPC in AWS. When installing to a restricted network using installer-provisioned infrastructure, you cannot use the installer-provisioned VPC. You must use a user-provisioned VPC that satisfies one of the following requirements:
- Contains the mirror registry
- Has firewall rules or a peering connection to access the mirror registry hosted elsewhere
You configured an AWS account to host the cluster.
Important: If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.
- You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) in the AWS documentation.
- If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to.

  Note: If you are configuring a proxy, be sure to also review this site list.
- If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials.
5.7.2. About installations in restricted networks
In OpenShift Container Platform 4.10, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster.
If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Services' Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware or on VMware vSphere.
To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions.
5.7.2.1. Additional limits
Clusters in restricted networks have the following additional limitations and restrictions:
- The ClusterVersion status includes an Unable to retrieve available updates error.
- By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags.
5.7.3. About using a custom VPC
In OpenShift Container Platform 4.10, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company’s guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option.
Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets into which you install your cluster.
5.7.3.1. Requirements for using your VPC
The installation program no longer creates the following components:
- Internet gateways
- NAT gateways
- Subnets
- Route tables
- VPCs
- VPC DHCP options
- VPC endpoints
The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail.
If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Amazon VPC console wizard configurations and Work with VPCs and subnets in the AWS documentation for more information on creating and managing an AWS VPC.
The installation program cannot:
- Subdivide network ranges for the cluster to use.
- Set route tables for the subnets.
- Set VPC options like DHCP.
You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC.
Your VPC must meet the following characteristics:
- The VPC must not use the kubernetes.io/cluster/.*: owned tag. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify.
- You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve the cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation. An example of enabling these attributes with the AWS CLI follows this list.
- If you prefer to use your own Route 53 private hosted zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone field in the install-config.yaml file.
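For example, you can check and enable the DNS attributes with the AWS CLI; the VPC ID shown is a placeholder:
$ aws ec2 describe-vpc-attribute --vpc-id vpc-0example --attribute enableDnsSupport
$ aws ec2 modify-vpc-attribute --vpc-id vpc-0example --enable-dns-support '{"Value": true}'
$ aws ec2 modify-vpc-attribute --vpc-id vpc-0example --enable-dns-hostnames '{"Value": true}'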
If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2 and ELB endpoints. To resolve this, you must create a VPC endpoint and attach it to the subnet that the clusters are using. The endpoints should be named as follows:
- ec2.<region>.amazonaws.com
- elasticloadbalancing.<region>.amazonaws.com
- s3.<region>.amazonaws.com
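As an illustration, you might create these endpoints with the AWS CLI. An interface endpoint suits the EC2 and ELB APIs, while S3 is typically served by a gateway endpoint; the VPC, subnet, route table, and region values are placeholders:
$ aws ec2 create-vpc-endpoint --vpc-id vpc-0example \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.ec2 \
    --subnet-ids subnet-0example
$ aws ec2 create-vpc-endpoint --vpc-id vpc-0example \
    --vpc-endpoint-type Gateway \
    --service-name com.amazonaws.us-east-1.s3 \
    --route-table-ids rtb-0example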
Required VPC components
You must provide a suitable VPC and subnets that allow communication to your machines.
| Component | AWS type | Description |
|---|---|---|
| VPC | AWS::EC2::VPC, AWS::EC2::VPCEndpoint | You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. |
| Public subnets | AWS::EC2::Subnet, AWS::EC2::SubnetNetworkAclAssociation | Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. |
| Internet gateway | AWS::EC2::InternetGateway, AWS::EC2::InternetGatewayAttachment, AWS::EC2::RouteTable, AWS::EC2::Route, AWS::EC2::SubnetRouteTableAssociation, AWS::EC2::NatGateway, AWS::EC2::EIP | You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. |
| Network access control | AWS::EC2::NetworkAcl, AWS::EC2::NetworkAclEntry | You must allow the VPC to access the ports that are listed in the following table. |
| Private subnets | AWS::EC2::Subnet, AWS::EC2::RouteTable, AWS::EC2::SubnetRouteTableAssociation | Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. |

| Port | Reason |
|---|---|
| 80 | Inbound HTTP traffic |
| 443 | Inbound HTTPS traffic |
| 22 | Inbound SSH traffic |
| 1024 - 65535 | Inbound ephemeral traffic |
| 0 - 65535 | Outbound ephemeral traffic |
5.7.3.2. VPC validation
To ensure that the subnets that you provide are suitable, the installation program confirms the following data:
- All the subnets that you specify exist.
- You provide private subnets.
- The subnet CIDRs belong to the machine CIDR that you specified.
- You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone.
- You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for.
If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used.
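You can perform a similar review yourself before you run the installation program. For example, the following AWS CLI query, with placeholder subnet IDs, lists the CIDR, availability zone, and tags of each subnet that you plan to supply:
$ aws ec2 describe-subnets --subnet-ids subnet-0example1 subnet-0example2 \
    --query 'Subnets[].{ID:SubnetId,CIDR:CidrBlock,AZ:AvailabilityZone,Tags:Tags}'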
5.7.3.3. Division of permissions
Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules.
The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes.
5.7.3.4. Isolation between clusters
If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways:
- You can install multiple OpenShift Container Platform clusters in the same VPC.
- ICMP ingress is allowed from the entire network.
- TCP 22 ingress (SSH) is allowed to the entire network.
- Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network.
- Control plane TCP 22623 ingress (MCS) is allowed to the entire network.
5.7.4. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.10, you require access to the internet to obtain the images that are necessary to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
5.7.5. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging are required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name>
Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory.
Note: If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.
Note: On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Example output
Agent pid 31874
Note: If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name>
Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
5.7.6. Creating the installation configuration file
You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS).
Prerequisites
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host.
- Have the imageContentSources values that were generated during mirror registry creation.
- Obtain the contents of the certificate for your mirror registry.
- Obtain service principal permissions at the subscription level.
Procedure
Create the install-config.yaml file.
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir <installation_directory>
For <installation_directory>, specify the directory name to store the files that the installation program creates.
Important: Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
- Select AWS as the platform to target.
- If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.
- Select the AWS region to deploy the cluster to.
- Select the base domain for the Route 53 service that you configured for your cluster.
- Enter a descriptive name for your cluster.
- Paste the pull secret from the Red Hat OpenShift Cluster Manager.
Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network.
Update the pullSecret value to contain the authentication information for your registry:
pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "you@example.com"}}}'
For <mirror_host_name>, specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials>, specify the base64-encoded user name and password for your mirror registry.
Add the additionalTrustBundle parameter and value.
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
  -----END CERTIFICATE-----
The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry.
Define the subnets for the VPC to install the cluster in:
subnets:
- subnet-1
- subnet-2
- subnet-3
Add the image content resources, which resemble the following YAML excerpt, where the mirror repository paths are placeholders:
imageContentSources:
- mirrors:
  - <mirror_host_name>:5000/<repo_name>/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <mirror_host_name>:5000/<repo_name>/release
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
For these values, use the imageContentSources that you recorded during mirror registry creation.
- Make any other modifications to the install-config.yaml file that you require. You can find more information about the available parameters in the Installation configuration parameters section.
Back up the install-config.yaml file so that you can use it to install multiple clusters.
Important: The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
5.7.6.1. Installation configuration parameters
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.
After installation, you cannot modify these parameters in the install-config.yaml file.
5.7.6.1.1. Required configuration parameters
Required installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| apiVersion | The API version for the install-config.yaml content. The current version is v1. | String |
| baseDomain | The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. | A fully-qualified domain or subdomain name, such as example.com. |
| metadata | Kubernetes resource ObjectMeta, from which only the name parameter is consumed. | Object |
| metadata.name | The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}. | String of lowercase letters, hyphens (-), and periods (.), such as dev. |
| platform | The configuration for the specific platform upon which to perform the installation: aws, azure, baremetal, gcp, ibmcloud, openstack, ovirt, vsphere, or {}. | Object |
| pullSecret | Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. | A pull secret in JSON format, such as {"auths":{"cloud.openshift.com":{"auth":"b3Blb=","email":"you@example.com"}}}. |
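Taken together, the required parameters form the skeleton of an install-config.yaml file, as in the following minimal sketch. All values are illustrative placeholders:
apiVersion: v1
baseDomain: example.com
metadata:
  name: test-cluster
platform:
  aws:
    region: us-west-2
pullSecret: '{"auths": ...}'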
5.7.6.1.2. Network configuration parameters
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.
Only IPv4 addresses are supported.
| Parameter | Description | Values |
|---|---|---|
| networking | The configuration for the cluster network. Note: You cannot modify parameters specified by the networking object after installation. | Object |
| networking.networkType | The cluster network provider Container Network Interface (CNI) plugin to install. | Either OpenShiftSDN or OVNKubernetes. The default value is OpenShiftSDN. |
| networking.clusterNetwork | The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14  hostPrefix: 23 |
| networking.clusterNetwork.cidr | Required if you use networking.clusterNetwork. An IP address block. An IPv4 network. | An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32. |
| networking.clusterNetwork.hostPrefix | The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23, then each node is assigned a /23 subnet out of the given cidr, which allows for 510 (2^(32 - 23) - 2) pod IP addresses. | A subnet prefix. The default value is 23. |
| networking.serviceNetwork | The IP address block for services. The default value is 172.30.0.0/16. The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. | An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 |
| networking.machineNetwork | The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 |
| networking.machineNetwork.cidr | Required if you use networking.machineNetwork. An IP address block. | An IP network block in CIDR notation. For example, 10.0.0.0/16. Note: Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. |
5.7.6.1.3. Optional configuration parameters
Optional installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| additionalTrustBundle | A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. | String |
| cgroupsV2 | Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. | true |
| compute | The configuration for the machines that comprise the compute nodes. | Array of MachinePool objects. |
| compute.architecture | Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). | String |
| compute.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| compute.name | Required if you use compute. The name of the machine pool. | worker |
| compute.platform | Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. | aws, azure, gcp, openstack, ovirt, vsphere, or {} |
| compute.replicas | The number of compute machines, which are also known as worker machines, to provision. | A positive integer greater than or equal to 2. The default value is 3. |
| controlPlane | The configuration for the machines that comprise the control plane. | Array of MachinePool objects. |
| controlPlane.architecture | Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). | String |
| controlPlane.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| controlPlane.name | Required if you use controlPlane. The name of the machine pool. | master |
| controlPlane.platform | Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. | aws, azure, gcp, openstack, ovirt, vsphere, or {} |
| controlPlane.replicas | The number of control plane machines to provision. | The only supported value is 3, which is the default value. |
| credentialsMode | The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note: Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note: If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter. | Mint, Passthrough, Manual, or an empty string (""). |
| fips | Enable or disable FIPS mode. The default is false (disabled). Important: To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note: If you are using Azure File storage, you cannot enable FIPS mode. | false or true |
| imageContentSources | Sources and repositories for the release-image content. | Array of objects. Includes a source and, optionally, mirrors. |
| imageContentSources.source | Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications. | String |
| imageContentSources.mirrors | Specify one or more repositories that may also contain the same images. | Array of strings |
| publish | How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. | Internal or External. The default value is External. |
| sshKey | The SSH key or keys to authenticate access to your cluster machines. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. | One or more keys. For example: sshKey: <key1> <key2> <key3> |
5.7.6.1.4. Optional AWS configuration parameters
Optional AWS configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| compute.platform.aws.amiID | The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. |
| compute.platform.aws.iamRole | A pre-existing AWS IAM role applied to the compute machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. | The name of a valid AWS IAM role. |
| compute.platform.aws.rootVolume.iops | The Input/Output Operations Per Second (IOPS) that is reserved for the root volume. | Integer, for example 4000. |
| compute.platform.aws.rootVolume.size | The size in GiB of the root volume. | Integer, for example 500. |
| compute.platform.aws.rootVolume.type | The type of the root volume. | Valid AWS EBS volume type, such as io1. |
| compute.platform.aws.rootVolume.kmsKeyARN | The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt OS volumes of worker nodes with a specific KMS key. | Valid key ID or the key ARN. |
| compute.platform.aws.type | The EC2 instance type for the compute machines. | Valid AWS instance type, such as m4.2xlarge. |
| compute.platform.aws.zones | The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone. | A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence. |
| compute.aws.region | The AWS region that the installation program creates compute resources in. | Any valid AWS region, such as us-east-1. Important: When running on ARM based AWS instances, ensure that you enter a region where AWS Graviton processors are available. See Global availability map in the AWS documentation. |
| controlPlane.platform.aws.amiID | The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. |
| controlPlane.platform.aws.iamRole | A pre-existing AWS IAM role applied to the control plane machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. | The name of a valid AWS IAM role. |
| controlPlane.platform.aws.rootVolume.kmsKeyARN | The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt OS volumes of control plane nodes with a specific KMS key. | Valid key ID and the key ARN. |
| controlPlane.platform.aws.type | The EC2 instance type for the control plane machines. | Valid AWS instance type, such as m6i.xlarge. |
| controlPlane.platform.aws.zones | The availability zones where the installation program creates machines for the control plane machine pool. | A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence. |
| controlPlane.aws.region | The AWS region that the installation program creates control plane resources in. | Valid AWS region, such as us-east-1. |
| platform.aws.amiID | The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. |
| platform.aws.hostedZone | An existing Route 53 private hosted zone for the cluster. You can only use a pre-existing hosted zone when also supplying your own VPC. The hosted zone must already be associated with the user-provided VPC before installation. Also, the domain of the hosted zone must be the cluster domain or a parent of the cluster domain. If undefined, the installation program creates a new hosted zone. | String, for example Z3URY6TWQ91KVV. |
| platform.aws.serviceEndpoints.name | The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services. | Valid AWS service endpoint name. |
| platform.aws.serviceEndpoints.url | The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate. | Valid AWS service endpoint URL. |
| platform.aws.userTags | A map of keys and values that the installation program adds as tags to all resources that it creates. | Any valid YAML map, such as key value pairs in the <key>: <value> format. |
| platform.aws.subnets | If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. | Valid subnet IDs. |
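For reference, the AWS parameters in this table are nested under the platform section of the install-config.yaml file, as in the following sketch. All values are illustrative placeholders:
platform:
  aws:
    region: us-west-2
    subnets:
    - subnet-0example1
    - subnet-0example2
    hostedZone: Z3URY6TWQ91KVV
    serviceEndpoints:
    - name: ec2
      url: https://vpce-<id>.ec2.us-west-2.vpce.amazonaws.com
    userTags:
      adminContact: jdoe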
5.7.6.2. Minimum resource requirements for cluster installation
Each cluster machine must meet the following minimum requirements:
| Machine | Operating System | vCPU [1] | Virtual RAM | Storage | IOPS [2] |
|---|---|---|---|---|---|
| Bootstrap | RHCOS | 4 | 16 GB | 100 GB | 300 |
| Control plane | RHCOS | 4 | 16 GB | 100 GB | 300 |
| Compute | RHCOS, RHEL 8.4, or RHEL 8.5 [3] | 2 | 8 GB | 100 GB | 300 |
- One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core × cores) × sockets = vCPUs. For example, a machine with one socket, two cores per socket, and two threads per core provides (2 × 2) × 1 = 4 vCPUs.
- OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.
- As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later.
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform.
5.7.6.3. Sample customized install-config.yaml file for AWS
You can customize the installation configuration file (install-config.yaml) to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
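The following sketch shows the shape of such a file. All values are illustrative placeholders that you must replace with your own, and the numbered callouts correspond to the notes that follow:
apiVersion: v1
baseDomain: example.com 1
credentialsMode: Mint 2
controlPlane: 3 4
  hyperthreading: Enabled 5
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
      - us-west-2b
      rootVolume:
        iops: 4000
        size: 500
        type: io1 6
      type: m6i.xlarge
  replicas: 3
compute: 7
- hyperthreading: Enabled 8
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 9
      type: c5.4xlarge
      zones:
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster 10
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 11
    subnets: 12
    - subnet-1
    - subnet-2
    - subnet-3
    amiID: ami-96c6f8f7 13
    serviceEndpoints: 14
    - name: ec2
      url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
    hostedZone: Z3URY6TWQ91KVV 15
fips: false 16
sshKey: ssh-ed25519 AAAA... 17
pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "you@example.com"}}}' 18
additionalTrustBundle: | 19
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
imageContentSources: 20
- mirrors:
  - <local_registry>/<local_repository_name>/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <local_registry>/<local_repository_name>/release
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev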
- 1 10 11: Required. The installation program prompts you for this value.
- 2: Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.
- 3 7: If you do not provide these parameters and values, the installation program provides the default value.
- 4: The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
- 5 8: Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.
- 6 9: To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.
- 12: If you provide your own VPC, specify subnets for each availability zone that your cluster uses.
- 13: The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.
- 14: The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate.
- 15: The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone.
- 16: Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important: To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.
- 17: You can optionally provide the sshKey value that you use to access the machines in your cluster. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
- 18: For <local_registry>, specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000. For <credentials>, specify the base64-encoded user name and password for your mirror registry.
- 19: Provide the contents of the certificate file that you used for your mirror registry.
- 20: Provide the imageContentSources section from the output of the command to mirror the repository.
5.7.6.4. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Prerequisites
- You have an existing install-config.yaml file.
- You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.
Note: The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
- You have added the ec2.<region>.amazonaws.com, elasticloadbalancing.<region>.amazonaws.com, and s3.<region>.amazonaws.com endpoints to your VPC endpoint. These endpoints are required to complete requests from the nodes to the AWS EC2 API. Because the proxy works on the container level, not the node level, you must route these requests to the AWS EC2 API through the AWS private network. Adding the public IP address of the EC2 API to your allowlist in your proxy server is not sufficient.
Procedure
Edit your install-config.yaml file and add the proxy settings. For example (the values shown are placeholders):
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
- 1: A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
- 2: A proxy URL to use for creating HTTPS connections outside the cluster.
- 3: A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
- 4: If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
Note: The installation program does not support the proxy readinessEndpoints field.
Note: If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:
$ ./openshift-install wait-for install-complete --log-level debug
- Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
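After the cluster is running, you can review the resulting proxy configuration with a standard oc query:
$ oc get proxy/cluster -o yaml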
5.7.7. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
- Configure an account with the cloud platform that hosts your cluster.
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \
    --log-level=info
For <installation_directory>, specify the location of your customized ./install-config.yaml file. To view different installation details, specify warn, debug, or error instead of info.
Note: If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.
Example output (the values shown are illustrative):
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "<password>"
Note: The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.
Important:
- The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
Important: You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.
Note: The elevated permissions provided by the AdministratorAccess policy are required only during installation.
5.7.8. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.10. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 Linux Client entry and save the file.
Unpack the archive:
$ tar xvf <file>
Place the oc binary in a directory that is on your PATH.
To check your PATH, execute the following command:
$ echo $PATH
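For example, assuming that /usr/local/bin is on your PATH:
$ sudo mv oc /usr/local/bin/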
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:
C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 MacOSX Client entry and save the file.
- Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.
To check your PATH, open a terminal and execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
5.7.9. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the oc CLI.
Procedure
Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig
For <installation_directory>, specify the path to the directory that you stored the installation files in.
Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
Example output
system:admin
5.7.10. Disabling the default OperatorHub sources
Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator.
Procedure
Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object:
$ oc patch OperatorHub cluster --type json \
    -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, delete, disable, and enable individual sources.
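To confirm that the default sources are disabled, you can review the OperatorHub object with a standard oc query:
$ oc get operatorhub cluster -o yaml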
5.7.11. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
5.7.12. Next steps
- Validate an installation.
- Customize your cluster.
- Configure image streams for the Cluster Samples Operator and the must-gather tool.
- Learn how to use Operator Lifecycle Manager (OLM) on restricted networks.
- If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores.
- If necessary, you can opt out of remote health reporting.
5.8. Installing a cluster on AWS into an existing VPC
In OpenShift Container Platform version 4.10, you can install a cluster into an existing Amazon Virtual Private Cloud (VPC) on Amazon Web Services (AWS). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.
5.8.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
You configured an AWS account to host the cluster.
Important: If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.
- If you use a firewall, you configured it to allow the sites that your cluster requires access to.
- If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials.
5.8.2. About using a custom VPC
In OpenShift Container Platform 4.10, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company’s guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option.
Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets into which you install your cluster.
5.8.2.1. Requirements for using your VPC
The installation program no longer creates the following components:
- Internet gateways
- NAT gateways
- Subnets
- Route tables
- VPCs
- VPC DHCP options
- VPC endpoints
The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail.
If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Amazon VPC console wizard configurations and Work with VPCs and subnets in the AWS documentation for more information on creating and managing an AWS VPC.
The installation program cannot:
- Subdivide network ranges for the cluster to use.
- Set route tables for the subnets.
- Set VPC options like DHCP.
You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC.
Your VPC must meet the following characteristics:
- Create a public and private subnet for each availability zone that your cluster uses. Each availability zone can contain no more than one public and one private subnet. For an example of this type of configuration, see VPC with public and private subnets (NAT) in the AWS documentation.
  Record each subnet ID. Completing the installation requires that you enter these values in the platform section of the install-config.yaml file. See Finding a subnet ID in the AWS documentation.
- The VPC's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. The subnet CIDR blocks must belong to the machine CIDR that you specify.
- The VPC must have a public internet gateway attached to it. For each availability zone:
  - The public subnet requires a route to the internet gateway.
  - The public subnet requires a NAT gateway with an EIP address.
  - The private subnet requires a route to the NAT gateway in the public subnet.
- The VPC must not use the kubernetes.io/cluster/.*: owned tag. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify.
- You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve the cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation.
- If you prefer to use your own Route 53 private hosted zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone field in the install-config.yaml file. An example of associating a hosted zone by using the AWS CLI follows this list.
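The association can be performed with the AWS CLI; the hosted zone ID, region, and VPC ID below are placeholders:
$ aws route53 associate-vpc-with-hosted-zone \
    --hosted-zone-id Z3URY6TWQ91KVV \
    --vpc VPCRegion=us-east-1,VPCId=vpc-0example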
If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2 and ELB endpoints. To resolve this, you must create a VPC endpoint and attach it to the subnet that the clusters are using. The endpoints should be named as follows:
- ec2.<region>.amazonaws.com
- elasticloadbalancing.<region>.amazonaws.com
- s3.<region>.amazonaws.com
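To confirm that the required endpoints exist before you install, you can list the endpoints that are attached to your VPC; the VPC ID is a placeholder:
$ aws ec2 describe-vpc-endpoints \
    --filters Name=vpc-id,Values=vpc-0example \
    --query 'VpcEndpoints[].ServiceName'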
Required VPC components
You must provide a suitable VPC and subnets that allow communication to your machines.
| Component | AWS type | Description |
|---|---|---|
| VPC | AWS::EC2::VPC, AWS::EC2::VPCEndpoint | You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. |
| Public subnets | AWS::EC2::Subnet, AWS::EC2::SubnetNetworkAclAssociation | Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. |
| Internet gateway | AWS::EC2::InternetGateway, AWS::EC2::InternetGatewayAttachment, AWS::EC2::RouteTable, AWS::EC2::Route, AWS::EC2::SubnetRouteTableAssociation, AWS::EC2::NatGateway, AWS::EC2::EIP | You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. |
| Network access control | AWS::EC2::NetworkAcl, AWS::EC2::NetworkAclEntry | You must allow the VPC to access the ports that are listed in the following table. |
| Private subnets | AWS::EC2::Subnet, AWS::EC2::RouteTable, AWS::EC2::SubnetRouteTableAssociation | Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. |

| Port | Reason |
|---|---|
| 80 | Inbound HTTP traffic |
| 443 | Inbound HTTPS traffic |
| 22 | Inbound SSH traffic |
| 1024 - 65535 | Inbound ephemeral traffic |
| 0 - 65535 | Outbound ephemeral traffic |
5.8.2.2. VPC validation
To ensure that the subnets that you provide are suitable, the installation program confirms the following data:
- All the subnets that you specify exist.
- You provide private subnets.
- The subnet CIDRs belong to the machine CIDR that you specified.
- You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone.
- You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for.
If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used.
5.8.2.3. Division of permissions
Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules.
The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes.
5.8.2.4. Isolation between clusters
If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways:
- You can install multiple OpenShift Container Platform clusters in the same VPC.
- ICMP ingress is allowed from the entire network.
- TCP 22 ingress (SSH) is allowed to the entire network.
- Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network.
- Control plane TCP 22623 ingress (MCS) is allowed to the entire network.
5.8.3. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.10, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
5.8.4. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
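For instance, if the bootstrap process fails, you might collect debugging data with the gather subcommand; the installation directory path here is a placeholder:
$ ./openshift-install gather bootstrap --dir <installation_directory>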
Do not skip this procedure in production environments, where disaster recovery and debugging is required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
- 1
- Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note: If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.
Note: On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Example output
Agent pid 31874
Note: If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name> 1
- 1
- Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
5.8.5. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
- Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files.
Important: The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.
Important: Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz
- Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
5.8.6. Creating the installation configuration file
You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS).
Prerequisites
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Create the install-config.yaml file.
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir <installation_directory> 1
- 1
- For <installation_directory>, specify the directory name to store the files that the installation program creates.
Important: Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
- Select AWS as the platform to target.
- If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.
- Select the AWS region to deploy the cluster to.
- Select the base domain for the Route 53 service that you configured for your cluster.
- Enter a descriptive name for your cluster.
- Paste the pull secret from the Red Hat OpenShift Cluster Manager.
- Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.
- Back up the install-config.yaml file so that you can use it to install multiple clusters.
Important: The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
5.8.6.1. Installation configuration parameters
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.
After installation, you cannot modify these parameters in the install-config.yaml file.
5.8.6.1.1. Required configuration parameters
Required installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| apiVersion | The API version for the install-config.yaml content. The current version is v1. The installation program may also support older API versions. | String |
| baseDomain | The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. | A fully-qualified domain or subdomain name, such as example.com. |
| metadata | Kubernetes resource ObjectMeta, from which only the name parameter is consumed. | Object |
| metadata.name | The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}. | String of lowercase letters, hyphens (-), and periods (.), such as dev. |
| platform | The configuration for the specific platform upon which to perform the installation, such as aws. | Object |
| pullSecret | Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. | A JSON object, such as {"auths": ...}. |
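Taken together, the required parameters form the skeleton of an install-config.yaml file. The following minimal sketch uses placeholder values for the domain, cluster name, and pull secret; the region shown is illustrative:
apiVersion: v1
baseDomain: example.com
metadata:
  name: mycluster
platform:
  aws:
    region: us-east-1
pullSecret: '{"auths": ...}'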
5.8.6.1.2. Network configuration parameters
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.
Only IPv4 addresses are supported.
| Parameter | Description | Values |
|---|---|---|
| networking | The configuration for the cluster network. | Object. Note: You cannot modify parameters specified by the networking object after installation. |
| networking.networkType | The cluster network provider Container Network Interface (CNI) plugin to install. | Either OpenShiftSDN or OVNKubernetes. The default value is OpenShiftSDN. |
| networking.clusterNetwork | The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects, for example a single entry with cidr: 10.128.0.0/14 and hostPrefix: 23. See the combined sketch after this table. |
| networking.clusterNetwork.cidr | Required if you use networking.clusterNetwork. An IP address block. An IPv4 network. | An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32. |
| networking.clusterNetwork.hostPrefix | The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. | A subnet prefix. The default value is 23. |
| networking.serviceNetwork | The IP address block for services. The default value is 172.30.0.0/16. The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. | An array with an IP address block in CIDR format, for example 172.30.0.0/16. |
| networking.machineNetwork | The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects, for example a single entry with cidr: 10.0.0.0/16. |
| networking.machineNetwork.cidr | Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. | An IP network block in CIDR notation, for example 10.0.0.0/16. Note: Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. |
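The networking parameters above combine into a single stanza. The following sketch shows the default address blocks described in this table; the network provider choice is illustrative:
networking:
  networkType: OpenShiftSDN
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16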
5.8.6.1.3. Optional configuration parameters
Optional installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| additionalTrustBundle | A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. | String |
| cgroupsV2 | Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. | true |
| compute | The configuration for the machines that comprise the compute nodes. | Array of MachinePool objects. |
| compute.architecture | Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64. | String |
| compute.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| compute.name | Required if you use compute. The name of the machine pool. | worker |
| compute.platform | Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. | aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {} |
| compute.replicas | The number of compute machines, which are also known as worker machines, to provision. | A positive integer greater than or equal to 2. The default value is 3. |
| controlPlane | The configuration for the machines that comprise the control plane. | Array of MachinePool objects. |
| controlPlane.architecture | Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64. | String |
| controlPlane.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| controlPlane.name | Required if you use controlPlane. The name of the machine pool. | master |
| controlPlane.platform | Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. | aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {} |
| controlPlane.replicas | The number of control plane machines to provision. | The only supported value is 3, which is the default value. |
| credentialsMode | The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note: Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note: If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint, Passthrough, or Manual. | Mint, Passthrough, Manual, or an empty string (""). |
| fips | Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important: To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note: If you are using Azure File storage, you cannot enable FIPS mode. | false or true |
| imageContentSources | Sources and repositories for the release-image content. | Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table. |
| imageContentSources.source | Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications. | String |
| imageContentSources.mirrors | Specify one or more repositories that may also contain the same images. | Array of strings |
| publish | How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. | Internal or External. The default value is External. |
| sshKey | The SSH key or keys to authenticate access to your cluster machines. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. | One or more keys, for example: sshKey: <key1> <key2> <key3> |
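As an illustration of two of these optional parameters, a private cluster that pulls release images from a mirror registry might combine publish and imageContentSources as follows; the mirror host name is a placeholder:
publish: Internal
imageContentSources:
- mirrors:
  - <mirror_host_name>:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-release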
5.8.6.1.4. Optional AWS configuration parameters
Optional AWS configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| compute.platform.aws.amiID | The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. |
| compute.platform.aws.iamRole | A pre-existing AWS IAM role applied to the compute machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. | The name of a valid AWS IAM role. |
| compute.platform.aws.rootVolume.iops | The Input/Output Operations Per Second (IOPS) that is reserved for the root volume. | Integer, for example 4000. |
| compute.platform.aws.rootVolume.size | The size in GiB of the root volume. | Integer, for example 500. |
| compute.platform.aws.rootVolume.type | The type of the root volume. | Valid AWS EBS volume type, such as io1. |
| compute.platform.aws.rootVolume.kmsKeyARN | The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt OS volumes of worker nodes with a specific KMS key. | Valid key ID or the key ARN. |
| compute.platform.aws.type | The EC2 instance type for the compute machines. | Valid AWS instance type, such as m6i.xlarge. |
| compute.platform.aws.zones | The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone. | A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence. |
| compute.aws.region | The AWS region that the installation program creates compute resources in. | Any valid AWS region, such as us-east-1. Important: When running on ARM based AWS instances, ensure that you enter a region where AWS Graviton processors are available. See Global availability map in the AWS documentation. |
| controlPlane.platform.aws.amiID | The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. |
| controlPlane.platform.aws.iamRole | A pre-existing AWS IAM role applied to the control plane machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. | The name of a valid AWS IAM role. |
| controlPlane.platform.aws.rootVolume.kmsKeyARN | The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt OS volumes of control plane nodes with a specific KMS key. | Valid key ID and the key ARN. |
| controlPlane.platform.aws.type | The EC2 instance type for the control plane machines. | Valid AWS instance type, such as m6i.xlarge. |
| controlPlane.platform.aws.zones | The availability zones where the installation program creates machines for the control plane machine pool. | A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence. |
| controlPlane.aws.region | The AWS region that the installation program creates control plane resources in. | Valid AWS region, such as us-east-1. |
| platform.aws.amiID | The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. |
| platform.aws.hostedZone | An existing Route 53 private hosted zone for the cluster. You can only use a pre-existing hosted zone when also supplying your own VPC. The hosted zone must already be associated with the user-provided VPC before installation. Also, the domain of the hosted zone must be the cluster domain or a parent of the cluster domain. If undefined, the installation program creates a new hosted zone. | String, for example Z3URY6TWQ91KVV. |
| platform.aws.serviceEndpoints.name | The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services. | Valid AWS service endpoint name. |
| platform.aws.serviceEndpoints.url | The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate. | Valid AWS service endpoint URL. |
| platform.aws.userTags | A map of keys and values that the installation program adds as tags to all resources that it creates. | Any valid YAML map, such as key value pairs in the <key>: <value> format. |
| platform.aws.subnets | If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. | Valid subnet IDs. |
5.8.6.2. Minimum resource requirements for cluster installation
Each cluster machine must meet the following minimum requirements:
| Machine | Operating System | vCPU [1] | Virtual RAM | Storage | IOPS [2] |
|---|---|---|---|---|---|
| Bootstrap | RHCOS | 4 | 16 GB | 100 GB | 300 |
| Control plane | RHCOS | 4 | 16 GB | 100 GB | 300 |
| Compute | RHCOS, RHEL 8.4, or RHEL 8.5 [3] | 2 | 8 GB | 100 GB | 300 |
1. One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core × cores) × sockets = vCPUs. For example, an instance with two threads per core, two cores, and one socket provides (2 × 2) × 1 = 4 vCPUs.
2. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes, which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.
3. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later.
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform.
5.8.6.3. Tested instance types for AWS
The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform.
Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation".
Example 5.20. Machine types based on 64-bit x86 architecture
- c4.*
- c5.*
- c5a.*
- i3.*
- m4.*
- m5.*
- m5a.*
- m6i.*
- r4.*
- r5.*
- r5a.*
- r6i.*
- t3.*
- t3a.*
5.8.6.4. Tested instance types for AWS on 64-bit ARM infrastructures
The following Amazon Web Services (AWS) ARM64 instance types have been tested with OpenShift Container Platform.
Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation".
Example 5.21. Machine types based on 64-bit ARM architecture
- c6g.*
- m6g.*
5.8.6.5. Sample customized install-config.yaml file for AWS
You can customize the installation configuration file (install-config.yaml) to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
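A representative file follows; the numbered callouts correspond to the annotations below, and the domain, zones, instance types, and IDs are placeholders rather than recommendations:
apiVersion: v1
baseDomain: example.com 1
credentialsMode: Mint 2
controlPlane: 3 4
  hyperthreading: Enabled 5
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
      - us-west-2b
      rootVolume:
        iops: 4000
        size: 500
        type: io1 6
      type: m6i.xlarge
  replicas: 3
compute: 7
- hyperthreading: Enabled 8
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 9
      type: c5.4xlarge
      zones:
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster 10
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 11
    userTags:
      adminContact: jdoe
      costCenter: 7536
    subnets: 12
    - subnet-1
    - subnet-2
    - subnet-3
    amiID: ami-96c6f8f7 13
    serviceEndpoints: 14
      - name: ec2
        url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
    hostedZone: Z3URY6TWQ91KVV 15
fips: false 16
sshKey: ssh-ed25519 AAAA... 17
pullSecret: '{"auths": ...}' 18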
- 1 10 11 18
- Required. The installation program prompts you for this value.
- 2
- Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.
- 3 7
- If you do not provide these parameters and values, the installation program provides the default value.
- 4
- The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
- Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.
- 6 9
- To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.
- If you provide your own VPC, specify subnets for each availability zone that your cluster uses.
- 13
- The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.
- 14
- The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate.
- 15
- The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone.
- 16
- Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
Important: To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.
- 17
- You can optionally provide the sshKey value that you use to access the machines in your cluster.
Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
5.8.6.6. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Prerequisites
- You have an existing install-config.yaml file.
- You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.
Note: The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
- You have added the ec2.<region>.amazonaws.com, elasticloadbalancing.<region>.amazonaws.com, and s3.<region>.amazonaws.com endpoints to your VPC endpoint. These endpoints are required to complete requests from the nodes to the AWS EC2 API. Because the proxy works on the container level, not the node level, you must route these requests to the AWS EC2 API through the AWS private network. Adding the public IP address of the EC2 API to your allowlist in your proxy server is not sufficient.
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:
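A proxy stanza of the following shape matches the numbered annotations below; all values are placeholders:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----
- 1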
- A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
- 2
- A proxy URL to use for creating HTTPS connections outside the cluster.
- 3
- A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
- 4
- If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
Note: The installation program does not support the proxy readinessEndpoints field.
Note: If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:
$ ./openshift-install wait-for install-complete --log-level debug
- Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
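After the cluster is running, you can optionally inspect the resulting configuration with a standard query; this is a verification aid, not part of the documented procedure:
$ oc get proxy/cluster -o yaml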
5.8.7. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
- Configure an account with the cloud platform that hosts your cluster.
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \
    --log-level=info
Note: If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the
kubeadmin user, display in your terminal.
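The exact values vary by cluster, but successful output resembles the following sketch; the console URL, password, and elapsed time are placeholders:
Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=<installation_directory>/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "<password>"
INFO Time elapsed: 36m22s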
Note: The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.
Important:
- The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
Important: You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.
Note: The elevated permissions provided by the AdministratorAccess policy are required only during installation.
5.8.8. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.10. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 Linux Client entry and save the file.
Unpack the archive:
$ tar xvf <file>
Place the oc binary in a directory that is on your PATH.
To check your PATH, execute the following command:
$ echo $PATH
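For example, one conventional way to place the binary on your PATH on Linux is the following; the target directory is a common choice, not a requirement:
$ chmod +x oc
$ sudo mv oc /usr/local/bin/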
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:
C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 MacOSX Client entry and save the file.
- Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.
To check your PATH, open a terminal and execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
5.8.9. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the oc CLI.
Procedure
Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
- 1
- For <installation_directory>, specify the path to the directory that you stored the installation files in.
Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
Example output
system:admin
5.8.10. Logging in to the cluster by using the web console
The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.
Prerequisites
- You have access to the installation host.
- You completed a cluster installation and all cluster Operators are available.
Procedure
Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:
$ cat <installation_directory>/auth/kubeadmin-password
Note: Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.
List the OpenShift Container Platform web console route:
$ oc get routes -n openshift-console | grep 'console-openshift'
Note: Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.
Example output
console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None
- Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.
5.8.11. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
5.8.12. Next steps
- Validating an installation.
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.
- If necessary, you can remove cloud provider credentials.
5.9. Installing a private cluster on AWS
In OpenShift Container Platform version 4.10, you can install a private cluster into an existing VPC on Amazon Web Services (AWS). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.
5.9.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
You configured an AWS account to host the cluster.
Important: If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.
- If you use a firewall, you configured it to allow the sites that your cluster requires access to.
- If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials.
5.9.2. Private clusters
You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet.
By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet.
If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private.
To deploy a private cluster, you must:
- Use existing networking that meets your requirements. Your cluster resources might be shared with other clusters on the network.
Deploy from a machine that has access to:
- The API services for the cloud to which you provision.
- The hosts on the network that you provision.
- The internet to obtain installation media.
You can use any machine that meets these access requirements and follows your company’s guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN.
5.9.2.1. Private clusters in AWS
To create a private cluster on Amazon Web Services (AWS), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for access from only the private network.
The cluster still requires access to the internet to access the AWS APIs.
The following items are not required or created when you install a private cluster:
- Public subnets
- Public load balancers, which support public ingress
- A public Route 53 zone that matches the baseDomain for the cluster
The installation program does use the baseDomain that you specify to create a private Route 53 zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.
5.9.2.1.1. Limitations
The ability to add public functionality to a private cluster is limited.
- You cannot make the Kubernetes API endpoints public after installation without taking additional actions, including creating public subnets in the VPC for each availability zone in use, creating a public load balancer, and configuring the control plane security groups to allow traffic from the internet on 6443 (Kubernetes API port).
- If you use a public Service type load balancer, you must tag a public subnet in each availability zone with kubernetes.io/cluster/<cluster-infra-id>: shared so that AWS can use them to create public load balancers, for example as shown after this list.
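As an illustration, the shared tag can be applied to a public subnet with the AWS CLI; the subnet ID and infrastructure ID are placeholders:
$ aws ec2 create-tags --resources <public_subnet_id> \
    --tags Key=kubernetes.io/cluster/<cluster-infra-id>,Value=shared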
5.9.3. About using a custom VPC
In OpenShift Container Platform 4.10, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company’s guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option.
Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster into yourself.
5.9.3.1. Requirements for using your VPC
The installation program no longer creates the following components:
- Internet gateways
- NAT gateways
- Subnets
- Route tables
- VPCs
- VPC DHCP options
- VPC endpoints
The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail.
If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Amazon VPC console wizard configurations and Work with VPCs and subnets in the AWS documentation for more information on creating and managing an AWS VPC.
The installation program cannot:
- Subdivide network ranges for the cluster to use.
- Set route tables for the subnets.
- Set VPC options like DHCP.
You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC.
Your VPC must meet the following characteristics:
- The VPC must not use the kubernetes.io/cluster/.*: owned tag.
The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify.
- You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve the cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation.
- If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone field in the install-config.yaml file. A sketch of both tasks follows this list.
If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2 and ELB endpoints. To resolve this, you must create a VPC endpoint and attach it to the subnets that the clusters are using (a creation sketch follows this list). The endpoints should be named as follows:
- ec2.<region>.amazonaws.com
- elasticloadbalancing.<region>.amazonaws.com
- s3.<region>.amazonaws.com
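As one sketch of endpoint creation with the AWS CLI, the EC2 and ELB endpoints are interface endpoints, while the S3 endpoint is typically a gateway endpoint; the IDs are placeholders and the service name must match your region:
$ aws ec2 create-vpc-endpoint --vpc-id <vpc_id> \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.<region>.ec2 \
    --subnet-ids <private_subnet_id>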
Required VPC components
You must provide a suitable VPC and subnets that allow communication to your machines.
| Component | AWS type | Description |
|---|---|---|
| VPC | AWS::EC2::VPC, AWS::EC2::VPCEndpoint | You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. |
| Public subnets | AWS::EC2::Subnet, AWS::EC2::SubnetNetworkAclAssociation | Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. |
| Internet gateway | AWS::EC2::InternetGateway, AWS::EC2::VPCGatewayAttachment, AWS::EC2::RouteTable, AWS::EC2::Route, AWS::EC2::SubnetRouteTableAssociation, AWS::EC2::NatGateway, AWS::EC2::EIP | You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. |
| Network access control | AWS::EC2::NetworkAcl, AWS::EC2::NetworkAclEntry | You must allow the VPC to access the following ports: |

| Port | Reason |
|---|---|
| 80 | Inbound HTTP traffic |
| 443 | Inbound HTTPS traffic |
| 22 | Inbound SSH traffic |
| 1024 - 65535 | Inbound ephemeral traffic |
| 0 - 65535 | Outbound ephemeral traffic |

| Private subnets | AWS::EC2::Subnet, AWS::EC2::RouteTable, AWS::EC2::SubnetRouteTableAssociation | Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. |
5.9.3.2. VPC validation
To ensure that the subnets that you provide are suitable, the installation program confirms the following data:
- All the subnets that you specify exist.
- You provide private subnets.
- The subnet CIDRs belong to the machine CIDR that you specified.
- You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone.
- You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for.
If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used.
5.9.3.3. Division of permissions
Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules.
The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes.
5.9.3.4. Isolation between clusters
If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways:
- You can install multiple OpenShift Container Platform clusters in the same VPC.
- ICMP ingress is allowed from the entire network.
- TCP 22 ingress (SSH) is allowed to the entire network.
- Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network.
- Control plane TCP 22623 ingress (MCS) is allowed to the entire network.
5.9.4. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.10, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
5.9.5. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging is required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
- 1
- Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note: If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.
Note: On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Example output
Agent pid 31874
Note: If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name> 1
- 1
- Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
5.9.6. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
- Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files.
Important: The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both are required to delete the cluster.

Important: Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz

- Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
5.9.7. Manually creating the installation configuration file
For installations of a private OpenShift Container Platform cluster that are only accessible from an internal network and are not visible to the internet, you must manually generate your installation configuration file.
Prerequisites
- You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery.
- You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Create an installation directory to store your required installation assets in:
$ mkdir <installation_directory>

Important: You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory>.

Note: You must name this configuration file install-config.yaml.

Note: For some platform types, you can alternatively run ./openshift-install create install-config --dir <installation_directory> to generate an install-config.yaml file. You can provide details about your cluster configuration at the prompts.

Back up the install-config.yaml file so that you can use it to install multiple clusters.

Important: The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.
5.9.7.1. Installation configuration parameters
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.
After installation, you cannot modify these parameters in the install-config.yaml file.
5.9.7.1.1. Required configuration parameters
Required installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| apiVersion | The API version for the install-config.yaml content. The current version is v1. The installation program may also support older API versions. | String |
| baseDomain | The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. | A fully-qualified domain or subdomain name, such as example.com. |
| metadata | Kubernetes resource ObjectMeta, from which only the name parameter is consumed. | Object |
| metadata.name | The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}. | String of lowercase letters, hyphens (-), and periods (.), such as dev. |
| platform | The configuration for the specific platform upon which to perform the installation: alibabacloud, aws, baremetal, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}. | Object |
| pullSecret | Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. | The pull secret, as a JSON object. |
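Taken together, the required parameters form the skeleton of an install-config.yaml file. The following fragment is a minimal sketch for orientation only; every value is a placeholder that you must replace with your own:

apiVersion: v1
baseDomain: example.com
metadata:
  name: test-cluster
platform:
  aws:
    region: us-west-2
pullSecret: '{"auths": ...}'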
5.9.7.1.2. Network configuration parameters
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.
Only IPv4 addresses are supported.
| Parameter | Description | Values |
|---|---|---|
| networking | The configuration for the cluster network. | Object. Note: You cannot modify parameters specified by the networking object after installation. |
| networking.networkType | The cluster network provider Container Network Interface (CNI) plugin to install. | Either OpenShiftSDN or OVNKubernetes. The default value is OpenShiftSDN. |
| networking.clusterNetwork | The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 |
| networking.clusterNetwork.cidr | Required if you use networking.clusterNetwork. An IP address block. An IPv4 network. | An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32. |
| networking.clusterNetwork.hostPrefix | The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. | A subnet prefix. The default value is 23. |
| networking.serviceNetwork | The IP address block for services. The default value is 172.30.0.0/16. The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. | An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 |
| networking.machineNetwork | The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 |
| networking.machineNetwork.cidr | Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. | An IP network block in CIDR notation. For example, 10.0.0.0/16. Note: Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. |
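As a quick illustration of how these parameters nest, the following networking stanza is a sketch that simply restates the documented default values:

networking:
  networkType: OpenShiftSDN
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16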
5.9.7.1.3. Optional configuration parameters
Optional installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| additionalTrustBundle | A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. | String |
| cgroupsV2 | Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. | true |
| compute | The configuration for the machines that comprise the compute nodes. | Array of MachinePool objects. |
| compute.architecture | Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64. | String |
| compute.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| compute.name | Required if you use compute. The name of the machine pool. | worker |
| compute.platform | Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. | aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {} |
| compute.replicas | The number of compute machines, which are also known as worker machines, to provision. | A positive integer greater than or equal to 2. The default value is 3. |
| controlPlane | The configuration for the machines that comprise the control plane. | Array of MachinePool objects. |
| controlPlane.architecture | Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64. | String |
| controlPlane.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| controlPlane.name | Required if you use controlPlane. The name of the machine pool. | master |
| controlPlane.platform | Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. | aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {} |
| controlPlane.replicas | The number of control plane machines to provision. | The only supported value is 3, which is the default value. |
| credentialsMode | The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note: Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note: If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint, Passthrough, or Manual. | Mint, Passthrough, Manual, or an empty string (""). |
| fips | Enable or disable FIPS mode. The default is false (not enabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important: To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note: If you are using Azure File storage, you cannot enable FIPS mode. | false or true |
| imageContentSources | Sources and repositories for the release-image content. | Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table. |
| imageContentSources.source | Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications. | String |
| imageContentSources.mirrors | Specify one or more repositories that may also contain the same images. | Array of strings |
| publish | How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes. | Internal or External. The default value is External. |
| sshKey | The SSH key or keys to authenticate access to your cluster machines. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. | One or more keys. For example: sshKey: <key1> <key2> <key3> |
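To see how several of these optional parameters read in context, the following fragment is an illustrative sketch rather than a recommended configuration; all values are examples:

controlPlane:
  name: master
  hyperthreading: Enabled
  replicas: 3
compute:
- name: worker
  hyperthreading: Enabled
  replicas: 3
fips: false
publish: Internal
sshKey: ssh-ed25519 AAAA...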
5.9.7.1.4. Optional AWS configuration parameters
Optional AWS configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| compute.platform.aws.amiID | The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. |
| compute.platform.aws.iamRole | A pre-existing AWS IAM role applied to the compute machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. | The name of a valid AWS IAM role. |
| compute.platform.aws.rootVolume.iops | The Input/Output Operations Per Second (IOPS) that is reserved for the root volume. | Integer, for example 4000. |
| compute.platform.aws.rootVolume.size | The size in GiB of the root volume. | Integer, for example 500. |
| compute.platform.aws.rootVolume.type | The type of the root volume. | Valid AWS EBS volume type, such as io1. |
| compute.platform.aws.rootVolume.kmsKeyARN | The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt OS volumes of worker nodes with a specific KMS key. | Valid key ID or the key ARN. |
| compute.platform.aws.type | The EC2 instance type for the compute machines. | Valid AWS instance type, such as m5.xlarge. |
| compute.platform.aws.zones | The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone. | A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence. |
| compute.aws.region | The AWS region that the installation program creates compute resources in. | Any valid AWS region, such as us-east-1. Important: When running on ARM based AWS instances, ensure that you enter a region where AWS Graviton processors are available. See Global availability map in the AWS documentation. |
| controlPlane.platform.aws.amiID | The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. |
| controlPlane.platform.aws.iamRole | A pre-existing AWS IAM role applied to the control plane machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. | The name of a valid AWS IAM role. |
| controlPlane.platform.aws.rootVolume.kmsKeyARN | The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt OS volumes of control plane nodes with a specific KMS key. | Valid key ID and the key ARN. |
| controlPlane.platform.aws.type | The EC2 instance type for the control plane machines. | Valid AWS instance type, such as m5.xlarge. |
| controlPlane.platform.aws.zones | The availability zones where the installation program creates machines for the control plane machine pool. | A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence. |
| controlPlane.aws.region | The AWS region that the installation program creates control plane resources in. | Valid AWS region, such as us-east-1. |
| platform.aws.amiID | The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. |
| platform.aws.hostedZone | An existing Route 53 private hosted zone for the cluster. You can only use a pre-existing hosted zone when also supplying your own VPC. The hosted zone must already be associated with the user-provided VPC before installation. Also, the domain of the hosted zone must be the cluster domain or a parent of the cluster domain. If undefined, the installation program creates a new hosted zone. | String, for example Z3URY6TWQ91KVV. |
| platform.aws.serviceEndpoints.name | The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services. | Valid AWS service endpoint name. |
| platform.aws.serviceEndpoints.url | The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate. | Valid AWS service endpoint URL. |
| platform.aws.userTags | A map of keys and values that the installation program adds as tags to all resources that it creates. | Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation. |
| platform.aws.subnets | If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone. | Valid subnet IDs. |
5.9.7.2. Minimum resource requirements for cluster installation
Each cluster machine must meet the following minimum requirements:
| Machine | Operating System | vCPU [1] | Virtual RAM | Storage | IOPS [2] |
|---|---|---|---|---|---|
| Bootstrap | RHCOS | 4 | 16 GB | 100 GB | 300 |
| Control plane | RHCOS | 4 | 16 GB | 100 GB | 300 |
| Compute | RHCOS, RHEL 8.4, or RHEL 8.5 [3] | 2 | 8 GB | 100 GB | 300 |
- One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core × cores) × sockets = vCPUs.
- OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.
- As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later.
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform.
5.9.7.3. Tested instance types for AWS
The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform.
Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation".
Example 5.22. Machine types based on 64-bit x86 architecture
- c4.*
- c5.*
- c5a.*
- i3.*
- m4.*
- m5.*
- m5a.*
- m6i.*
- r4.*
- r5.*
- r5a.*
- r6i.*
- t3.*
- t3a.*
5.9.7.4. Tested instance types for AWS on 64-bit ARM infrastructures
The following Amazon Web Services (AWS) ARM64 instance types have been tested with OpenShift Container Platform.
Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation".
Example 5.23. Machine types based on 64-bit ARM architecture
- c6g.*
- m6g.*
5.9.7.5. Sample customized install-config.yaml file for AWS
You can customize the installation configuration file (install-config.yaml) to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
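The sample file itself did not survive extraction here; the following reconstruction, numbered to match the callouts below, sketches its general shape. Treat every concrete value (zones, instance types, subnet IDs, the AMI ID, endpoint URL, and hosted zone ID) as a placeholder to replace with your own:

apiVersion: v1
baseDomain: example.com 1
credentialsMode: Mint 2
controlPlane: 3 4
  hyperthreading: Enabled 5
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
      - us-west-2b
      rootVolume:
        iops: 4000
        size: 500
        type: io1 6
      type: m5.xlarge
  replicas: 3
compute: 7
- hyperthreading: Enabled 8
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 9
      type: c5.4xlarge
      zones:
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster 10
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 11
    subnets: 12
    - subnet-1
    - subnet-2
    - subnet-3
    amiID: ami-96c6f8f7 13
    serviceEndpoints: 14
      - name: ec2
        url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
    hostedZone: Z3URY6TWQ91KVV 15
fips: false 16
sshKey: ssh-ed25519 AAAA... 17
publish: Internal 18
pullSecret: '{"auths": ...}' 19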
1, 10, 11, 19: Required. The installation program prompts you for this value.

2: Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

3, 7: If you do not provide these parameters and values, the installation program provides the default value.

4: The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.

5, 8: Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.

Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.

6, 9: To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.

12: If you provide your own VPC, specify subnets for each availability zone that your cluster uses.

13: The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.

14: The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate.

15: The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC, and the hosted zone must already be associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone.

16: Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

Important: To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.

17: You can optionally provide the sshKey value that you use to access the machines in your cluster.

Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

18: How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External.
5.9.7.6. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Prerequisites
- You have an existing install-config.yaml file.
- You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.

  Note: The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

  For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

- You have added the ec2.<region>.amazonaws.com, elasticloadbalancing.<region>.amazonaws.com, and s3.<region>.amazonaws.com endpoints to your VPC endpoint. These endpoints are required to complete requests from the nodes to the AWS EC2 API. Because the proxy works on the container level, not the node level, you must route these requests to the AWS EC2 API through the AWS private network. Adding the public IP address of the EC2 API to your allowlist in your proxy server is not sufficient.
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: ec2.<region>.amazonaws.com,elasticloadbalancing.<region>.amazonaws.com,s3.<region>.amazonaws.com 3
additionalTrustBundle: | 4
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----
...

1: A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.

2: A proxy URL to use for creating HTTPS connections outside the cluster.

3: A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.

4: If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.

Note: The installation program does not support the proxy readinessEndpoints field.

Note: If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:

$ ./openshift-install wait-for install-complete --log-level debug

- Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
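As a quick check after installation, you can inspect the resulting Proxy object; this assumes you have the oc CLI installed and are logged in to the cluster:

$ oc get proxy/cluster -o yaml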
5.9.8. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
- Configure an account with the cloud platform that hosts your cluster.
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \
    --log-level=info

For <installation_directory>, specify the location of your customized ./install-config.yaml file. To view different installation details, specify warn, debug, or error instead of info.

Note: If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

Note: The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

Important:
- The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.

Important: You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.

Note: The elevated permissions provided by the AdministratorAccess policy are required only during installation.
5.9.9. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.10. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 Linux Client entry and save the file.
Unpack the archive:

$ tar xvf <file>

Place the oc binary in a directory that is on your PATH.

To check your PATH, execute the following command:

$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.

To check your PATH, open the command prompt and execute the following command:

C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 MacOSX Client entry and save the file.
- Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.

To check your PATH, open a terminal and execute the following command:

$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
5.9.10. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the oc CLI.
Procedure
Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig

For <installation_directory>, specify the path to the directory that you stored the installation files in.
Verify you can run oc commands successfully using the exported configuration:

$ oc whoami

Example output

system:admin
5.9.11. Logging in to the cluster by using the web console
The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.
Prerequisites
- You have access to the installation host.
- You completed a cluster installation and all cluster Operators are available.
Procedure
Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

$ cat <installation_directory>/auth/kubeadmin-password

Note: Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

List the OpenShift Container Platform web console route:
$ oc get routes -n openshift-console | grep 'console-openshift'

Note: Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

Example output

console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None

- Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.
5.9.12. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
5.9.13. Next steps
- Validating an installation.
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.
- If necessary, you can remove cloud provider credentials.
5.10. Installing a cluster on AWS into a government region
In OpenShift Container Platform version 4.10, you can install a cluster on Amazon Web Services (AWS) into a government region. To configure the region, modify parameters in the install-config.yaml file before you install the cluster.
5.10.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
You configured an AWS account to host the cluster.
Important: If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.
- If you use a firewall, you configured it to allow the sites that your cluster requires access to.
- If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials.
5.10.2. AWS government regions
OpenShift Container Platform supports deploying a cluster to an AWS GovCloud (US) region.
The following AWS GovCloud regions are supported:

- us-gov-east-1
- us-gov-west-1
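You select the GovCloud region through the platform stanza of your install-config.yaml file; for example, a minimal sketch targeting us-gov-west-1:

platform:
  aws:
    region: us-gov-west-1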
5.10.3. Installation requirements
Before you can install the cluster, you must:
- Provide an existing private AWS VPC and subnets to host the cluster.

  Note: Public zones are not supported in Route 53 in AWS GovCloud. As a result, clusters must be private when you deploy to an AWS government region.

- Manually create the installation configuration file (install-config.yaml).
5.10.4. Private clusters
You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet.
Public zones are not supported in Route 53 in an AWS GovCloud Region. Therefore, clusters must be private if they are deployed to an AWS GovCloud Region.
By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet.
If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private.
To deploy a private cluster, you must:
- Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network.
Deploy from a machine that has access to:
- The API services for the cloud to which you provision.
- The hosts on the network that you provision.
- The internet to obtain installation media.
You can use any machine that meets these access requirements and follows your company’s guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN.
5.10.4.1. Private clusters in AWS
To create a private cluster on Amazon Web Services (AWS), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for access from only the private network.
The cluster still requires access to the internet to access the AWS APIs.
The following items are not required or created when you install a private cluster:
- Public subnets
- Public load balancers, which support public ingress
- A public Route 53 zone that matches the baseDomain for the cluster
The installation program does use the baseDomain that you specify to create a private Route 53 zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.
5.10.4.1.1. Limitations
The ability to add public functionality to a private cluster is limited.
- You cannot make the Kubernetes API endpoints public after installation without taking additional actions, including creating public subnets in the VPC for each availability zone in use, creating a public load balancer, and configuring the control plane security groups to allow traffic from the internet on 6443 (Kubernetes API port).
- If you use a public Service type load balancer, you must tag a public subnet in each availability zone with kubernetes.io/cluster/<cluster-infra-id>: shared so that AWS can use them to create public load balancers. An example tagging command follows this list.
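If you need to add such a tag, one way is with the AWS CLI; in this sketch the subnet ID is a placeholder and the infrastructure ID must be replaced with your cluster's value:

$ aws ec2 create-tags \
    --resources subnet-0123456789abcdef0 \
    --tags Key=kubernetes.io/cluster/<cluster-infra-id>,Value=shared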
5.10.5. About using a custom VPC
In OpenShift Container Platform 4.10, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company’s guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option.
Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that host your cluster yourself.
5.10.5.1. Requirements for using your VPC
The installation program no longer creates the following components:
- Internet gateways
- NAT gateways
- Subnets
- Route tables
- VPCs
- VPC DHCP options
- VPC endpoints
The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail.
If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Amazon VPC console wizard configurations and Work with VPCs and subnets in the AWS documentation for more information on creating and managing an AWS VPC.
The installation program cannot:
- Subdivide network ranges for the cluster to use.
- Set route tables for the subnets.
- Set VPC options like DHCP.
You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC.
Your VPC must meet the following characteristics:
- The VPC must not use the kubernetes.io/cluster/.*: owned tag.

  The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify.

- You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve the cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation. You can check both attributes with the AWS CLI, as shown after this list.

  If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone field in the install-config.yaml file.
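For example, you can check and, if necessary, set these attributes with the AWS CLI; the VPC ID here is a placeholder:

$ aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789abcdef0 --attribute enableDnsSupport
$ aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789abcdef0 --attribute enableDnsHostnames
$ aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 --enable-dns-hostnames '{"Value": true}'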
If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2 and ELB endpoints. To resolve this, you must create a VPC endpoint and attach it to the subnet that the clusters are using. The endpoints should be named as follows:
- ec2.<region>.amazonaws.com
- elasticloadbalancing.<region>.amazonaws.com
- s3.<region>.amazonaws.com
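One way to create such an interface endpoint is with the AWS CLI, where the service name takes the com.amazonaws.<region>.<service> form; the IDs and region below are placeholders:

$ aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-gov-west-1.ec2 \
    --subnet-ids subnet-0123456789abcdef0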
Required VPC components
You must provide a suitable VPC and subnets that allow communication to your machines.
| Component | AWS type | Description |
|---|---|---|
| VPC | AWS::EC2::VPC, AWS::EC2::VPCEndpoint | You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. |
| Public subnets | AWS::EC2::Subnet, AWS::EC2::SubnetNetworkAclAssociation | Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. |
| Internet gateway | AWS::EC2::InternetGateway, AWS::EC2::VPCGatewayAttachment, AWS::EC2::RouteTable, AWS::EC2::Route, AWS::EC2::SubnetRouteTableAssociation, AWS::EC2::NatGateway, AWS::EC2::EIP | You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. |
| Network access control | AWS::EC2::NetworkAcl, AWS::EC2::NetworkAclEntry | You must allow the VPC to access the ports in the following table. |
| Private subnets | AWS::EC2::Subnet, AWS::EC2::RouteTable, AWS::EC2::SubnetRouteTableAssociation | Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. |

| Port | Reason |
|---|---|
| 80 | Inbound HTTP traffic |
| 443 | Inbound HTTPS traffic |
| 22 | Inbound SSH traffic |
| 1024 - 65535 | Inbound ephemeral traffic |
| 0 - 65535 | Outbound ephemeral traffic |
5.10.5.2. VPC validation
To ensure that the subnets that you provide are suitable, the installation program confirms the following data:
- All the subnets that you specify exist.
- You provide private subnets.
- The subnet CIDRs belong to the machine CIDR that you specified.
- You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone.
- You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for.
If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used.
5.10.5.3. Division of permissions
Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules.
The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes.
5.10.5.4. Isolation between clusters
If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways:
- You can install multiple OpenShift Container Platform clusters in the same VPC.
- ICMP ingress is allowed from the entire network.
- TCP 22 ingress (SSH) is allowed to the entire network.
- Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network.
- Control plane TCP 22623 ingress (MCS) is allowed to the entire network.
5.10.6. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.10, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
5.10.7. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging is required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name>

Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note: If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

View the public SSH key:
$ cat <path>/<file_name>.pub

For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

$ cat ~/.ssh/id_ed25519.pub

Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

Note: On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

Note: If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name>

Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
5.10.8. Obtaining an AWS Marketplace image
If you are deploying an OpenShift Container Platform cluster using an AWS Marketplace image, you must first subscribe through AWS. Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy worker nodes.
Prerequisites
- You have an AWS account to purchase the offer. This account does not have to be the same account that is used to install the cluster.
Procedure
- Complete the OpenShift Container Platform subscription from the AWS Marketplace.
- Record the AMI ID for your specific region. As part of the installation process, you must update the install-config.yaml file with this value before deploying the cluster.
Sample install-config.yaml file with AWS Marketplace worker nodes
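The sample file content was not preserved here; the following sketch shows the general shape, with the compute stanza pinned to a Marketplace AMI. The AMI ID, instance type, region, and other values are placeholders to replace with your own:

apiVersion: v1
baseDomain: example.com
compute:
- hyperthreading: Enabled
  name: worker
  platform:
    aws:
      amiID: ami-0123456789abcdef0
      type: m5.4xlarge
  replicas: 3
metadata:
  name: test-cluster
platform:
  aws:
    region: us-east-2
sshKey: ssh-ed25519 AAAA...
pullSecret: '{"auths": ...}'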
5.10.9. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
- Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files.
Important: The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both are required to delete the cluster.

Important: Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz

- Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
5.10.10. Manually creating the installation configuration file
Installing the cluster requires that you manually generate the installation configuration file.
Prerequisites
- You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery.
- You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Create an installation directory to store your required installation assets in:
$ mkdir <installation_directory>

Important: You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory>.

Note: You must name this configuration file install-config.yaml.

Back up the install-config.yaml file so that you can use it to install multiple clusters.

Important: The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.
5.10.10.1. Installation configuration parameters
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.
After installation, you cannot modify these parameters in the install-config.yaml file.
5.10.10.1.1. Required configuration parameters
Required installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| apiVersion | The API version for the install-config.yaml content. The current version is v1. The installation program may also support older API versions. | String |
| baseDomain | The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. | A fully-qualified domain or subdomain name, such as example.com. |
| metadata | Kubernetes resource ObjectMeta, from which only the name parameter is consumed. | Object |
| metadata.name | The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}. | String of lowercase letters, hyphens (-), and periods (.), such as dev. |
| platform | The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, gcp, openstack, ovirt, vsphere, or {}. For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. | Object |
| pullSecret | Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. | A JSON pull secret, for example: {"auths":{"cloud.openshift.com":{"auth":"b3Blb=","email":"you@example.com"}}} |
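For orientation, a minimal skeleton that sets only the required parameters above might look like the following sketch; every value is a placeholder.

apiVersion: v1
baseDomain: example.com
metadata:
  name: dev # DNS records become subdomains of dev.example.com
platform:
  aws:
    region: us-west-2
pullSecret: '{"auths": ...}' # elided: your pull secret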
5.10.10.1.2. Network configuration parameters
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.
Only IPv4 addresses are supported.
| Parameter | Description | Values |
|---|---|---|
| networking | The configuration for the cluster network. | Object. Note: You cannot modify parameters specified by the networking object after installation. |
| networking.networkType | The cluster network provider Container Network Interface (CNI) plugin to install. | Either OpenShiftSDN or OVNKubernetes. The default value is OpenShiftSDN. |
| networking.clusterNetwork | The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 |
| networking.clusterNetwork.cidr | Required if you use networking.clusterNetwork. An IP address block. An IPv4 network. | An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32. |
| networking.clusterNetwork.hostPrefix | The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr, which allows for 510 (2^(32 - 23) - 2) pod IP addresses. | A subnet prefix. The default value is 23. |
| networking.serviceNetwork | The IP address block for services. The default value is 172.30.0.0/16. The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. | An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 |
| networking.machineNetwork | The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 |
| networking.machineNetwork.cidr | Required if you use networking.machineNetwork. An IP address block. | An IP network block in CIDR notation. For example, 10.0.0.0/16. Note: Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. |
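Putting these parameters together, a networking stanza that spells out the documented defaults might look like the following sketch; adjust the CIDRs so that they do not overlap with your existing infrastructure.

networking:
  networkType: OpenShiftSDN
  clusterNetwork:
  - cidr: 10.128.0.0/14 # pod IPs; a /23 host prefix yields 510 pod IPs per node
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16 # a single block only
  machineNetwork:
  - cidr: 10.0.0.0/16 # match the CIDR that the preferred NIC resides in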
5.10.10.1.3. Optional configuration parameters
Optional installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| additionalTrustBundle | A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. | String |
| cgroupsV2 | Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. | true or false. The default is false. |
| compute | The configuration for the machines that comprise the compute nodes. | Array of MachinePool objects. |
| compute.architecture | Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64. | String |
| compute.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| compute.name | Required if you use compute. The name of the machine pool. | worker |
| compute.platform | Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. | aws, azure, gcp, openstack, ovirt, vsphere, or {} |
| compute.replicas | The number of compute machines, which are also known as worker machines, to provision. | A positive integer greater than or equal to 2. The default value is 3. |
| controlPlane | The configuration for the machines that comprise the control plane. | Array of MachinePool objects. |
| controlPlane.architecture | Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64. | String |
| controlPlane.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| controlPlane.name | Required if you use controlPlane. The name of the machine pool. | master |
| controlPlane.platform | Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. | aws, azure, gcp, openstack, ovirt, vsphere, or {} |
| controlPlane.replicas | The number of control plane machines to provision. | The only supported value is 3, which is the default value. |
| credentialsMode | The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note: Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note: If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint, Passthrough, or Manual. | Mint, Passthrough, Manual, or an empty string (""). |
| fips | Enable or disable FIPS mode. The default is false (not enabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important: To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note: If you are using Azure File storage, you cannot enable FIPS mode. | false or true |
| imageContentSources | Sources and repositories for the release-image content. | Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table. |
| imageContentSources.source | Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications. | String |
| imageContentSources.mirrors | Specify one or more repositories that may also contain the same images. | Array of strings |
| publish | How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes. | Internal or External. The default value is External. |
| sshKey | The SSH key or keys to authenticate access to your cluster machines. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. | One or more keys. For example: sshKey: <key1> <key2> <key3> |
5.10.10.1.4. Optional AWS configuration parameters
Optional AWS configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| compute.platform.aws.amiID | The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. |
| compute.platform.aws.iamRole | A pre-existing AWS IAM role applied to the compute machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. | The name of a valid AWS IAM role. |
| compute.platform.aws.rootVolume.iops | The Input/Output Operations Per Second (IOPS) that is reserved for the root volume. | Integer, for example 4000. |
| compute.platform.aws.rootVolume.size | The size in GiB of the root volume. | Integer, for example 500. |
| compute.platform.aws.rootVolume.type | The type of the root volume. | Valid AWS EBS volume type, such as io1. |
| compute.platform.aws.rootVolume.kmsKeyARN | The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt OS volumes of worker nodes with a specific KMS key. | Valid key ID or the key ARN. |
| compute.platform.aws.type | The EC2 instance type for the compute machines. | Valid AWS instance type, such as m6i.large. |
| compute.platform.aws.zones | The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone. | A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence. |
| compute.aws.region | The AWS region that the installation program creates compute resources in. | Any valid AWS region, such as us-east-1. Important: When running on ARM based AWS instances, ensure that you enter a region where AWS Graviton processors are available. See Global availability map in the AWS documentation. |
| controlPlane.platform.aws.amiID | The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. |
| controlPlane.platform.aws.iamRole | A pre-existing AWS IAM role applied to the control plane machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. | The name of a valid AWS IAM role. |
| controlPlane.platform.aws.rootVolume.kmsKeyARN | The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt OS volumes of control plane nodes with a specific KMS key. | Valid key ID or the key ARN. |
| controlPlane.platform.aws.type | The EC2 instance type for the control plane machines. | Valid AWS instance type, such as m6i.xlarge. |
| controlPlane.platform.aws.zones | The availability zones where the installation program creates machines for the control plane machine pool. | A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence. |
| controlPlane.aws.region | The AWS region that the installation program creates control plane resources in. | Valid AWS region, such as us-east-1. |
| platform.aws.amiID | The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. |
| platform.aws.hostedZone | An existing Route 53 private hosted zone for the cluster. You can only use a pre-existing hosted zone when also supplying your own VPC. The hosted zone must already be associated with the user-provided VPC before installation. Also, the domain of the hosted zone must be the cluster domain or a parent of the cluster domain. If undefined, the installation program creates a new hosted zone. | String, for example Z3URY6TWQ91KVV. |
| platform.aws.serviceEndpoints.name | The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services. | Valid AWS service endpoint name. |
| platform.aws.serviceEndpoints.url | The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate. | Valid AWS service endpoint URL. |
| platform.aws.userTags | A map of keys and values that the installation program adds as tags to all resources that it creates. | Any valid YAML map, such as key value pairs in the <key>: <value> format. |
| platform.aws.subnets | If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnets for the cluster to use. The subnets must be part of the same machineNetwork[].cidr ranges that you specify. | Valid subnet IDs. |
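As a sketch of how several of these parameters combine, the following platform stanza sets a region, tags, existing subnets, a custom AMI, a custom service endpoint, and an existing hosted zone; all values are placeholders.

platform:
  aws:
    region: us-west-2
    userTags:
      adminContact: jdoe
      costCenter: "7536"
    subnets:
    - subnet-0123456789abcdef0
    - subnet-0fedcba9876543210
    amiID: ami-0123456789abcdef0
    serviceEndpoints:
    - name: ec2
      url: https://vpce-<id>.ec2.us-west-2.vpce.amazonaws.com
    hostedZone: Z3URY6TWQ91KVV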
5.10.10.2. Minimum resource requirements for cluster installation
Each cluster machine must meet the following minimum requirements:
| Machine | Operating System | vCPU [1] | Virtual RAM | Storage | IOPS [2] |
|---|---|---|---|---|---|
| Bootstrap | RHCOS | 4 | 16 GB | 100 GB | 300 |
| Control plane | RHCOS | 4 | 16 GB | 100 GB | 300 |
| Compute | RHCOS, RHEL 8.4, or RHEL 8.5 [3] | 2 | 8 GB | 100 GB | 300 |
- One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core × cores) × sockets = vCPUs.
- OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.
- As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines has been removed in OpenShift Container Platform 4.10 and later.

If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform.
5.10.10.3. Tested instance types for AWS
The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform.
Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation".
Example 5.24. Machine types based on 64-bit x86 architecture
- c4.*
- c5.*
- c5a.*
- i3.*
- m4.*
- m5.*
- m5a.*
- m6i.*
- r4.*
- r5.*
- r5a.*
- r6i.*
- t3.*
- t3a.*
5.10.10.4. Tested instance types for AWS on 64-bit ARM infrastructures
The following Amazon Web Services (AWS) ARM64 instance types have been tested with OpenShift Container Platform.
Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation".
Example 5.25. Machine types based on 64-bit ARM architecture
- c6g.*
- m6g.*
5.10.10.5. Sample customized install-config.yaml file for AWS
You can customize the installation configuration file (install-config.yaml) to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually.
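The sample block itself did not survive in this copy of the document. The following sketch reconstructs a representative customized file, with numbered comments keyed to the callout notes below; all values are placeholders, and the exact layout of the original sample may differ.

apiVersion: v1
baseDomain: example.com # (1)
credentialsMode: Mint # (2)
controlPlane: # (3) (4)
  hyperthreading: Enabled # (5)
  name: master
  platform:
    aws:
      rootVolume:
        iops: 4000
        size: 500
        type: io1 # (6)
      type: m6i.xlarge
      zones:
      - us-west-2a
      - us-west-2b
  replicas: 3
compute: # (7)
- hyperthreading: Enabled # (8)
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 # (9)
      type: c5.4xlarge
      zones:
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster # (10)
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 # (11)
    subnets: # (12)
    - subnet-0123456789abcdef0
    amiID: ami-0123456789abcdef0 # (13)
    serviceEndpoints: # (14)
    - name: ec2
      url: https://vpce-<id>.ec2.us-west-2.vpce.amazonaws.com
    hostedZone: Z3URY6TWQ91KVV # (15)
fips: false # (16)
sshKey: ssh-ed25519 AAAA... # (17)
publish: Internal # (18)
pullSecret: '{"auths": ...}' # (19)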
- 1 10 11 19: Required.
- 2: Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.
- 3 7: If you do not provide these parameters and values, the installation program provides the default value.
- 4: The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
- 5 8: Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.
- 6 9: To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.
- 12: If you provide your own VPC, specify subnets for each availability zone that your cluster uses.
- 13: The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.
- 14: The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate.
- 15: The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone.
- 16: Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important: To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.
- 17: You can optionally provide the sshKey value that you use to access the machines in your cluster. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
- 18: How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External.
5.10.10.6. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Prerequisites
- You have an existing install-config.yaml file.
- You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.

  Note: The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

  For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
- You have added the ec2.<region>.amazonaws.com, elasticloadbalancing.<region>.amazonaws.com, and s3.<region>.amazonaws.com endpoints to your VPC endpoint. These endpoints are required to complete requests from the nodes to the AWS EC2 API. Because the proxy works on the container level, not the node level, you must route these requests to the AWS EC2 API through the AWS private network. Adding the public IP address of the EC2 API to your allowlist in your proxy server is not sufficient.
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:
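The example block was not preserved in this copy of the document. A minimal sketch of the proxy stanza, with numbered comments keyed to the callout notes below; the credentials, addresses, and certificate are placeholders.

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<password>@<ip>:<port> # (1)
  httpsProxy: https://<username>:<password>@<ip>:<port> # (2)
  noProxy: example.com # (3)
additionalTrustBundle: | # (4)
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----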
- 1: A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
- 2: A proxy URL to use for creating HTTPS connections outside the cluster.
- 3: A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
- 4: If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
Note: The installation program does not support the proxy readinessEndpoints field.

Note: If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:

$ ./openshift-install wait-for install-complete --log-level debug

- Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
5.10.11. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
- Configure an account with the cloud platform that hosts your cluster.
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \
    --log-level=info

For <installation_directory>, specify the location of your customized ./install-config.yaml file. To view different installation details, specify warn, debug, or error instead of info.

Note: If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

Note: The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

Important:
- The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
Important: You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.

Note: The elevated permissions provided by the AdministratorAccess policy are required only during installation.
5.10.12. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.10. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 Linux Client entry and save the file.
Unpack the archive:

$ tar xvf <file>

Place the oc binary in a directory that is on your PATH.

To check your PATH, execute the following command:

$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.

To check your PATH, open the command prompt and execute the following command:

C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.10 MacOSX Client entry and save the file.
- Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.

To check your PATH, open a terminal and execute the following command:

$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
5.10.13. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the oc CLI.
Procedure
Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig

For <installation_directory>, specify the path to the directory that you stored the installation files in.
Verify you can run oc commands successfully using the exported configuration:

$ oc whoami

Example output

system:admin
5.10.14. Logging in to the cluster by using the web console
The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.
Prerequisites
- You have access to the installation host.
- You completed a cluster installation and all cluster Operators are available.
Procedure
Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

$ cat <installation_directory>/auth/kubeadmin-password

Note: Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

List the OpenShift Container Platform web console route:

$ oc get routes -n openshift-console | grep 'console-openshift'

Note: Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

Example output

console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None
Navigate to the route detailed in the output of the preceding command in a web browser and log in as the
kubeadminuser.
5.10.15. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
5.10.16. Next steps
- Validating an installation.
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.
- If necessary, you can remove cloud provider credentials.
5.11. Installing a cluster on AWS into a Top Secret Region

In OpenShift Container Platform version 4.10, you can install a cluster on Amazon Web Services (AWS) into a Commercial Cloud Services (C2S) Top Secret Region. To configure the region, modify parameters in the install-config.yaml file before you install the cluster.
5.11.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
You configured an AWS account to host the cluster.

Important: If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multifactor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.
- If you use a firewall, you configured it to allow the sites that your cluster requires access to.
- If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials.

5.11.2. AWS Top Secret Region
OpenShift Container Platform supports deploying a cluster to an AWS Commercial Cloud Services (C2S) Top Secret Region.
The C2S Top Secret Region does not have a published Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) to select, so you must upload a custom AMI that belongs to that region.

The following AWS Top Secret Region partition is supported:

- us-iso-east-1

The maximum supported MTU in an AWS Top Secret Region is not the same as in AWS commercial regions. For more information about configuring MTU during installation, see the Cluster Network Operator configuration object section in Installing a cluster on AWS with network customizations.
5.11.3. Installation requirements

Red Hat does not publish a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image for the AWS Top Secret Region.
Before you can install the cluster, you must:
- Upload a custom RHCOS AMI.
- Manually create the installation configuration file (install-config.yaml).
You cannot use the OpenShift Container Platform installation program to create the installation configuration file. The installer does not list an AWS region without native support for an RHCOS AMI.
You must also define a custom CA certificate in the additionalTrustBundle field of the install-config.yaml file because the AWS API requires a custom CA trust bundle. To allow the installation program to access the AWS API, the CA certificates must also be defined on the machine that runs the installation program. You must add the CA bundle to the trust store on the machine, use the AWS_CA_BUNDLE environment variable, or define the CA bundle in the ca_bundle field of the AWS config file.
5.11.4. Private clusters
You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet.
Public zones are not supported in Route 53 in an AWS Top Secret Region. Therefore, clusters must be private if they are deployed to an AWS Top Secret Region.
By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet.
If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private.
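For example, a LoadBalancer Service can be kept off the internet with the standard AWS internal load balancer annotation. This is a sketch with a hypothetical Service name, selector, and ports:

apiVersion: v1
kind: Service
metadata:
  name: example-internal-lb # hypothetical name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true" # keeps the ELB internal
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
  - port: 443
    targetPort: 8443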
To deploy a private cluster, you must:
- Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network.
Deploy from a machine that has access to:
- The API services for the cloud to which you provision.
- The hosts on the network that you provision.
- The internet to obtain installation media.
You can use any machine that meets these access requirements and follows your company’s guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN.
5.11.4.1. Private clusters in AWS
To create a private cluster on Amazon Web Services (AWS), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for access from only the private network.
The cluster still requires access to the internet to access the AWS APIs.
The following items are not required or created when you install a private cluster:
- Public subnets
- Public load balancers, which support public ingress
- A public Route 53 zone that matches the baseDomain for the cluster
The installation program does use the baseDomain that you specify to create a private Route 53 zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.
5.11.4.1.1. Limitations
The ability to add public functionality to a private cluster is limited.
- You cannot make the Kubernetes API endpoints public after installation without taking additional actions, including creating public subnets in the VPC for each availability zone in use, creating a public load balancer, and configuring the control plane security groups to allow traffic from the internet on 6443 (Kubernetes API port).
- If you use a public Service type load balancer, you must tag a public subnet in each availability zone with kubernetes.io/cluster/<cluster-infra-id>: shared so that AWS can use them to create public load balancers (see the sketch after this list).
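As a sketch, the tag can be applied with the AWS CLI; the subnet ID is a placeholder and <cluster-infra-id> is your cluster's infrastructure ID:

$ aws ec2 create-tags --resources subnet-0123456789abcdef0 \
    --tags Key=kubernetes.io/cluster/<cluster-infra-id>,Value=shared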
5.11.5. About using a custom VPC
In OpenShift Container Platform 4.10, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company’s guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option.
Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets to which you install your cluster yourself.
5.11.5.1. Requirements for using your VPC
The installation program no longer creates the following components:
- Internet gateways
- NAT gateways
- Subnets
- Route tables
- VPCs
- VPC DHCP options
- VPC endpoints
The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail.
If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Amazon VPC console wizard configurations and Work with VPCs and subnets in the AWS documentation for more information on creating and managing an AWS VPC.
The installation program cannot:
- Subdivide network ranges for the cluster to use.
- Set route tables for the subnets.
- Set VPC options like DHCP.
You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC.
Your VPC must meet the following characteristics:
- The VPC must not use the kubernetes.io/cluster/.*: owned tag.

  The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify.
- You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve the cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation.

  If you prefer to use your own Route 53 private hosted zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone field in the install-config.yaml file.
- A cluster in a Top Secret Region is unable to reach the public IP addresses for the EC2 and ELB endpoints. You must create a VPC endpoint and attach it to the subnet that the clusters are using. Name the endpoints as follows; a CLI sketch for creating such an endpoint follows this list:
  - elasticloadbalancing.<region>.c2s.ic.gov
  - ec2.<region>.c2s.ic.gov
  - s3.<region>.c2s.ic.gov
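One way to create such an interface endpoint is with the AWS CLI. This is a sketch only; the VPC ID, subnet ID, and the exact C2S service-name format are assumptions:

$ aws ec2 create-vpc-endpoint --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Interface \
    --service-name ec2.<region>.c2s.ic.gov \
    --subnet-ids subnet-0123456789abcdef0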
Required VPC components
You must provide a suitable VPC and subnets that allow communication to your machines.
| Component | AWS type | Description |
|---|---|---|
| VPC | AWS::EC2::VPC, AWS::EC2::VPCEndpoint | You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. |
| Public subnets | AWS::EC2::Subnet, AWS::EC2::SubnetNetworkAclAssociation | Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. |
| Internet gateway | AWS::EC2::InternetGateway, AWS::EC2::VPCGatewayAttachment, AWS::EC2::RouteTable, AWS::EC2::Route, AWS::EC2::SubnetRouteTableAssociation, AWS::EC2::NatGateway, AWS::EC2::EIP | You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. |
| Network access control | AWS::EC2::NetworkAcl, AWS::EC2::NetworkAclEntry | You must allow the VPC to access the following ports: 80 (inbound HTTP traffic), 443 (inbound HTTPS traffic), 22 (inbound SSH traffic), 1024-65535 (inbound ephemeral traffic), and 0-65535 (outbound ephemeral traffic). |
| Private subnets | AWS::EC2::Subnet, AWS::EC2::RouteTable, AWS::EC2::SubnetRouteTableAssociation | Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. |
5.11.5.2. VPC validation
To ensure that the subnets that you provide are suitable, the installation program confirms the following data:
- All the subnets that you specify exist.
- You provide private subnets.
- The subnet CIDRs belong to the machine CIDR that you specified.
- You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone.
- You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for.
If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used.
5.11.5.3. Division of permissions

Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules.
The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes.
5.11.5.4. Isolation between clusters
If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways:
- You can install multiple OpenShift Container Platform clusters in the same VPC.
- ICMP ingress is allowed from the entire network.
- TCP 22 ingress (SSH) is allowed to the entire network.
- Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network.
- Control plane TCP 22623 ingress (MCS) is allowed to the entire network.
5.11.6. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.10, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
5.11.7. Uploading a custom RHCOS AMI in AWS
If you are deploying to a custom Amazon Web Services (AWS) region, you must upload a custom Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) that belongs to that region.
Prerequisites
- You configured an AWS account.
- You created an Amazon S3 bucket with the required IAM service role.
- You uploaded your RHCOS VMDK file to Amazon S3. The RHCOS VMDK file must be the highest version that is less than or equal to the OpenShift Container Platform version you are installing.
- You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer.
Procedure
Export your AWS profile as an environment variable:

$ export AWS_PROFILE=<aws_profile>

Export the region to associate with your custom AMI as an environment variable:

$ export AWS_DEFAULT_REGION=<aws_region>

Export the version of RHCOS you uploaded to Amazon S3 as an environment variable:

$ export RHCOS_VERSION=<version>

Export the Amazon S3 bucket name as an environment variable:

$ export VMIMPORT_BUCKET_NAME=<s3_bucket_name>

Create the containers.json file and define your RHCOS VMDK file:
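The file contents were not preserved in this copy of the document. The following is a sketch of the expected format, following the AWS import-snapshot disk container schema; the description and S3 key are placeholders:

{
   "Description": "rhcos-<version>-x86_64-aws.x86_64",
   "Format": "vmdk",
   "UserBucket": {
      "S3Bucket": "<s3_bucket_name>",
      "S3Key": "rhcos-<version>-x86_64-aws.x86_64.vmdk"
   }
}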
Import the RHCOS disk as an Amazon EBS snapshot:

$ aws ec2 import-snapshot --region ${AWS_DEFAULT_REGION} \
    --description "<description>" \
    --disk-container "file://<file_path>/containers.json"

For <description>, specify a description of your RHCOS disk being imported, such as rhcos-vmdk. For <file_path>, specify the file path to the JSON file.

Check the status of the image import:
$ watch -n 5 aws ec2 describe-import-snapshot-tasks --region ${AWS_DEFAULT_REGION}

Copy the SnapshotId from the output; you use it to register the image.

Create a custom RHCOS AMI from the RHCOS snapshot:
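The command block was not preserved in this copy of the document. The following is a sketch using the standard aws ec2 register-image call; the snapshot ID placeholder is the SnapshotId that you copied, and the name and description values are assumptions:

$ aws ec2 register-image \
    --region ${AWS_DEFAULT_REGION} \
    --architecture x86_64 \
    --description "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64" \
    --ena-support \
    --name "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64" \
    --virtualization-type hvm \
    --root-device-name '/dev/xvda' \
    --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}'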
To learn more about these APIs, see the AWS documentation for importing snapshots and creating EBS-backed AMIs.
5.11.8. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging are required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name>

Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note: If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

View the public SSH key:

$ cat <path>/<file_name>.pub

For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

$ cat ~/.ssh/id_ed25519.pub
./openshift-install gathercommand.NoteOn some distributions, default SSH private key identities such as
~/.ssh/id_rsaand~/.ssh/id_dsaare managed automatically.If the
ssh-agentprocess is not already running for your local user, start it as a background task:eval "$(ssh-agent -s)"
$ eval "$(ssh-agent -s)"Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Agent pid 31874
Agent pid 31874Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteIf your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name>

Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
5.11.9. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
- Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files.
Important: The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

Important: Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

$ tar -xvf openshift-install-linux.tar.gz

- Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
5.11.10. Manually creating the installation configuration file
Installing the cluster requires that you manually generate the installation configuration file.
Prerequisites
- You have uploaded a custom RHCOS AMI.
- You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery.
- You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Create an installation directory to store your required installation assets in:

$ mkdir <installation_directory>

Important: You must create a directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory>.
Note: You must name this configuration file install-config.yaml.
Back up the install-config.yaml file so that you can use it to install multiple clusters.
Important: The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.
5.11.10.1. Installation configuration parameters
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.
After installation, you cannot modify these parameters in the install-config.yaml file.
5.11.10.1.1. Required configuration parameters
Required installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| apiVersion | The API version for the install-config.yaml content. The current version is v1. The installation program may also support older API versions. | String |
| baseDomain | The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. | A fully-qualified domain or subdomain name, such as example.com. |
| metadata | Kubernetes resource ObjectMeta, from which only the name parameter is consumed. | Object |
| metadata.name | The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}. | String of lowercase letters, hyphens (-), and periods (.), such as dev. |
| platform | The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}. For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. | Object |
| pullSecret | Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. | A JSON pull secret, for example: {"auths": {"cloud.openshift.com": {"auth": "b3Blb=", "email": "you@example.com"}}} |
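Taken together, the required parameters form only a small skeleton of the file. The following minimal sketch is illustrative only; the domain, cluster name, region, and pull secret shown are placeholders, not values from this procedure:

apiVersion: v1
baseDomain: example.com        # cluster DNS names are built as <metadata.name>.<baseDomain>
metadata:
  name: test-cluster           # lowercase letters, hyphens (-), and periods (.) only
platform:
  aws:
    region: us-east-1          # any valid region for your account
pullSecret: '{"auths": ...}'   # truncated; paste the full secret from the OpenShift Cluster Manager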
5.11.10.1.2. Network configuration parameters
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.
Only IPv4 addresses are supported.
| Parameter | Description | Values |
|---|---|---|
| networking | The configuration for the cluster network. Note: You cannot modify parameters specified by the networking object after installation. | Object |
| networking.networkType | The cluster network provider Container Network Interface (CNI) plugin to install. | Either OpenShiftSDN or OVNKubernetes. The default value is OpenShiftSDN. |
| networking.clusterNetwork | The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 |
| networking.clusterNetwork.cidr | Required if you use networking.clusterNetwork. An IP address block. An IPv4 network. | An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32. |
| networking.clusterNetwork.hostPrefix | The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. | A subnet prefix. The default value is 23. |
| networking.serviceNetwork | The IP address block for services. The default value is 172.30.0.0/16. The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. | An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 |
| networking.machineNetwork | The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 |
| networking.machineNetwork.cidr | Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. Note: Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. | An IP network block in CIDR notation. For example, 10.0.0.0/16. |
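As an illustration of how these parameters fit together, a networking stanza that simply restates the documented defaults might look like the following sketch; omitting the stanza entirely yields the same values:

networking:
  networkType: OpenShiftSDN      # the default; OVNKubernetes is the alternative
  clusterNetwork:
  - cidr: 10.128.0.0/14          # pod IP block
    hostPrefix: 23               # /23 per node: 2^(32 - 23) - 2 = 510 pod addresses
  serviceNetwork:
  - 172.30.0.0/16                # a single block only, for both SDN and OVN-Kubernetes
  machineNetwork:
  - cidr: 10.0.0.0/16            # match the CIDR that the preferred NIC resides in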
5.11.10.1.3. Optional configuration parameters
Optional installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| additionalTrustBundle | A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. | String |
| cgroupsV2 | Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. | true |
| compute | The configuration for the machines that comprise the compute nodes. | Array of MachinePool objects. |
| compute.architecture | Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). | String |
| compute.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| compute.name | Required if you use compute. The name of the machine pool. | worker |
| compute.platform | Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. | aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {} |
| compute.replicas | The number of compute machines, which are also known as worker machines, to provision. | A positive integer greater than or equal to 2. The default value is 3. |
| controlPlane | The configuration for the machines that comprise the control plane. | Array of MachinePool objects. |
| controlPlane.architecture | Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). | String |
| controlPlane.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| controlPlane.name | Required if you use controlPlane. The name of the machine pool. | master |
| controlPlane.platform | Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. | aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {} |
| controlPlane.replicas | The number of control plane machines to provision. | The only supported value is 3, which is the default value. |
| credentialsMode | The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note: Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note: If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint, Passthrough, or Manual. | Mint, Passthrough, Manual, or an empty string (""). |
| fips | Enable or disable FIPS mode. The default is false (not enabled). Important: To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note: If you are using Azure File storage, you cannot enable FIPS mode. | false or true |
| imageContentSources | Sources and repositories for the release-image content. | Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table. |
| imageContentSources.source | Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications. | String |
| imageContentSources.mirrors | Specify one or more repositories that may also contain the same images. | Array of strings |
| publish | How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. | Internal or External. The default value is External. |
| sshKey | The SSH key or keys to authenticate access to your cluster machines. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. | One or more keys. For example: sshKey: <key1> <key2> <key3> |
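To make the machine-pool parameters concrete, the following hedged sketch disables simultaneous multithreading on the compute pool and keeps the mandatory three control plane replicas; the instance type is an example, not a recommendation:

compute:
- name: worker
  architecture: amd64
  hyperthreading: Disabled   # plan for dramatically decreased per-machine performance
  platform:
    aws:
      type: m5.2xlarge       # choose a larger instance type when disabling SMT
  replicas: 3
controlPlane:
  name: master
  architecture: amd64
  hyperthreading: Enabled
  platform: {}
  replicas: 3                # 3 is the only supported value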
5.11.10.1.4. Optional AWS configuration parameters
Optional AWS configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| compute.platform.aws.amiID | The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. |
| compute.platform.aws.iamRole | A pre-existing AWS IAM role applied to the compute machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. | The name of a valid AWS IAM role. |
| compute.platform.aws.rootVolume.iops | The Input/Output Operations Per Second (IOPS) that is reserved for the root volume. | Integer, for example 4000. |
| compute.platform.aws.rootVolume.size | The size in GiB of the root volume. | Integer, for example 500. |
| compute.platform.aws.rootVolume.type | The type of the root volume. | Valid AWS EBS volume type, such as io1. |
| compute.platform.aws.rootVolume.kmsKeyARN | The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt OS volumes of worker nodes with a specific KMS key. | Valid key ID or the key ARN. |
| compute.platform.aws.type | The EC2 instance type for the compute machines. | Valid AWS instance type, such as m5.xlarge. |
| compute.platform.aws.zones | The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone. | A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence. |
| compute.aws.region | The AWS region that the installation program creates compute resources in. | Any valid AWS region, such as us-east-1. Important: When running on ARM based AWS instances, ensure that you enter a region where AWS Graviton processors are available. See Global availability map in the AWS documentation. |
| controlPlane.platform.aws.amiID | The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. |
| controlPlane.platform.aws.iamRole | A pre-existing AWS IAM role applied to the control plane machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. | The name of a valid AWS IAM role. |
| controlPlane.platform.aws.rootVolume.kmsKeyARN | The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt OS volumes of control plane nodes with a specific KMS key. | Valid key ID and the key ARN. |
| controlPlane.platform.aws.type | The EC2 instance type for the control plane machines. | Valid AWS instance type, such as m5.xlarge. |
| controlPlane.platform.aws.zones | The availability zones where the installation program creates machines for the control plane machine pool. | A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence. |
| controlPlane.aws.region | The AWS region that the installation program creates control plane resources in. | Valid AWS region, such as us-east-1. |
| platform.aws.amiID | The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. |
| platform.aws.hostedZone | An existing Route 53 private hosted zone for the cluster. You can only use a pre-existing hosted zone when also supplying your own VPC. The hosted zone must already be associated with the user-provided VPC before installation. Also, the domain of the hosted zone must be the cluster domain or a parent of the cluster domain. If undefined, the installation program creates a new hosted zone. | String, for example Z3URY6TWQ91KVV. |
| platform.aws.serviceEndpoints.name | The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services. | Valid AWS service endpoint name. |
| platform.aws.serviceEndpoints.url | The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate. | Valid AWS service endpoint URL. |
| platform.aws.userTags | A map of keys and values that the installation program adds as tags to all resources that it creates. | Any valid YAML map, such as key value pairs in the <key>: <value> format. |
| platform.aws.subnets | If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnets for the cluster to use. The subnets must be part of the same machineNetwork[].cidr ranges that you specify. | Valid subnet IDs. |
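Assembled from the table above, a platform.aws stanza for a cluster in an existing VPC might look like the following sketch; every identifier shown (AMI, hosted zone, subnets, endpoint URL) is a placeholder:

platform:
  aws:
    region: us-east-1
    amiID: ami-0123456789abcdef0   # custom RHCOS AMI in the same region as the cluster
    hostedZone: Z3URY6TWQ91KVV     # pre-existing private hosted zone, already associated with the VPC
    subnets:                       # one subnet per availability zone that the cluster uses
    - subnet-0123456789abcdef0
    - subnet-0123456789abcdef1
    userTags:
      adminContact: jdoe           # added as a tag to every resource the installation program creates
    serviceEndpoints:              # only for alternative endpoints, such as FIPS endpoints
    - name: ec2
      url: https://vpce-id.ec2.us-east-1.vpce.amazonaws.com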
5.11.10.2. Tested instance types for AWS
The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform.
Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation".
Example 5.26. Machine types based on 64-bit x86 architecture for secret regions
- c4.*
- c5.*
- i3.*
- m4.*
- m5.*
- r4.*
- r5.*
- t3.*
5.11.10.3. Sample customized install-config.yaml file for AWS
You can customize the installation configuration file (install-config.yaml) to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually.
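An abridged sketch of such a file, keyed to the numbered callouts below, is shown here for orientation; all values are placeholders, and the trailing comments give the matching callout numbers:

apiVersion: v1
baseDomain: example.com                 # 1
credentialsMode: Mint                   # 2
controlPlane:                           # 3 4
  hyperthreading: Enabled               # 5
  name: master
  platform:
    aws:
      rootVolume:
        iops: 2000                      # 6
        size: 500
        type: io1
      type: m5.xlarge
  replicas: 3
compute:                                # 7
- hyperthreading: Enabled               # 8
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000                      # 9
        size: 500
        type: io1
      type: c5.4xlarge
  replicas: 3
metadata:
  name: test-cluster                    # 10
networking:                             # 11
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    subnets:                            # 12
    - subnet-1
    - subnet-2
    - subnet-3
    region: us-iso-east-1               # 13
    amiID: ami-96c6f8f7                 # 14
    serviceEndpoints:                   # 15
    - name: ec2
      url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
    hostedZone: Z3URY6TWQ91KVV          # 16
fips: false                             # 17
sshKey: ssh-ed25519 AAAA...             # 18
publish: Internal                       # 19
pullSecret: '{"auths": ...}'            # 20
additionalTrustBundle: |                # 21
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----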
- 1 10 11 13 20
- Required.
- 2
- Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.
- 3 7
- If you do not provide these parameters and values, the installation program provides the default value.
- 4
- The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
- 5 8
- Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.
- 6 9
- To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.
- 12
- If you provide your own VPC, specify subnets for each availability zone that your cluster uses.
- 14
- The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.
- 15
- The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate.
- 16
- The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone.
- 17
- Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
Important: To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.
- 18
- You can optionally provide the sshKey value that you use to access the machines in your cluster.
Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
- 19
- How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External.
- 21
- The custom CA certificate. This is required when deploying to the AWS C2S Top Secret Region because the AWS API requires a custom CA trust bundle.
5.11.10.4. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Prerequisites
- You have an existing install-config.yaml file.
- You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.
Note: The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:
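A representative sketch, with placeholder proxy endpoints and certificate (adjust for your environment):

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port>     # proxy for HTTP requests outside the cluster; the URL scheme must be http
  httpsProxy: https://<username>:<pswd>@<ip>:<port>   # proxy for HTTPS requests outside the cluster
  noProxy: example.com                                # destinations that bypass the proxy
additionalTrustBundle: |
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----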