Chapter 11. Installing a cluster on AWS China
In OpenShift Container Platform version 4.15, you can install a cluster to the following Amazon Web Services (AWS) China regions:
- cn-north-1 (Beijing)
- cn-northwest-1 (Ningxia)
11.1. Prerequisites
- You have an Internet Content Provider (ICP) license.
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
- You configured an AWS account to host the cluster.
- If you use a firewall, you configured it to allow the sites that your cluster requires access to.
If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.
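For reference, a long-term credentials profile in the shared AWS credentials file typically looks like the following sketch; the profile name and key values are placeholders:
# ~/.aws/credentials
[default]
aws_access_key_id = <access_key_id>
aws_secret_access_key = <secret_access_key>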
11.2. Installation requirements
Red Hat does not publish a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) for the AWS China regions.
Before you can install the cluster, you must:
- Upload a custom RHCOS AMI.
- Manually create the installation configuration file (install-config.yaml).
- Specify the AWS region, and the accompanying custom AMI, in the installation configuration file.
You cannot use the OpenShift Container Platform installation program to create the installation configuration file. The installer does not list an AWS region without native support for an RHCOS AMI.
11.3. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.15, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
11.4. Private clusters
You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet.
By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet.
If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private.
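For illustration, a minimal sketch of a Service manifest that is explicitly annotated as internal so that AWS provisions an internal load balancer; the name, selector, and ports are placeholder values:
apiVersion: v1
kind: Service
metadata:
  name: example-internal-lb
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
  - port: 443
    targetPort: 8443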
To deploy a private cluster, you must:
- Use existing networking that meets your requirements. Your cluster resources might be shared with other clusters on the network.
- Deploy from a machine that has access to:
- The API services for the cloud to which you provision.
- The hosts on the network that you provision.
- The internet to obtain installation media.
You can use any machine that meets these access requirements and follows your company’s guidelines. For example, this machine can be a bastion host on your cloud network.
AWS China does not support a VPN connection between the VPC and your network. For more information about the Amazon VPC service in the Beijing and Ningxia regions, see Amazon Virtual Private Cloud in the AWS China documentation.
11.4.1. Private clusters in AWS
To create a private cluster on Amazon Web Services (AWS), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for access from only the private network.
The cluster still requires access to the internet to access the AWS APIs.
The following items are not required or created when you install a private cluster:
- Public subnets
- Public load balancers, which support public ingress
- A public Route 53 zone that matches the baseDomain for the cluster
The installation program does use the baseDomain that you specify to create a private Route 53 zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.
11.4.1.1. Limitations
The ability to add public functionality to a private cluster is limited.
- You cannot make the Kubernetes API endpoints public after installation without taking additional actions, including creating public subnets in the VPC for each availability zone in use, creating a public load balancer, and configuring the control plane security groups to allow traffic from the internet on 6443 (Kubernetes API port).
- If you use a public Service type load balancer, you must tag a public subnet in each availability zone with kubernetes.io/cluster/<cluster-infra-id>: shared so that AWS can use them to create public load balancers. A sample tagging command follows this list.
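As a hedged example, you could add the tag to one public subnet with the AWS CLI; the subnet ID and infrastructure ID are placeholders:
$ aws ec2 create-tags \
    --resources <public_subnet_id> \
    --tags Key=kubernetes.io/cluster/<cluster-infra-id>,Value=shared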
11.5. About using a custom VPC
In OpenShift Container Platform 4.15, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company’s guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option.
Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets to which you install your cluster.
11.5.1. Requirements for using your VPC
The installation program no longer creates the following components:
- Internet gateways
- NAT gateways
- Subnets
- Route tables
- VPCs
- VPC DHCP options
- VPC endpoints
The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail.
If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Create a VPC in the Amazon Web Services documentation for more information about AWS VPC console wizard configurations and creating and managing an AWS VPC.
The installation program cannot:
- Subdivide network ranges for the cluster to use.
- Set route tables for the subnets.
- Set VPC options like DHCP.
You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC.
Your VPC must meet the following characteristics:
- The VPC must not use the kubernetes.io/cluster/.*: owned, Name, and openshift.io/cluster tags.
The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify. You cannot use a Name tag, because it overlaps with the EC2 Name field and the installation fails.
- If you want to extend your OpenShift Container Platform cluster into an AWS Outpost and have an existing Outpost subnet, the existing subnet must use the kubernetes.io/cluster/unmanaged: true tag. If you do not apply this tag, the installation might fail due to the Cloud Controller Manager creating a service load balancer in the Outpost subnet, which is an unsupported configuration.
- You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve the cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation. A sketch of enabling these attributes follows this list.
If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone and platform.aws.hostedZoneRole fields in the install-config.yaml file. You can use a private hosted zone from another account by sharing it with the account where you install the cluster. If you use a private hosted zone from another account, you must use the Passthrough or Manual credentials mode.
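As an illustrative sketch, both DNS attributes can be enabled on an existing VPC with the AWS CLI; the VPC ID is a placeholder:
$ aws ec2 modify-vpc-attribute --vpc-id <vpc_id> --enable-dns-support '{"Value": true}'
$ aws ec2 modify-vpc-attribute --vpc-id <vpc_id> --enable-dns-hostnames '{"Value": true}'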
If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available:
Option 1: Create VPC endpoints
Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows:
- ec2.<aws_region>.amazonaws.com.cn
- elasticloadbalancing.<aws_region>.amazonaws.com
- s3.<aws_region>.amazonaws.com
With this option, network traffic remains private between your VPC and the required AWS services.
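For example, a hedged sketch of creating one interface endpoint with the AWS CLI; the VPC ID, subnet IDs, and service name are placeholders for the values in your region. An S3 gateway endpoint is associated with route tables rather than subnets.
$ aws ec2 create-vpc-endpoint \
    --vpc-id <vpc_id> \
    --vpc-endpoint-type Interface \
    --service-name <service_name> \
    --subnet-ids <subnet_id_1> <subnet_id_2> <subnet_id_3>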
Option 2: Create a proxy without VPC endpoints
As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services.
Option 3: Create a proxy with VPC endpoints
As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows:
- ec2.<aws_region>.amazonaws.com.cn
- elasticloadbalancing.<aws_region>.amazonaws.com
- s3.<aws_region>.amazonaws.com
When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services.
Required VPC components
You must provide a suitable VPC and subnets that allow communication to your machines.
Component | AWS type | Description
---|---|---
VPC | | You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3.
Public subnets | | Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules.
Internet gateway | | You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios.
Network access control | | You must allow the VPC to access the ports listed in the following table.
Private subnets | | Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them.

Port | Reason
---|---
80 | Inbound HTTP traffic
443 | Inbound HTTPS traffic
22 | Inbound SSH traffic
1024 - 65535 | Inbound ephemeral traffic
0 - 65535 | Outbound ephemeral traffic
11.5.2. VPC validation
To ensure that the subnets that you provide are suitable, the installation program confirms the following data:
- All the subnets that you specify exist.
- You provide private subnets.
- The subnet CIDRs belong to the machine CIDR that you specified.
- You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone.
- You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for.
If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used.
11.5.3. Division of permissions
Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules.
The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes.
11.5.4. Isolation between clusters
If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways:
- You can install multiple OpenShift Container Platform clusters in the same VPC.
- ICMP ingress is allowed from the entire network.
- TCP 22 ingress (SSH) is allowed to the entire network.
- Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network.
- Control plane TCP 22623 ingress (MCS) is allowed to the entire network.
11.5.5. Optional: AWS security groups
By default, the installation program creates and attaches security groups to control plane and compute machines. The rules associated with the default security groups cannot be modified.
However, you can apply additional existing AWS security groups, which are associated with your existing VPC, to control plane and compute machines. Applying custom security groups can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines.
As part of the installation process, you apply custom security groups by modifying the install-config.yaml
file before deploying the cluster.
For more information, see "Applying existing AWS security groups to the cluster".
11.6. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys
list for the core
user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core
. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
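For example, after installation you could connect to a node as the core user with a command like the following; the node address is a placeholder:
$ ssh core@<node_ip_or_dns_name>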
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather
command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging is required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
- 1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note: If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.
Note: On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the
ssh-agent
process is not already running for your local user, start it as a background task:$ eval "$(ssh-agent -s)"
Example output
Agent pid 31874
NoteIf your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the
ssh-agent
:$ ssh-add <path>/<file_name> 1
- 1
- Specify the path and file name for your SSH private key, such as
~/.ssh/id_ed25519
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
11.7. Uploading a custom RHCOS AMI in AWS
If you are deploying to a custom Amazon Web Services (AWS) region, you must upload a custom Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) that belongs to that region.
Prerequisites
- You configured an AWS account.
- You created an Amazon S3 bucket with the required IAM service role.
- You uploaded your RHCOS VMDK file to Amazon S3. The RHCOS VMDK file must be the highest version that is less than or equal to the OpenShift Container Platform version you are installing.
- You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer.
Procedure
Export your AWS profile as an environment variable:
$ export AWS_PROFILE=<aws_profile> 1
- 1
- The AWS profile name that holds your AWS credentials, like
beijingadmin
.
Export the region to associate with your custom AMI as an environment variable:
$ export AWS_DEFAULT_REGION=<aws_region> 1
- 1
- The AWS region, like
cn-north-1
.
Export the version of RHCOS you uploaded to Amazon S3 as an environment variable:
$ export RHCOS_VERSION=<version> 1
- 1
- The RHCOS VMDK version, like
4.15.0
.
Export the Amazon S3 bucket name as an environment variable:
$ export VMIMPORT_BUCKET_NAME=<s3_bucket_name>
Create the containers.json file and define your RHCOS VMDK file:
$ cat <<EOF > containers.json
{
   "Description": "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64",
   "Format": "vmdk",
   "UserBucket": {
      "S3Bucket": "${VMIMPORT_BUCKET_NAME}",
      "S3Key": "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64.vmdk"
   }
}
EOF
Import the RHCOS disk as an Amazon EBS snapshot:
$ aws ec2 import-snapshot --region ${AWS_DEFAULT_REGION} \
    --description "<description>" \ 1
    --disk-container "file://<file_path>/containers.json" 2
Check the status of the image import:
$ watch -n 5 aws ec2 describe-import-snapshot-tasks --region ${AWS_DEFAULT_REGION}
Example output
{ "ImportSnapshotTasks": [ { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "ImportTaskId": "import-snap-fh6i8uil", "SnapshotTaskDetail": { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "DiskImageSize": 819056640.0, "Format": "VMDK", "SnapshotId": "snap-06331325870076318", "Status": "completed", "UserBucket": { "S3Bucket": "external-images", "S3Key": "rhcos-4.7.0-x86_64-aws.x86_64.vmdk" } } } ] }
Copy the SnapshotId to register the image.
Create a custom RHCOS AMI from the RHCOS snapshot:
$ aws ec2 register-image \
    --region ${AWS_DEFAULT_REGION} \
    --architecture x86_64 \ 1
    --description "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64" \ 2
    --ena-support \
    --name "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64" \ 3
    --virtualization-type hvm \
    --root-device-name '/dev/xvda' \
    --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4
To learn more about these APIs, see the AWS documentation for importing snapshots and creating EBS-backed AMIs.
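As an optional, hedged check, you could confirm that the new AMI is registered in your account, for example:
$ aws ec2 describe-images --region ${AWS_DEFAULT_REGION} --owners self \
    --filters "Name=name,Values=rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64"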
11.8. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on the host you are using for installation.
Prerequisites
- You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space.
Procedure
- Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider from the Run it yourself section of the page.
- Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer.
Place the downloaded file in the directory where you want to store the installation configuration files.
Important: The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of these files are required to delete the cluster.
- Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz
- Download your installation pull secret from Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
Alternatively, you can retrieve the installation program from the Red Hat Customer Portal, where you can specify a version of the installation program to download. However, you must have an active subscription to access this page.
11.9. Manually creating the installation configuration file
Installing the cluster requires that you manually create the installation configuration file.
Prerequisites
- You have uploaded a custom RHCOS AMI.
- You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery.
- You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Create an installation directory to store your required installation assets in:
$ mkdir <installation_directory>
Important: You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory>.
Note: You must name this configuration file install-config.yaml.
Back up the install-config.yaml file so that you can use it to install multiple clusters.
Important: The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.
11.9.1. Sample customized install-config.yaml file for AWS
You can customize the installation configuration file (install-config.yaml
) to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually.
apiVersion: v1
baseDomain: example.com 1
credentialsMode: Mint 2
controlPlane: 3 4
  hyperthreading: Enabled 5
  name: master
  platform:
    aws:
      zones:
      - cn-north-1a
      - cn-north-1b
      rootVolume:
        iops: 4000
        size: 500
        type: io1 6
      metadataService:
        authentication: Optional 7
      type: m6i.xlarge
  replicas: 3
compute: 8
- hyperthreading: Enabled 9
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 10
      metadataService:
        authentication: Optional 11
      type: c5.4xlarge
      zones:
      - cn-north-1a
  replicas: 3
metadata:
  name: test-cluster 12
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes 13
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: cn-north-1 14
    propagateUserTags: true 15
    userTags:
      adminContact: jdoe
      costCenter: 7536
    subnets: 16
    - subnet-1
    - subnet-2
    - subnet-3
    amiID: ami-96c6f8f7 17 18
    serviceEndpoints: 19
      - name: ec2
        url: https://vpce-id.ec2.cn-north-1.vpce.amazonaws.com.cn
    hostedZone: Z3URY6TWQ91KVV 20
fips: false 21
sshKey: ssh-ed25519 AAAA... 22
publish: Internal 23
pullSecret: '{"auths": ...}' 24
- 1 12 14 17 24
- Required.
- 2
- Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the
kube-system
namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. - 3 8 15
- If you do not provide these parameters and values, the installation program provides the default value.
- 4
- The
controlPlane
section is a single mapping, but thecompute
section is a sequence of mappings. To meet the requirements of the different data structures, the first line of thecompute
section must begin with a hyphen,-
, and the first line of thecontrolPlane
section must not. Only one control plane pool is used. - 5 9
- Whether to enable or disable simultaneous multithreading, or
hyperthreading
. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value toDisabled
. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.ImportantIf you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as
m4.2xlarge
orm5.2xlarge
, for your machines if you disable simultaneous multithreading. - 6 10
- To configure faster storage for etcd, especially for larger clusters, set the storage type as
io1
and setiops
to2000
. - 7 11
- Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to
Required
. To allow the use of both IMDSv1 and IMDSv2, set the parameter value toOptional
. If no value is specified, both IMDSv1 and IMDSv2 are allowed.
Note: The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets.
- 13
- The cluster network plugin to install. The default value
OVNKubernetes
is the only supported value. - 16
- If you provide your own VPC, specify subnets for each availability zone that your cluster uses.
- 18
- The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.
- 19
- The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the
https
protocol and the host must trust the certificate. - 20
- The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone.
- 21
- Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
Important:
To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures.
- 22
- You can optionally provide the
sshKey
value that you use to access the machines in your cluster.
Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your
ssh-agent
process uses. - 23
- How to publish the user-facing endpoints of your cluster. Set
publish
toInternal
to deploy a private cluster, which cannot be accessed from the internet. The default value isExternal
.
11.9.2. Minimum resource requirements for cluster installation
Each cluster machine must meet the following minimum requirements:
Machine | Operating System | vCPU [1] | Virtual RAM | Storage | Input/Output Per Second (IOPS)[2] |
---|---|---|---|---|---|
Bootstrap | RHCOS | 4 | 16 GB | 100 GB | 300 |
Control plane | RHCOS | 4 | 16 GB | 100 GB | 300 |
Compute | RHCOS, RHEL 8.6 and later [3] | 2 | 8 GB | 100 GB | 300 |
- One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core × cores) × sockets = vCPUs.
- OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.
- As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later.
As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires:
- x86-64 architecture requires x86-64-v2 ISA
- ARM64 architecture requires ARMv8.0-A ISA
- IBM Power architecture requires Power 9 ISA
- s390x architecture requires z14 ISA
For more information, see RHEL Architectures.
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform.
11.9.3. Tested instance types for AWS
The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform.
Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in the section named "Minimum resource requirements for cluster installation".
Example 11.1. Machine types based on 64-bit x86 architecture
- c4.*
- c5.*
- c5a.*
- i3.*
- m4.*
- m5.*
- m5a.*
- m6i.*
- r4.*
- r5.*
- r5a.*
- r6i.*
- t3.*
- t3a.*
11.9.4. Tested instance types for AWS on 64-bit ARM infrastructures
The following Amazon Web Services (AWS) 64-bit ARM instance types have been tested with OpenShift Container Platform.
Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation".
Example 11.2. Machine types based on 64-bit ARM architecture
- c6g.*
- m6g.*
- r8g.*
11.9.5. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml
file.
Prerequisites
-
You have an existing
install-config.yaml
file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the
Proxy
object’s spec.noProxy
field to bypass the proxy if necessary.
Note: The
Proxy
object status.noProxy
field is populated with the values of thenetworking.machineNetwork[].cidr
,networking.clusterNetwork[].cidr
, andnetworking.serviceNetwork[]
fields from your installation configuration.For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the
Proxy
objectstatus.noProxy
field is also populated with the instance metadata endpoint (169.254.169.254
).
Procedure
Edit your
install-config.yaml
file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3
additionalTrustBundle: | 4
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5
- 1
- A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
- 2
- A proxy URL to use for creating HTTPS connections outside the cluster.
- 3
- A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. If you have added the Amazon EC2, Elastic Load Balancing, and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field.
- 4
- If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.
- 5
- Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always. Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly.
Note: The installation program does not support the proxy readinessEndpoints field.
Note: If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:
$ ./openshift-install wait-for install-complete --log-level debug
- Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster
that uses the proxy settings in the provided install-config.yaml
file. If no proxy settings are provided, a cluster
Proxy
object is still created, but it will have a nil spec
.
Only the Proxy
object named cluster
is supported, and no additional proxies can be created.
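As an optional check after installation, you can inspect the resulting cluster-wide proxy object, for example:
$ oc get proxy cluster -o yaml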
11.9.6. Applying existing AWS security groups to the cluster
Applying existing AWS security groups to your control plane and compute machines can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines.
Prerequisites
- You have created the security groups in AWS. For more information, see the AWS documentation about working with security groups.
- The security groups must be associated with the existing VPC that you are deploying the cluster to. The security groups cannot be associated with another VPC.
- You have an existing install-config.yaml file.
Procedure
- In the install-config.yaml file, edit the compute.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your compute machines.
- Edit the controlPlane.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your control plane machines.
- Save the file and reference it when deploying the cluster.
Sample install-config.yaml file that specifies custom security groups
# ...
compute:
- hyperthreading: Enabled
  name: worker
  platform:
    aws:
      additionalSecurityGroupIDs:
        - sg-1 1
        - sg-2
  replicas: 3
controlPlane:
  hyperthreading: Enabled
  name: master
  platform:
    aws:
      additionalSecurityGroupIDs:
        - sg-3
        - sg-4
  replicas: 3
platform:
  aws:
    region: us-east-1
    subnets: 2
      - subnet-1
      - subnet-2
      - subnet-3
11.10. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc
) to interact with OpenShift Container Platform from a command-line interface. You can install oc
on Linux, Windows, or macOS.
If you installed an earlier version of oc
, you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc
.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc
) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the architecture from the Product Variant drop-down list.
- Select the appropriate version from the Version drop-down list.
- Click Download Now next to the OpenShift v4.15 Linux Clients entry and save the file.
Unpack the archive:
$ tar xvf <file>
Place the
oc
binary in a directory that is on yourPATH
.To check your
PATH
, execute the following command:$ echo $PATH
Verification
After you install the OpenShift CLI, it is available using the
oc
command:$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc
) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version from the Version drop-down list.
- Click Download Now next to the OpenShift v4.15 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the
oc
binary to a directory that is on yourPATH
.To check your
PATH
, open the command prompt and execute the following command:C:\> path
Verification
After you install the OpenShift CLI, it is available using the
oc
command:C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc
) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version from the Version drop-down list.
Click Download Now next to the OpenShift v4.15 macOS Clients entry and save the file.
Note: For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry.
- Unpack and unzip the archive.
Move the
oc
binary to a directory on your PATH.To check your
PATH
, open a terminal and execute the following command:$ echo $PATH
Verification
Verify your installation by using an
oc
command:$ oc <command>
11.11. Alternatives to storing administrator-level secrets in the kube-system project
By default, administrator secrets are stored in the kube-system
project. If you configured the credentialsMode
parameter in the install-config.yaml
file to Manual
, you must use one of the following alternatives:
- To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials.
- To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an AWS cluster to use short-term credentials.
11.11.1. Manually creating long-term credentials
The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system
namespace.
Procedure
If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual, modify the value as shown:
Sample configuration file snippet
apiVersion: v1
baseDomain: example.com
credentialsMode: Manual
# ...
If you have not previously created installation manifest files, do so by running the following command:
$ openshift-install create manifests --dir <installation_directory>
where <installation_directory> is the directory in which the installation program creates files.
Set a $RELEASE_IMAGE variable with the release image from your installation file by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command:
$ oc adm release extract \
    --from=$RELEASE_IMAGE \
    --credentials-requests \
    --included \ 1
    --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2
    --to=<path_to_directory_for_credentials_requests> 3
- 1
- The
--included
parameter includes only the manifests that your specific cluster configuration requires. - 2
- Specify the location of the
install-config.yaml
file. - 3
- Specify the path to the directory where you want to store the
CredentialsRequest
objects. If the specified directory does not exist, this command creates it.
This command creates a YAML file for each CredentialsRequest object.
Sample CredentialsRequest object
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component_credentials_request>
  namespace: openshift-cloud-credential-operator
  ...
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AWSProviderSpec
    statementEntries:
    - effect: Allow
      action:
      - iam:GetUser
      - iam:GetUserPolicy
      - iam:ListAccessKeys
      resource: "*"
  ...
Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object.
Sample CredentialsRequest object with secrets
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component_credentials_request>
  namespace: openshift-cloud-credential-operator
  ...
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AWSProviderSpec
    statementEntries:
    - effect: Allow
      action:
      - s3:CreateBucket
      - s3:DeleteBucket
      resource: "*"
      ...
  secretRef:
    name: <component_secret>
    namespace: <component_namespace>
  ...
Sample Secret object
apiVersion: v1
kind: Secret
metadata:
  name: <component_secret>
  namespace: <component_namespace>
data:
  aws_access_key_id: <base64_encoded_aws_access_key_id>
  aws_secret_access_key: <base64_encoded_aws_secret_access_key>
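As a small illustrative sketch, the base64-encoded values for the secret data can be produced with commands like the following; the plaintext values are placeholders:
$ echo -n '<aws_access_key_id>' | base64
$ echo -n '<aws_secret_access_key>' | base64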
Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state.
11.11.2. Configuring an AWS cluster to use short-term credentials
To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster.
11.11.2.1. Configuring the Cloud Credential Operator utility
To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility (ccoctl
) binary.
The ccoctl
utility is a Linux binary that must run in a Linux environment.
Prerequisites
- You have access to an OpenShift Container Platform account with cluster administrator access.
- You have installed the OpenShift CLI (oc).
- You have created an AWS account for the ccoctl utility to use with the following permissions:
Example 11.3. Required AWS permissions
Required iam permissions
- iam:CreateOpenIDConnectProvider
- iam:CreateRole
- iam:DeleteOpenIDConnectProvider
- iam:DeleteRole
- iam:DeleteRolePolicy
- iam:GetOpenIDConnectProvider
- iam:GetRole
- iam:GetUser
- iam:ListOpenIDConnectProviders
- iam:ListRolePolicies
- iam:ListRoles
- iam:PutRolePolicy
- iam:TagOpenIDConnectProvider
- iam:TagRole
Required s3 permissions
- s3:CreateBucket
- s3:DeleteBucket
- s3:DeleteObject
- s3:GetBucketAcl
- s3:GetBucketTagging
- s3:GetObject
- s3:GetObjectAcl
- s3:GetObjectTagging
- s3:ListBucket
- s3:PutBucketAcl
- s3:PutBucketPolicy
- s3:PutBucketPublicAccessBlock
- s3:PutBucketTagging
- s3:PutObject
- s3:PutObjectAcl
- s3:PutObjectTagging
Required cloudfront permissions
- cloudfront:ListCloudFrontOriginAccessIdentities
- cloudfront:ListDistributions
- cloudfront:ListTagsForResource
If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions:
Example 11.4. Additional permissions for a private S3 bucket with CloudFront
- cloudfront:CreateCloudFrontOriginAccessIdentity
- cloudfront:CreateDistribution
- cloudfront:DeleteCloudFrontOriginAccessIdentity
- cloudfront:DeleteDistribution
- cloudfront:GetCloudFrontOriginAccessIdentity
- cloudfront:GetCloudFrontOriginAccessIdentityConfig
- cloudfront:GetDistribution
- cloudfront:TagResource
- cloudfront:UpdateDistribution
Note: These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command.
Procedure
Set a variable for the OpenShift Container Platform release image by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
Obtain the CCO container image from the OpenShift Container Platform release image by running the following command:
$ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret)
NoteEnsure that the architecture of the
$RELEASE_IMAGE
matches the architecture of the environment in which you will use theccoctl
tool.Extract the
ccoctl
binary from the CCO container image within the OpenShift Container Platform release image by running the following command:$ oc image extract $CCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret
Change the permissions to make
ccoctl
executable by running the following command:$ chmod 775 ccoctl
Verification
To verify that
ccoctl
is ready to use, display the help file. Use a relative file name when you run the command, for example:$ ./ccoctl.rhel9
Example output
OpenShift credentials provisioning tool

Usage:
  ccoctl [command]

Available Commands:
  alibabacloud Manage credentials objects for alibaba cloud
  aws          Manage credentials objects for AWS cloud
  azure        Manage credentials objects for Azure
  gcp          Manage credentials objects for Google cloud
  help         Help about any command
  ibmcloud     Manage credentials objects for IBM Cloud
  nutanix      Manage credentials objects for Nutanix

Flags:
  -h, --help   help for ccoctl

Use "ccoctl [command] --help" for more information about a command.
11.11.2.2. Creating AWS resources with the Cloud Credential Operator utility
You have the following options when creating AWS resources:
-
You can use the
ccoctl aws create-all
command to create the AWS resources automatically. This is the quickest way to create the resources. See Creating AWS resources with a single command. -
If you need to review the JSON files that the
ccoctl
tool creates before modifying AWS resources, or if the process theccoctl
tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See Creating AWS resources individually.
11.11.2.2.1. Creating AWS resources with a single command
If the process the ccoctl
tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all
command to automate the creation of AWS resources.
Otherwise, you can create the AWS resources individually. For more information, see "Creating AWS resources individually".
By default, ccoctl
creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir
flag. This procedure uses <path_to_ccoctl_output_dir>
to refer to this directory.
Prerequisites
You must have:
-
Extracted and prepared the
ccoctl
binary.
Procedure
Set a
$RELEASE_IMAGE
variable with the release image from your installation file by running the following command:$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
Extract the list of
CredentialsRequest
objects from the OpenShift Container Platform release image by running the following command:$ oc adm release extract \ --from=$RELEASE_IMAGE \ --credentials-requests \ --included \1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \2 --to=<path_to_directory_for_credentials_requests> 3
- 1
- The
--included
parameter includes only the manifests that your specific cluster configuration requires. - 2
- Specify the location of the
install-config.yaml
file. - 3
- Specify the path to the directory where you want to store the
CredentialsRequest
objects. If the specified directory does not exist, this command creates it.
NoteThis command might take a few moments to run.
Use the
ccoctl
tool to process allCredentialsRequest
objects by running the following command:$ ccoctl aws create-all \ --name=<name> \1 --region=<aws_region> \2 --credentials-requests-dir=<path_to_credentials_requests_directory> \3 --output-dir=<path_to_ccoctl_output_dir> \4 --create-private-s3-bucket 5
- 1
- Specify the name used to tag any cloud resources that are created for tracking.
- 2
- Specify the AWS region in which cloud resources will be created.
- 3
- Specify the directory containing the files for the component
CredentialsRequest
objects. - 4
- Optional: Specify the directory in which you want the
ccoctl
utility to create objects. By default, the utility creates objects in the directory in which the commands are run. - 5
- Optional: By default, the
ccoctl
utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the--create-private-s3-bucket
parameter.
NoteIf your cluster uses Technology Preview features that are enabled by the
TechPreviewNoUpgrade
feature set, you must include the--enable-tech-preview
parameter.
Verification
To verify that the OpenShift Container Platform secrets are created, list the files in the
<path_to_ccoctl_output_dir>/manifests
directory:$ ls <path_to_ccoctl_output_dir>/manifests
Example output
cluster-authentication-02-config.yaml
openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml
openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml
openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml
openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml
openshift-image-registry-installer-cloud-credentials-credentials.yaml
openshift-ingress-operator-cloud-credentials-credentials.yaml
openshift-machine-api-aws-cloud-credentials-credentials.yaml
You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles.
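For example, a hedged query that lists the IAM roles whose names contain the <name> prefix that you passed to ccoctl:
$ aws iam list-roles --query 'Roles[?contains(RoleName, `<name>`)].RoleName' --output text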
11.11.2.2.2. Creating AWS resources individually
You can use the ccoctl
tool to create AWS resources individually. This option might be useful for an organization that shares the responsibility for creating these resources among different users or departments.
Otherwise, you can use the ccoctl aws create-all
command to create the AWS resources automatically. For more information, see "Creating AWS resources with a single command".
By default, ccoctl
creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir
flag. This procedure uses <path_to_ccoctl_output_dir>
to refer to this directory.
Some ccoctl
commands make AWS API calls to create or modify AWS resources. You can use the --dry-run
flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json
parameters.
Prerequisites
-
Extract and prepare the
ccoctl
binary.
Procedure
Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster by running the following command:
$ ccoctl aws create-key-pair
Example output
2021/04/13 11:01:02 Generating RSA keypair
2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private
2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public
2021/04/13 11:01:03 Copying signing key for use by installer
where
serviceaccount-signer.private
andserviceaccount-signer.public
are the generated key files.This command also creates a private key that the cluster requires during installation in
/<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key
.Create an OpenID Connect identity provider and S3 bucket on AWS by running the following command:
$ ccoctl aws create-identity-provider \
    --name=<name> \ 1
    --region=<aws_region> \ 2
    --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3
Example output
2021/04/13 11:16:09 Bucket <name>-oidc created
2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated
2021/04/13 11:16:10 Reading public key
2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated
2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com
where openid-configuration is a discovery document and keys.json is a JSON web key set file. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml. This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens.
Create IAM roles for each component in the cluster:
Set a $RELEASE_IMAGE variable with the release image from your installation file by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
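Optionally, confirm that the variable is set before you extract the CredentialsRequest objects. The digest shown here is illustrative:
$ echo $RELEASE_IMAGE
Example output
quay.io/openshift-release-dev/ocp-release@sha256:<digest>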
Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image:
$ oc adm release extract \
  --from=$RELEASE_IMAGE \
  --credentials-requests \
  --included \ 1
  --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2
  --to=<path_to_directory_for_credentials_requests> 3
- 1
- The --included parameter includes only the manifests that your specific cluster configuration requires.
- 2
- Specify the location of the install-config.yaml file.
- 3
- Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it.
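Optionally, confirm that the extraction produced a CredentialsRequest manifest for each component that needs cloud credentials. The exact file names vary by release:
$ ls <path_to_directory_for_credentials_requests>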
Use the ccoctl tool to process all CredentialsRequest objects by running the following command:
$ ccoctl aws create-iam-roles \
  --name=<name> \
  --region=<aws_region> \
  --credentials-requests-dir=<path_to_credentials_requests_directory> \
  --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com
Note: For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter. If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter.
For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image.
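If you want to inspect the trust policy that ccoctl attached to one of the roles, you can query AWS as in the following sketch. The role name shown is hypothetical; the actual names are derived from the --name value and the individual CredentialsRequest objects:
$ aws iam get-role \
  --role-name <name>-openshift-machine-api-aws-cloud-credentials \
  --query 'Role.AssumeRolePolicyDocument'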
Verification
To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory:
$ ls <path_to_ccoctl_output_dir>/manifests
Example output
cluster-authentication-02-config.yaml
openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml
openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml
openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml
openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml
openshift-image-registry-installer-cloud-credentials-credentials.yaml
openshift-ingress-operator-cloud-credentials-credentials.yaml
openshift-machine-api-aws-cloud-credentials-credentials.yaml
You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles.
11.11.2.3. Incorporating the Cloud Credential Operator utility manifests
To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility (ccoctl
) created to the correct directories for the installation program.
Prerequisites
- You have configured an account with the cloud platform that hosts your cluster.
-
You have configured the Cloud Credential Operator utility (
ccoctl
). -
You have created the cloud provider resources that are required for your cluster with the
ccoctl
utility.
Procedure
If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual, modify the value as shown:
Sample configuration file snippet
apiVersion: v1
baseDomain: example.com
credentialsMode: Manual
# ...
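If you prefer to update the file from the command line instead of an editor, a minimal sketch that assumes the yq tool (version 4 syntax) is installed is:
$ yq -i '.credentialsMode = "Manual"' <path_to_directory_with_installation_configuration>/install-config.yaml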
If you have not previously created installation manifest files, do so by running the following command:
$ openshift-install create manifests --dir <installation_directory>
where <installation_directory> is the directory in which the installation program creates files.
Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command:
$ cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/
Copy the tls directory that contains the private key to the installation directory:
$ cp -a /<path_to_ccoctl_output_dir>/tls .
11.12. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster
command of the installation program only once, during initial installation.
Prerequisites
- You have configured an account with the cloud platform that hosts your cluster.
- You have the OpenShift Container Platform installation program and the pull secret for your cluster.
- You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ 1
  --log-level=info 2
- 1
- For <installation_directory>, specify the location of your customized ./install-config.yaml file.
- 2
- To view different installation details, specify warn, debug, or error instead of info.
Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.
Note: The elevated permissions provided by the AdministratorAccess policy are required only during installation.
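If the AdministratorAccess managed policy was attached directly to an IAM user, the following sketch detaches it with the AWS CLI, assuming the AWS China partition (arn:aws-cn). Adjust the user name, or use detach-group-policy or detach-role-policy if the policy was attached to a group or role:
$ aws iam detach-user-policy \
  --user-name <iam_user_name> \
  --policy-arn arn:aws-cn:iam::aws:policy/AdministratorAccess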
Verification
When the cluster deployment completes successfully:
-
The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the
kubeadmin
user. -
Credential information also outputs to
<installation_directory>/.openshift_install.log
.
Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s
-
The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending
node-bootstrapper
certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. - It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
11.13. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig
file. The kubeconfig
file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
-
You installed the
oc
CLI.
Procedure
Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
- 1
- For <installation_directory>, specify the path to the directory that you stored the installation files in.
Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
Example output
system:admin
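For a broader check that the exported configuration can reach the control plane, you can also list the cluster Operators; after a successful installation, all of them should eventually report Available as True:
$ oc get clusteroperators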
11.14. Logging in to the cluster by using the web console
The kubeadmin
user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin
user by using the OpenShift Container Platform web console.
Prerequisites
- You have access to the installation host.
- You completed a cluster installation and all cluster Operators are available.
Procedure
Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:
$ cat <installation_directory>/auth/kubeadmin-password
Note: Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.
List the OpenShift Container Platform web console route:
$ oc get routes -n openshift-console | grep 'console-openshift'
Note: Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.
Example output
console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None
-
Navigate to the route detailed in the output of the preceding command in a web browser and log in as the
kubeadmin
user.
11.15. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
Additional resources
- See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.
- See About remote health monitoring for more information about the Telemetry service.
11.16. Next steps
- Validating an installation.
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.
- If necessary, you can remove cloud provider credentials.