Chapter 1. Installing on AWS
1.1. Configuring an AWS account
Before you can install OpenShift Container Platform, you must configure an Amazon Web Services (AWS) account.
1.1.1. Configuring Route53
To install OpenShift Container Platform, the Amazon Web Services (AWS) account you use must have a dedicated public hosted zone in your Route53 service. This zone must be authoritative for the domain. The Route53 service provides cluster DNS resolution and name lookup for external connections to the cluster.
Procedure
- Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through AWS or another source.

Note: If you purchase a new domain through AWS, it takes time for the relevant DNS changes to propagate. For more information about purchasing domains through AWS, see Registering Domain Names Using Amazon Route 53 in the AWS documentation.

- If you are using an existing domain and registrar, migrate its DNS to AWS. See Making Amazon Route 53 the DNS Service for an Existing Domain in the AWS documentation.
- Create a public hosted zone for your domain or subdomain. See Creating a Public Hosted Zone in the AWS documentation. Use an appropriate root domain, such as openshiftcorp.com, or subdomain, such as clusters.openshiftcorp.com.
- Extract the new authoritative name servers from the hosted zone records. See Getting the Name Servers for a Public Hosted Zone in the AWS documentation.
- Update the registrar records for the AWS Route53 name servers that your domain uses. For example, if you registered your domain to a Route53 service in a different account, see the following topic in the AWS documentation: Adding or Changing Name Servers or Glue Records.
- If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain.
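If you prefer to script the hosted zone step, the AWS CLI can create the zone and list its authoritative name servers. This is only a sketch; the domain name is a placeholder, and the caller reference simply needs to be a unique string:

$ aws route53 create-hosted-zone \
    --name clusters.openshiftcorp.com \
    --caller-reference "$(date +%s)"
$ aws route53 get-hosted-zone --id <hosted_zone_id> \
    --query 'DelegationSet.NameServers'

The second command returns the name servers that you then register with your registrar or delegate from the parent domain.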
1.1.2. AWS account limits
The OpenShift Container Platform cluster uses a number of Amazon Web Services (AWS) components, and the default Service Limits affect your ability to install OpenShift Container Platform clusters. If you use certain cluster configurations, deploy your cluster in certain AWS regions, or run multiple clusters from your account, you might need to request additional resources for your AWS account.
The following table summarizes the AWS components whose limits can impact your ability to install and run OpenShift Container Platform clusters.
Component | Number of clusters available by default | Default AWS limit | Description |
---|---|---|---|
Instance Limits | Varies | Varies | By default, each cluster creates the following instances:
These instance type counts are within a new account’s default limit. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, review your account limits to ensure that your cluster can deploy the machines that you need.
In most regions, the bootstrap and worker machines use an |
Elastic IPs (EIPs) | 0 to 1 | 5 EIPs per account | To provision the cluster in a highly available configuration, the installation program creates a public and private subnet for each availability zone within a region. Each private subnet requires a NAT Gateway, and each NAT gateway requires a separate elastic IP. Review the AWS region map to determine how many availability zones are in each region. To take advantage of the default high availability, install the cluster in a region with at least three availability zones. To install a cluster in a region with more than five availability zones, you must increase the EIP limit. Important
To use the |
Virtual Private Clouds (VPCs) | 5 | 5 VPCs per region | Each cluster creates its own VPC. |
Elastic Load Balancing (ELB/NLB) | 3 | 20 per region | By default, each cluster creates internal and external network load balancers for the master API server and a single classic Elastic Load Balancer for the router. Deploying more Kubernetes LoadBalancer Service objects will create additional load balancers. |
NAT Gateways | 5 | 5 per availability zone | The cluster deploys one NAT gateway in each availability zone. |
Elastic Network Interfaces (ENIs) | At least 12 | 350 per region | The default installation creates 21 ENIs and an ENI for each availability zone in your region. Additional ENIs are created for additional machines and elastic load balancers that are created by cluster usage and deployed workloads. |
VPC Gateway | 20 | 20 per account | Your AWS account uses VPC Gateways for S3 access. Each cluster creates a single VPC Gateway for S3 access. |
S3 buckets | 99 | 100 buckets per account | Because the installation process creates a temporary bucket and the registry component in each cluster creates a bucket, you can create only 99 OpenShift Container Platform clusters per AWS account. |
Security Groups | 250 | 2,500 per account | Each cluster creates 10 distinct security groups. |
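You can query some of these account-level values with the AWS CLI before you install. The following is a rough sketch rather than a complete audit: the attribute names shown are the standard EC2 account attributes, and actual limit increases are still requested through AWS Support.

$ aws ec2 describe-account-attributes \
    --attribute-names max-instances vpc-max-elastic-ips \
    --region us-east-1
$ aws ec2 describe-availability-zones --region us-east-1 \
    --query 'AvailabilityZones[].ZoneName'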
1.1.3. Required AWS permissions
When you attach the AdministratorAccess policy to the IAM user that you create, you grant that user all of the required permissions. To deploy an OpenShift Container Platform cluster, the IAM user requires the following permissions:
Required EC2 permissions for installation
- ec2:AllocateAddress
- ec2:AssociateAddress
- ec2:AssociateDhcpOptions
- ec2:AssociateRouteTable
- ec2:AttachInternetGateway
- ec2:AuthorizeSecurityGroupEgress
- ec2:AuthorizeSecurityGroupIngress
- ec2:CopyImage
- ec2:CreateDhcpOptions
- ec2:CreateInternetGateway
- ec2:CreateNatGateway
- ec2:CreateRoute
- ec2:CreateRouteTable
- ec2:CreateSecurityGroup
- ec2:CreateSubnet
- ec2:CreateTags
- ec2:CreateVpc
- ec2:CreateVpcEndpoint
- ec2:CreateVolume
- ec2:DescribeAccountAttributes
- ec2:DescribeAddresses
- ec2:DescribeAvailabilityZones
- ec2:DescribeDhcpOptions
- ec2:DescribeImages
- ec2:DescribeInstanceAttribute
- ec2:DescribeInstanceCreditSpecifications
- ec2:DescribeInstances
- ec2:DescribeInternetGateways
- ec2:DescribeKeyPairs
- ec2:DescribeNatGateways
- ec2:DescribeNetworkAcls
- ec2:DescribePrefixLists
- ec2:DescribeRegions
- ec2:DescribeRouteTables
- ec2:DescribeSecurityGroups
- ec2:DescribeSubnets
- ec2:DescribeTags
- ec2:DescribeVpcEndpoints
- ec2:DescribeVpcs
- ec2:DescribeVpcAttribute
- ec2:DescribeVolumes
- ec2:DescribeVpcClassicLink
- ec2:DescribeVpcClassicLinkDnsSupport
- ec2:ModifyInstanceAttribute
- ec2:ModifySubnetAttribute
- ec2:ModifyVpcAttribute
- ec2:RevokeSecurityGroupEgress
- ec2:RunInstances
- ec2:TerminateInstances
- ec2:RevokeSecurityGroupIngress
- ec2:ReplaceRouteTableAssociation
- ec2:DescribeNetworkInterfaces
- ec2:ModifyNetworkInterfaceAttribute
Required Elasticloadbalancing permissions for installation
- elasticloadbalancing:AddTags
- elasticloadbalancing:ApplySecurityGroupsToLoadBalancer
- elasticloadbalancing:AttachLoadBalancerToSubnets
- elasticloadbalancing:CreateListener
- elasticloadbalancing:CreateLoadBalancer
- elasticloadbalancing:CreateLoadBalancerListeners
- elasticloadbalancing:CreateTargetGroup
- elasticloadbalancing:ConfigureHealthCheck
- elasticloadbalancing:DeregisterInstancesFromLoadBalancer
- elasticloadbalancing:DeregisterTargets
- elasticloadbalancing:DescribeInstanceHealth
- elasticloadbalancing:DescribeListeners
- elasticloadbalancing:DescribeLoadBalancers
- elasticloadbalancing:DescribeLoadBalancerAttributes
- elasticloadbalancing:DescribeTags
- elasticloadbalancing:DescribeTargetGroupAttributes
- elasticloadbalancing:DescribeTargetHealth
- elasticloadbalancing:ModifyLoadBalancerAttributes
- elasticloadbalancing:ModifyTargetGroup
- elasticloadbalancing:ModifyTargetGroupAttributes
- elasticloadbalancing:RegisterTargets
- elasticloadbalancing:RegisterInstancesWithLoadBalancer
- elasticloadbalancing:SetLoadBalancerPoliciesOfListener
Required IAM permissions for installation
- iam:AddRoleToInstanceProfile
- iam:CreateInstanceProfile
- iam:CreateRole
- iam:DeleteInstanceProfile
- iam:DeleteRole
- iam:DeleteRolePolicy
- iam:GetInstanceProfile
- iam:GetRole
- iam:GetRolePolicy
- iam:GetUser
- iam:ListInstanceProfilesForRole
- iam:ListRoles
- iam:ListUsers
- iam:PassRole
- iam:PutRolePolicy
- iam:RemoveRoleFromInstanceProfile
- iam:SimulatePrincipalPolicy
- iam:TagRole
Required Route53 permissions for installation
- route53:ChangeResourceRecordSets
- route53:ChangeTagsForResource
- route53:GetChange
- route53:GetHostedZone
- route53:CreateHostedZone
- route53:ListHostedZones
- route53:ListHostedZonesByName
- route53:ListResourceRecordSets
- route53:ListTagsForResource
- route53:UpdateHostedZoneComment
Required S3 permissions for installation
- s3:CreateBucket
- s3:DeleteBucket
- s3:GetAccelerateConfiguration
- s3:GetBucketCors
- s3:GetBucketLocation
- s3:GetBucketLogging
- s3:GetBucketObjectLockConfiguration
- s3:GetBucketReplication
- s3:GetBucketRequestPayment
- s3:GetBucketTagging
- s3:GetBucketVersioning
- s3:GetBucketWebsite
- s3:GetEncryptionConfiguration
- s3:GetLifecycleConfiguration
- s3:GetReplicationConfiguration
- s3:ListBucket
- s3:PutBucketAcl
- s3:PutBucketTagging
- s3:PutEncryptionConfiguration
S3 permissions that cluster Operators require
- s3:PutObject
- s3:PutObjectAcl
- s3:PutObjectTagging
- s3:GetObject
- s3:GetObjectAcl
- s3:GetObjectTagging
- s3:GetObjectVersion
- s3:DeleteObject
All additional permissions that are required to uninstall a cluster
- autoscaling:DescribeAutoScalingGroups
- ec2:DeleteDhcpOptions
- ec2:DeleteInternetGateway
- ec2:DeleteNatGateway
- ec2:DeleteNetworkInterface
- ec2:DeleteRoute
- ec2:DeleteRouteTable
- ec2:DeleteSnapshot
- ec2:DeleteSecurityGroup
- ec2:DeleteSubnet
- ec2:DeleteVolume
- ec2:DeleteVpc
- ec2:DeleteVpcEndpoints
- ec2:DeregisterImage
- ec2:DetachInternetGateway
- ec2:DisassociateRouteTable
- ec2:ReleaseAddress
- elasticloadbalancing:DescribeTargetGroups
- elasticloadbalancing:DeleteTargetGroup
- elasticloadbalancing:DeleteLoadBalancer
- iam:ListInstanceProfiles
- iam:ListRolePolicies
- iam:ListUserPolicies
- route53:DeleteHostedZone
- tag:GetResources
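Because iam:SimulatePrincipalPolicy appears in the lists above, you can spot-check whether a candidate user is missing any of the required actions before you start an installation. The following is a hedged sketch with a placeholder user ARN and only a small sample of the actions:

$ aws iam simulate-principal-policy \
    --policy-source-arn arn:aws:iam::<account_id>:user/<user_name> \
    --action-names ec2:RunInstances iam:PassRole route53:ChangeResourceRecordSets s3:CreateBucket \
    --query 'EvaluationResults[].[EvalActionName,EvalDecision]' \
    --output table

Any action that evaluates to implicitDeny or explicitDeny must be granted before the installation can succeed.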
1.1.4. Creating an IAM user
Each Amazon Web Services (AWS) account contains a root user account that is based on the email address you used to create the account. This is a highly privileged account, and it is recommended that you use it only for initial account and billing configuration, creating an initial set of users, and securing the account.
Before you install OpenShift Container Platform, create a secondary IAM administrative user. As you complete the Creating an IAM User in Your AWS Account procedure in the AWS documentation, set the following options:
Procedure
- Specify the IAM user name and select Programmatic access.
- Attach the AdministratorAccess policy to ensure that the account has sufficient permission to create the cluster. This policy provides the cluster with the ability to grant credentials to each OpenShift Container Platform component. The cluster grants the components only the credentials that they require.

Note: While it is possible to create a policy that grants all of the required AWS permissions and attach it to the user, this is not the preferred option. The cluster will not have the ability to grant additional credentials to individual components, so the same credentials are used by all components.

- Optional: Add metadata to the user by attaching tags.
- Confirm that the user name that you specified is granted the AdministratorAccess policy.
- Record the access key ID and secret access key values. You must use these values when you configure your local machine to run the installation program.

Important: You cannot use a temporary session token that you generated while using a multi-factor authentication device to authenticate to AWS when you deploy a cluster. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials.
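If you prefer to create this user from the command line instead of the console, the equivalent steps look roughly like the following. The user name is only an example, and you should record the returned access key ID and secret access key securely:

$ aws iam create-user --user-name ocp-installer
$ aws iam attach-user-policy --user-name ocp-installer \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
$ aws iam create-access-key --user-name ocp-installer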
1.1.5. Supported AWS regions
You can deploy an OpenShift Container Platform cluster to the following regions:
- ap-northeast-1 (Tokyo)
- ap-northeast-2 (Seoul)
- ap-south-1 (Mumbai)
- ap-southeast-1 (Singapore)
- ap-southeast-2 (Sydney)
- ca-central-1 (Central)
- eu-central-1 (Frankfurt)
- eu-west-1 (Ireland)
- eu-west-2 (London)
- eu-west-3 (Paris)
- sa-east-1 (São Paulo)
- us-east-1 (N. Virginia)
- us-east-2 (Ohio)
- us-west-1 (N. California)
- us-west-2 (Oregon)
Next steps
- Install an OpenShift Container Platform cluster. You can install a customized cluster or quickly install a cluster with default options.
1.2. Installing a cluster quickly on AWS
In OpenShift Container Platform version 4.1, you can install a cluster on Amazon Web Services (AWS) that uses the default configuration options.
Prerequisites
- Review details about the OpenShift Container Platform installation and update processes.
Configure an AWS account to host the cluster.
Important: If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.
- If you use a firewall, you must configure it to access Red Hat Insights.
1.2.1. Internet and Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.1, Telemetry is the component that provides metrics about cluster health and the success of updates. To perform subscription management, including legally entitling your purchase from Red Hat, you must use the Telemetry service and access the Red Hat OpenShift Cluster Manager page.
Because there is no disconnected subscription management, you cannot both opt out of sending data back to Red Hat and entitle your purchase. Support for disconnected subscription management might be added in future releases of OpenShift Container Platform.
Your machines must have direct internet access to install the cluster.
You must have internet access to:
- Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site to download the installation program
- Access Quay.io to obtain the packages that are required to install your cluster
- Obtain the packages that are required to perform cluster updates
- Access Red Hat’s software as a service page to perform subscription management
1.2.2. Generating an SSH private key and adding it to the agent
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, you must provide an SSH key that your ssh-agent
process uses to the installer.
You can use this key to SSH into the master nodes as the user core
. When you deploy the cluster, the key is added to the core
user’s ~/.ssh/authorized_keys
list.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
- If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

$ ssh-keygen -t rsa -b 4096 -N '' \
    -f <path>/<file_name> 1

- 1: Specify the path and file name, such as ~/.ssh/id_rsa, of the SSH key.

Running this command generates an SSH key that does not require a password in the location that you specified.

- Start the ssh-agent process as a background task:

$ eval "$(ssh-agent -s)"
Agent pid 31874

- Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name> 1
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

- 1: Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.
Next steps
When you install OpenShift Container Platform, provide the SSH public key to the installer. If you install a cluster on infrastructure that you provision, you must provide this key to your cluster’s machines.
1.2.3. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
- You must install the cluster from a computer that uses Linux or macOS.
- You need 300 MB of local disk space to download the installation program.
Procedure
- Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

Important: The installation program creates several files on the computer that you use to install your cluster. You must keep both the installation program and the files that the installation program creates after you finish installing the cluster.

- Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

$ tar xvf <installation_program>.tar.gz

- From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file or copy it to your clipboard. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
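After you extract the archive, you can confirm that the binary runs and note its exact version before continuing, for example:

$ ./openshift-install version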
1.2.4. Deploy the cluster
You can install OpenShift Container Platform on a compatible cloud.
You can run the installation program only once, during initial installation.
Prerequisites
- Configure an account with the cloud platform that hosts your cluster.
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
- Run the installation program:

$ ./openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level info 2

Important: Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

- Provide values at the prompts:
- Optional: Select an SSH key to use to access your cluster machines.

Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, you must provide an SSH key that your ssh-agent process uses to the installation program.

- Select AWS as the platform to target.
- If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.
- Select the AWS region to deploy the cluster to.
- Select the base domain for the Route53 service that you configured for your cluster.
- Enter a descriptive name for your cluster.
- Paste the pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site.

Note: If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

Important: The Ignition config files that the installation program generates contain certificates that expire after 24 hours. You must keep the cluster running for 24 hours in a non-degraded state to ensure that the first certificate rotation has finished.

Important: You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

- Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.
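If your terminal session is interrupted before the completion message appears, you can usually ask the installation program to wait for the cluster again and reprint the web console URL and kubeadmin credentials. This sketch assumes your installer build provides the wait-for subcommand and reuses the same installation directory:

$ ./openshift-install wait-for install-complete --dir=<installation_directory>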
1.2.5. Installing the OpenShift Command-line Interface
You can download and install the OpenShift Command-line Interface (CLI), commonly known as oc
.
If you installed an earlier version of oc
, you cannot use it to complete all of the commands in OpenShift Container Platform 4.1. You must download and install the new version of oc
.
Procedure
- From the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site, navigate to the page for your installation type and click Download Command-line Tools.
- From the site that is displayed, download the compressed file for your operating system.

Note: You can install oc on Linux, Windows, or macOS.

- Extract the compressed file and place it in a directory that is on your PATH.
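For example, on a Linux machine you might extract the archive, move the binary onto your PATH, and verify the client version; the archive name here is a placeholder for the file that you downloaded:

$ tar xvf openshift-client-linux-<version>.tar.gz
$ sudo mv oc /usr/local/bin/
$ oc version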
1.2.6. Logging in to the cluster
You can log in to your cluster as a default system user by exporting the cluster kubeconfig
file. The kubeconfig
file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- Deploy an OpenShift Container Platform cluster.
- Install the oc CLI.

Procedure

- Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
$ oc whoami
system:admin

- 1: For <installation_directory>, specify the path to the directory that you stored the installation files in.
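As a quick sanity check after the export, you can list the cluster nodes and Operators; a default AWS deployment reports three master and three worker nodes in the Ready state:

$ oc get nodes
$ oc get clusteroperators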
Next steps
- Customize your cluster.
- If necessary, you can opt out of telemetry.
1.3. Installing a cluster on AWS with customizations
In OpenShift Container Platform version 4.1, you can install a customized cluster on infrastructure that the installation program provisions on Amazon Web Services (AWS). To customize the installation, you modify some parameters in the install-config.yaml
file before you install the cluster.
Prerequisites
- Review details about the OpenShift Container Platform installation and update processes.
Configure an AWS account to host the cluster.
Important: If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.
- If you use a firewall, you must configure it to access Red Hat Insights.
1.3.1. Internet and Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.1, Telemetry is the component that provides metrics about cluster health and the success of updates. To perform subscription management, including legally entitling your purchase from Red Hat, you must use the Telemetry service and access the Red Hat OpenShift Cluster Manager page.
Because there is no disconnected subscription management, you cannot both opt out of sending data back to Red Hat and entitle your purchase. Support for disconnected subscription management might be added in future releases of OpenShift Container Platform.
Your machines must have direct internet access to install the cluster.
You must have internet access to:
- Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site to download the installation program
- Access Quay.io to obtain the packages that are required to install your cluster
- Obtain the packages that are required to perform cluster updates
- Access Red Hat’s software as a service page to perform subscription management
1.3.2. Generating an SSH private key and adding it to the agent
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, you must provide an SSH key that your ssh-agent
process uses to the installer.
You can use this key to SSH into the master nodes as the user core
. When you deploy the cluster, the key is added to the core
user’s ~/.ssh/authorized_keys
list.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
- If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

$ ssh-keygen -t rsa -b 4096 -N '' \
    -f <path>/<file_name> 1

- 1: Specify the path and file name, such as ~/.ssh/id_rsa, of the SSH key.

Running this command generates an SSH key that does not require a password in the location that you specified.

- Start the ssh-agent process as a background task:

$ eval "$(ssh-agent -s)"
Agent pid 31874

- Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name> 1
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

- 1: Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.
Next steps
When you install OpenShift Container Platform, provide the SSH public key to the installer. If you install a cluster on infrastructure that you provision, you must provide this key to your cluster’s machines.
1.3.3. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
- You must install the cluster from a computer that uses Linux or macOS.
- You need 300 MB of local disk space to download the installation program.
Procedure
- Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

Important: The installation program creates several files on the computer that you use to install your cluster. You must keep both the installation program and the files that the installation program creates after you finish installing the cluster.

- Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

$ tar xvf <installation_program>.tar.gz

- From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file or copy it to your clipboard. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
1.3.4. Creating the installation configuration file
You can customize your installation of OpenShift Container Platform on a compatible cloud.
Prerequisites
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
- Create the install-config.yaml file. Run the following command:

$ ./openshift-install create install-config --dir=<installation_directory> 1

- 1: For <installation_directory>, specify the directory name to store the files that the installation program creates.

Important: Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

- At the prompts, provide the configuration details for your cloud:
- Optional: Select an SSH key to use to access your cluster machines.

Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, you must provide an SSH key that your ssh-agent process uses to the installation program.

- Select AWS as the platform to target.
- If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.
- Select the AWS region to deploy the cluster to.
- Select the base domain for the Route53 service that you configured for your cluster.
- Enter a descriptive name for your cluster.
- Paste the pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site.
- Modify the install-config.yaml file. You can find more information about the available parameters in the Installation configuration parameters section and in the Go documentation.
- Back up the install-config.yaml file so that you can use it to install multiple clusters.

Important: The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
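Because the file is consumed by the next step, a simple safeguard is to copy it somewhere outside the installation directory immediately after you finish editing it, for example:

$ cp <installation_directory>/install-config.yaml ~/install-config-aws.yaml.bak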
1.3.4.1. Installation configuration parameters
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your Amazon Web Services (AWS) account and optionally customize your cluster’s platform. When you create the install-config.yaml
installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml
file to provide more details about the platform.
You cannot modify these parameters after installation.
Parameter | Description | Values |
---|---|---|
baseDomain | The base domain of your cloud provider. This value is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values. | A fully-qualified domain or subdomain name. |
controlPlane.platform | The cloud provider to host the control plane machines. This parameter value must match the compute.platform parameter value. | aws |
compute.platform | The cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. | aws |
metadata.name | The name of your cluster. | A string that contains uppercase or lowercase letters. |
platform.aws.region | The region to deploy your cluster in. | A valid AWS region. |
pullSecret | The pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site. You use this pull secret to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. | { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"you@example.com" }, "quay.io":{ "auth":"b3Blb=", "email":"you@example.com" } } } |
Parameter | Description | Values |
---|---|---|
| The SSH key to use to access your cluster machines. Note
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, you must provide an SSH key that your |
A valid, local public SSH key that you added to the |
|
Whether to enable or disable simultaneous multithreading, or Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. |
|
| The Input/Output Operations Per Second (IOPS) that is reserved for the root volume. |
Integer, for example |
| The size in GiB of the root volume. |
Integer, for example |
| The instance type of the root volume. |
Valid AWS EBS instance type, such as |
| The EC2 instance type for the compute machines. |
Valid AWS instance type, such as |
| The availability zones where the installation program creates machines for the compute MachinePool. |
A list of valid AWS availability zones, such as |
| The AWS region that the installation program creates compute resources in. |
Valid AWS region, such as |
| The number of compute machines, which are also known as worker machines, to provision. |
A positive integer greater than or equal to |
|
Whether to enable or disable simultaneous multithreading, or Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. |
|
| The EC2 instance type for the control plane machines. |
Valid AWS instance type, such as |
| The availability zones where the installation program creates machines for the control plane MachinePool. |
A list of valid AWS availability zones, such as |
| The AWS region that the installation program creates control plane resources in. |
Valid AWS region, such as |
| The number of control plane machines to provision. |
A positive integer greater than or equal to |
| A map of keys and values that the installation program adds as tags to all resources that it creates. |
Any valid YAML map, such as key value pairs in the |
1.3.4.2. Sample customized install-config.yaml file for AWS
You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1
baseDomain: example.com 1
controlPlane: 2
  hyperthreading: Enabled 3 4
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
      - us-west-2b
      rootVolume:
        iops: 4000
        size: 500
        type: io1
      type: m5.xlarge 5
  replicas: 3
compute: 6
- hyperthreading: Enabled 7
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 8
      type: c5.4xlarge
      zones:
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster 9
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineCIDR: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 10
    userTags:
      adminContact: jdoe
      costCenter: 7536
pullSecret: '{"auths": ...}' 11
sshKey: ssh-ed25519 AAAA... 12
- 1 9 10 11: Required. The installation program prompts you for this value.
- 2 6: If you do not provide these parameters and values, the installation program provides the default value.
- 3 7: The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used.
- 4 5: Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.

Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.

- 8: To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.
- 12: You can optionally provide the sshKey value that you use to access the machines in your cluster.

Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, you must provide an SSH key that your ssh-agent process uses to the installation program.
1.3.5. Deploy the cluster
You can install OpenShift Container Platform on a compatible cloud.
You can run the installation program only once, during initial installation.
Prerequisites
- Configure an account with the cloud platform that hosts your cluster.
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
- Run the installation program:

$ ./openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level info 2

- 1: For <installation_directory>, specify the location of your customized ./install-config.yaml file.
- 2: For <installation_directory>, specify the directory name to store the files that the installation program creates. To view different installation details, specify warn, debug, or error instead of info.

Important: Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

- Provide values at the prompts:
- Optional: Select an SSH key to use to access your cluster machines.

Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, you must provide an SSH key that your ssh-agent process uses to the installation program.

- Select AWS as the platform to target.
- If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.
- Select the AWS region to deploy the cluster to.
- Select the base domain for the Route53 service that you configured for your cluster.
- Enter a descriptive name for your cluster.
- Paste the pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site.

Note: If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

Important: The Ignition config files that the installation program generates contain certificates that expire after 24 hours. You must keep the cluster running for 24 hours in a non-degraded state to ensure that the first certificate rotation has finished.

Important: You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

- Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.
1.3.6. Installing the OpenShift Command-line Interface
You can download and install the OpenShift Command-line Interface (CLI), commonly known as oc
.
If you installed an earlier version of oc
, you cannot use it to complete all of the commands in OpenShift Container Platform 4.1. You must download and install the new version of oc
.
Procedure
- From the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site, navigate to the page for your installation type and click Download Command-line Tools.
- From the site that is displayed, download the compressed file for your operating system.

Note: You can install oc on Linux, Windows, or macOS.

- Extract the compressed file and place it in a directory that is on your PATH.
1.3.7. Logging in to the cluster
You can log in to your cluster as a default system user by exporting the cluster kubeconfig
file. The kubeconfig
file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- Deploy an OpenShift Container Platform cluster.
- Install the oc CLI.

Procedure

- Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
$ oc whoami
system:admin

- 1: For <installation_directory>, specify the path to the directory that you stored the installation files in.
Next steps
- Customize your cluster.
- If necessary, you can opt out of telemetry.
1.4. Installing a cluster on AWS with network customizations
In OpenShift Container Platform version 4.1, you can install a cluster on Amazon Web Services (AWS) with customized network configuration options. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations.
You must set most of the network configuration parameters during installation, and you can modify only kubeProxy
configuration parameters in a running cluster.
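For reference, those kubeProxy settings live in the Network.operator.openshift.io custom resource that is described later in this section, so on a running cluster you can inspect or edit them with oc. This is only a sketch and assumes cluster-admin access:

$ oc get network.operator.openshift.io cluster -o yaml
$ oc edit network.operator.openshift.io cluster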
Prerequisites
- Review details about the OpenShift Container Platform installation and update processes.
Configure an AWS account to host the cluster.
Important: If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.
- If you use a firewall, you must configure it to access Red Hat Insights.
1.4.1. Internet and Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.1, Telemetry is the component that provides metrics about cluster health and the success of updates. To perform subscription management, including legally entitling your purchase from Red Hat, you must use the Telemetry service and access the Red Hat OpenShift Cluster Manager page.
Because there is no disconnected subscription management, you cannot both opt out of sending data back to Red Hat and entitle your purchase. Support for disconnected subscription management might be added in future releases of OpenShift Container Platform.
Your machines must have direct internet access to install the cluster.
You must have internet access to:
- Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site to download the installation program
- Access Quay.io to obtain the packages that are required to install your cluster
- Obtain the packages that are required to perform cluster updates
- Access Red Hat’s software as a service page to perform subscription management
1.4.2. Generating an SSH private key and adding it to the agent
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, you must provide an SSH key that your ssh-agent
process uses to the installer.
You can use this key to SSH into the master nodes as the user core
. When you deploy the cluster, the key is added to the core
user’s ~/.ssh/authorized_keys
list.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
- If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

$ ssh-keygen -t rsa -b 4096 -N '' \
    -f <path>/<file_name> 1

- 1: Specify the path and file name, such as ~/.ssh/id_rsa, of the SSH key.

Running this command generates an SSH key that does not require a password in the location that you specified.

- Start the ssh-agent process as a background task:

$ eval "$(ssh-agent -s)"
Agent pid 31874

- Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name> 1
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

- 1: Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.
Next steps
When you install OpenShift Container Platform, provide the SSH public key to the installer. If you install a cluster on infrastructure that you provision, you must provide this key to your cluster’s machines.
1.4.3. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
- You must install the cluster from a computer that uses Linux or macOS.
- You need 300 MB of local disk space to download the installation program.
Procedure
- Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

Important: The installation program creates several files on the computer that you use to install your cluster. You must keep both the installation program and the files that the installation program creates after you finish installing the cluster.

- Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

$ tar xvf <installation_program>.tar.gz

- From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file or copy it to your clipboard. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
1.4.4. Creating the installation configuration file
You can customize your installation of OpenShift Container Platform on a compatible cloud.
Prerequisites
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
- Create the install-config.yaml file. Run the following command:

$ ./openshift-install create install-config --dir=<installation_directory> 1

- 1: For <installation_directory>, specify the directory name to store the files that the installation program creates.

Important: Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

- At the prompts, provide the configuration details for your cloud:
- Optional: Select an SSH key to use to access your cluster machines.

Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, you must provide an SSH key that your ssh-agent process uses to the installation program.

- Select AWS as the platform to target.
- If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.
- Select the AWS region to deploy the cluster to.
- Select the base domain for the Route53 service that you configured for your cluster.
- Enter a descriptive name for your cluster.
- Paste the pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site.
- Modify the install-config.yaml file. You can find more information about the available parameters in the Installation configuration parameters section and in the Go documentation.
- Back up the install-config.yaml file so that you can use it to install multiple clusters.

Important: The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
1.4.4.1. Installation configuration parameters
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your Amazon Web Services (AWS) account and optionally customize your cluster’s platform. When you create the install-config.yaml
installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml
file to provide more details about the platform.
You cannot modify these parameters after installation.
Parameter | Description | Values |
---|---|---|
baseDomain | The base domain of your cloud provider. This value is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values. | A fully-qualified domain or subdomain name. |
controlPlane.platform | The cloud provider to host the control plane machines. This parameter value must match the compute.platform parameter value. | aws |
compute.platform | The cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. | aws |
metadata.name | The name of your cluster. | A string that contains uppercase or lowercase letters. |
platform.aws.region | The region to deploy your cluster in. | A valid AWS region. |
pullSecret | The pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site. You use this pull secret to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. | { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"you@example.com" }, "quay.io":{ "auth":"b3Blb=", "email":"you@example.com" } } } |
Parameter | Description | Values |
---|---|---|
| The SSH key to use to access your cluster machines. Note
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, you must provide an SSH key that your |
A valid, local public SSH key that you added to the |
|
Whether to enable or disable simultaneous multithreading, or Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. |
|
| The Input/Output Operations Per Second (IOPS) that is reserved for the root volume. |
Integer, for example |
| The size in GiB of the root volume. |
Integer, for example |
| The instance type of the root volume. |
Valid AWS EBS instance type, such as |
| The EC2 instance type for the compute machines. |
Valid AWS instance type, such as |
| The availability zones where the installation program creates machines for the compute MachinePool. |
A list of valid AWS availability zones, such as |
| The AWS region that the installation program creates compute resources in. |
Valid AWS region, such as |
| The number of compute machines, which are also known as worker machines, to provision. |
A positive integer greater than or equal to |
|
Whether to enable or disable simultaneous multithreading, or Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. |
|
| The EC2 instance type for the control plane machines. |
Valid AWS instance type, such as |
| The availability zones where the installation program creates machines for the control plane MachinePool. |
A list of valid AWS availability zones, such as |
| The AWS region that the installation program creates control plane resources in. |
Valid AWS region, such as |
| The number of control plane machines to provision. |
A positive integer greater than or equal to |
| A map of keys and values that the installation program adds as tags to all resources that it creates. |
Any valid YAML map, such as key value pairs in the |
1.4.4.2. Network configuration parameters
You can modify your cluster network configuration parameters in the install-config.yaml
configuration file. The following table describes the parameters.
You cannot modify these parameters after installation.
Parameter | Description | Values |
---|---|---|
|
The network plug-in to deploy. |
|
|
A block of IP addresses from which Pod IP addresses are allocated. The |
An IP address allocation in CIDR format. The default value is |
|
The subnet prefix length to assign to each individual node. For example, if |
A subnet prefix. The default value is |
|
A block of IP addresses for services. |
An IP address allocation in CIDR format. The default value is |
| A block of IP addresses used by the OpenShift Container Platform installation program while installing the cluster. The address block must not overlap with any other network block. |
An IP address allocation in CIDR format. The default value is |
1.4.4.3. Sample customized install-config.yaml file for AWS
You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1
baseDomain: example.com 1
controlPlane: 2
  hyperthreading: Enabled 3 4
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
      - us-west-2b
      rootVolume:
        iops: 4000
        size: 500
        type: io1
      type: m5.xlarge 5
  replicas: 3
compute: 6
- hyperthreading: Enabled 7
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 8
      type: c5.4xlarge
      zones:
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster 9
networking: 10
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineCIDR: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 11
    userTags:
      adminContact: jdoe
      costCenter: 7536
pullSecret: '{"auths": ...}' 12
sshKey: ssh-ed25519 AAAA... 13
- 1 9 11 12: Required. The installation program prompts you for this value.
- 2 6 10: If you do not provide these parameters and values, the installation program provides the default value.
- 3 7: The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used.
- 4 5: Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.

Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.

- 8: To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.
- 13: You can optionally provide the sshKey value that you use to access the machines in your cluster.

Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, you must provide an SSH key that your ssh-agent process uses to the installation program.
1.4.5. Modifying advanced network configuration parameters
You can modify the advanced network configuration parameters only before you install the cluster. Advanced configuration customization lets you integrate your cluster into your existing network environment by specifying an MTU or VXLAN port, by allowing customization of kube-proxy settings, and by specifying a different mode
for the openshiftSDNConfig
parameter.
Modifying the OpenShift Container Platform manifest files directly is not supported.
Prerequisites
- Generate the install-config.yaml file and complete any modifications to it.
Procedure
Use the following command to create manifests:
$ ./openshift-install create manifests --dir=<installation_directory> 1
- 1
- For
<installation_directory>
, specify the name of the directory that contains theinstall-config.yaml
file for your cluster.
Create a file that is named
cluster-network-03-config.yml
in the<installation_directory>/manifests/
directory:$ touch <installation_directory>/manifests/cluster-network-03-config.yml 1
- 1: For <installation_directory>, specify the directory name that contains the manifests/ directory for your cluster.
After creating the file, three network configuration files are in the manifests/ directory, as shown:
$ ls <installation_directory>/manifests/cluster-network-*
cluster-network-01-crd.yml
cluster-network-02-config.yml
cluster-network-03-config.yml
Open the cluster-network-03-config.yml file in an editor and enter a CR that describes the Operator configuration you want:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec: 1
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  defaultNetwork:
    type: OpenShiftSDN
    openshiftSDNConfig:
      mode: NetworkPolicy
      mtu: 1450
      vxlanPort: 4789
- 1: The parameters for the spec field are only an example. Specify your configuration for the Network Operator in the CR.
The Network Operator provides default values for the parameters in the CR, so you must specify only the parameters that you want to change in the Network.operator.openshift.io CR.
Save the
cluster-network-03-config.yml
file and quit the text editor. -
Optional: Back up the
manifests/cluster-network-03-config.yml
file. The installation program deletes themanifests/
directory when creating the cluster.
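For example, a minimal sketch of backing up the file before you create the cluster; the destination path is arbitrary:
$ cp <installation_directory>/manifests/cluster-network-03-config.yml ~/cluster-network-03-config.yml.bak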
1.4.6. Cluster Network Operator custom resource (CR)
The cluster network configuration in the Network.operator.openshift.io custom resource (CR) stores the configuration settings for the Cluster Network Operator (CNO).
The following CR displays the default configuration for the CNO and explains both the parameters you can configure and valid parameter values:
Cluster Network Operator CR
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork: 1
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork: 2
  - 172.30.0.0/16
  defaultNetwork:
    type: OpenShiftSDN 3
    openshiftSDNConfig: 4
      mode: NetworkPolicy 5
      mtu: 1450 6
      vxlanPort: 4789 7
  kubeProxyConfig: 8
    iptablesSyncPeriod: 30s 9
    proxyArguments:
      iptables-min-sync-period: 10
      - 30s
- 1 2 3: Specified in the install-config.yaml file.
- 4: Specify only if you want to override part of the OpenShift Container Platform SDN configuration.
- 5: Configures the isolation mode for OpenShiftSDN. The allowed values are Multitenant, Subnet, or NetworkPolicy. The default value is NetworkPolicy.
- 6: MTU for the VXLAN overlay network. This value is normally configured automatically, but if the nodes in your cluster do not all use the same MTU, then you must set this explicitly to 50 less than the smallest node MTU value.
- 7: The port to use for all VXLAN packets. The default value is 4789. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for VXLAN, because both SDNs use the same default VXLAN port number.
  On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 (see the example after this list).
- 8: The parameters for this object specify the kube-proxy configuration. If you do not specify the parameter values, the Network Operator applies the displayed default parameter values.
- 9: The refresh period for iptables rules. The default value is 30s. Valid suffixes include s, m, and h and are described in the Go time package documentation.
- 10: The minimum duration before refreshing iptables rules. This parameter ensures that the refresh does not happen too frequently. Valid suffixes include s, m, and h and are described in the Go time package documentation.
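For example, because the Network Operator supplies default values for any parameter you omit, a CR that overrides only the VXLAN port on AWS can be as small as the following sketch. The port 9789 is an arbitrary value chosen from the permitted 9000 to 9999 range, not a recommendation:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OpenShiftSDN
    openshiftSDNConfig:
      vxlanPort: 9789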
1.4.7. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the installation program only once, during initial installation.
Prerequisites
- Configure an account with the cloud platform that hosts your cluster.
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Run the installation program:
$ ./openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level info 2
- 1: For <installation_directory>, specify the location of your customized ./install-config.yaml file.
- 2: To view different installation details, specify warn, debug, or error instead of info.
Important: Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
Provide values at the prompts:
- Optional: Select an SSH key to use to access your cluster machines.
  Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, you must provide an SSH key that your ssh-agent process uses to the installation program.
- Select AWS as the platform to target.
- If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.
- Select the AWS region to deploy the cluster to.
- Select the base domain for the Route53 service that you configured for your cluster.
- Enter a descriptive name for your cluster.
- Paste the pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site.
Note: If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.
Important: The Ignition config files that the installation program generates contain certificates that expire after 24 hours. You must keep the cluster running for 24 hours in a non-degraded state to ensure that the first certificate rotation has finished.
Important: You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
- Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster, as in the sketch that follows this procedure.
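For example, if the AdministratorAccess policy is attached directly to an IAM user, the following AWS CLI sketch detaches it; the user name is a placeholder for the account that you used:
$ aws iam detach-user-policy \
    --user-name <user_name> \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess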
1.4.8. Installing the OpenShift Command-line Interface
You can download and install the OpenShift Command-line Interface (CLI), commonly known as oc.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.1. You must download and install the new version of oc.
Procedure
- From the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site, navigate to the page for your installation type and click Download Command-line Tools.
- From the site that is displayed, download the compressed file for your operating system.
  Note: You can install oc on Linux, Windows, or macOS.
- Extract the compressed file and place it in a directory that is on your PATH, as shown in the sketch after this procedure.
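For example, on Linux the extraction might look like the following sketch; the archive name is a placeholder, and the target directory assumes that /usr/local/bin is on your PATH:
$ tar xvzf <oc_archive>.tar.gz
$ sudo mv oc /usr/local/bin/
$ oc version    # verify that the new client is the one on your PATH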
1.4.9. Logging in to the cluster
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- Deploy an OpenShift Container Platform cluster.
- Install the oc CLI.
Procedure
Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
$ oc whoami
system:admin
- 1: For <installation_directory>, specify the path to the directory in which you stored the installation files.
A quick optional check with the CLI follows this procedure.
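For example, as an optional sanity check that the exported kubeconfig points at the intended cluster, you can list the nodes and cluster Operators; this is only a sketch and requires a reachable cluster:
$ oc get nodes
$ oc get clusteroperators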
Next steps
- Customize your cluster.
- If necessary, you can opt out of telemetry.
1.5. Uninstalling a cluster on AWS
You can remove a cluster that you deployed to Amazon Web Services (AWS).
1.5.1. Removing a cluster from AWS
You can remove a cluster that you installed on Amazon Web Services (AWS).
Prerequisites
- Have a copy of the installation program that you used to deploy the cluster.
- Have the files that the installation program generated when you created your cluster.
Procedure
From the computer that you used to install the cluster, run the following command:
$ ./openshift-install destroy cluster \
    --dir=<installation_directory> \ 1
    --log-level=info 2
- 1: For <installation_directory>, specify the path to the directory in which you stored the installation files.
- 2: To view different details, specify warn, debug, or error instead of info.
Note: You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster.
Optional: Delete the
<installation_directory>
directory and the OpenShift Container Platform installation program.
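Optionally, you can confirm that no cluster instances remain. The installation program tags the AWS resources it creates with a kubernetes.io/cluster/<infra_id> key, so a check might look like the following sketch; the region and the <infra_id> placeholder are assumptions for illustration, and an empty result indicates that no tagged instances remain:
$ aws ec2 describe-instances --region us-west-2 \
    --filters "Name=tag-key,Values=kubernetes.io/cluster/<infra_id>" \
    --query "Reservations[].Instances[].InstanceId"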