Chapter 2. Installing on user-provisioned AWS
2.1. Installing a cluster on AWS using CloudFormation templates
In OpenShift Container Platform version 4.1, you can install a cluster on Amazon Web Services (AWS) using infrastructure that you provide.
One way to create this infrastructure is to use the provided CloudFormation templates. You can modify the templates to customize your infrastructure or use the information that they contain to create AWS objects according to your company’s policies.
Prerequisites
- Review details about the OpenShift Container Platform installation and update processes.
- Configure an AWS account to host the cluster.
Important: If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program (see the example after this list).
- Download the AWS CLI and install it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) in the AWS documentation.
- If you use a firewall, you must configure it to access Red Hat Insights.
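For example, one way to supply long-lived, key-based credentials is to store them in a profile with the aws configure command. The values shown here are placeholders:
$ aws configure
AWS Access Key ID [None]: <access_key_id>
AWS Secret Access Key [None]: <secret_access_key>
Default region name [None]: us-east-2
Default output format [None]: json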
2.1.1. Internet and Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.1, Telemetry is the component that provides metrics about cluster health and the success of updates. To perform subscription management, including legally entitling your purchase from Red Hat, you must use the Telemetry service and access the Red Hat OpenShift Cluster Manager page.
Because there is no disconnected subscription management, you cannot both opt out of sending data back to Red Hat and entitle your purchase. Support for disconnected subscription management might be added in future releases of OpenShift Container Platform.
Your machines must have direct internet access to install the cluster.
You must have internet access to:
- Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site to download the installation program
- Access Quay.io to obtain the packages that are required to install your cluster
- Obtain the packages that are required to perform cluster updates
- Access Red Hat’s software as a service page to perform subscription management
2.1.2. Required AWS infrastructure components
To install OpenShift Container Platform on user-provisioned infrastructure in Amazon Web Services (AWS), you must manually create both the machines and their supporting infrastructure.
For more information about the integration testing for different platforms, see the OpenShift Container Platform 4.x Tested Integrations page.
You can use the provided CloudFormation templates to create this infrastructure, you can manually create the components, or you can reuse existing infrastructure that meets the cluster requirements. Review the CloudFormation templates for more details about how the components interrelate.
2.1.2.1. Cluster machines
You need AWS::EC2::Instance objects for the following machines:
- A bootstrap machine. This machine is required during installation, but you can remove it after your cluster deploys.
- At least three control plane machines. The control plane machines are not governed by a MachineSet.
- Compute machines. You must create at least two compute machines, which are also known as worker machines, during installation. These machines are not governed by a MachineSet.
You can use the following instance types for the cluster machines:
Note: If m4 instance types are not available in your region, such as with eu-west-3, use m5 types instead.
Instance type | Bootstrap | Control plane | Compute |
---|---|---|---|
i3.large | x | | |
m4.large | | | x |
m4.xlarge | | x | x |
m4.2xlarge | | x | x |
m4.4xlarge | | x | x |
m4.8xlarge | | x | x |
m4.10xlarge | | x | x |
m4.16xlarge | | x | x |
c4.large | | | x |
c4.xlarge | | | x |
c4.2xlarge | | x | x |
c4.4xlarge | | x | x |
c4.8xlarge | | x | x |
r4.large | | | x |
r4.xlarge | | x | x |
r4.2xlarge | | x | x |
r4.4xlarge | | x | x |
r4.8xlarge | | x | x |
r4.16xlarge | | x | x |
You might be able to use other instance types that meet the specifications of these instance types.
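If you are unsure whether an instance type is offered in your target region, you can check with the AWS CLI before you create your parameter files. This is a hedged example that requires a reasonably recent AWS CLI; the instance type and region are illustrative:
$ aws ec2 describe-instance-type-offerings \
    --location-type region \
    --filters Name=instance-type,Values=m4.xlarge \
    --region eu-west-3
If the command returns no offering, choose an equivalent type, such as the corresponding m5 size.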
2.1.2.2. Certificate signing requests management
Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
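For example, with the OpenShift CLI you can list pending CSRs, inspect one, and approve it after you verify it. This is a sketch of the general flow, not a complete verification method:
$ oc get csr
$ oc describe csr <csr_name>
$ oc adm certificate approve <csr_name>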
2.1.2.3. Other infrastructure components
- A VPC
- DNS entries
- Load balancers (classic or network) and listeners
- A Route53 zone
- Security groups
- IAM roles
- S3 buckets
Required VPC components
You must provide a suitable VPC and subnets that allow communication to your machines.
Component | AWS type | Description |
---|---|---|
VPC | AWS::EC2::VPC | You must provide a public VPC for the cluster to use. The VPC requires an endpoint that references the route tables for each subnet. |
Public subnets | AWS::EC2::Subnet, AWS::EC2::SubnetNetworkAclAssociation | Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. |
Internet gateway | AWS::EC2::InternetGateway, AWS::EC2::VPCGatewayAttachment, AWS::EC2::RouteTable, AWS::EC2::Route | You must have a public internet gateway, with public routes, attached to the VPC. Each public subnet must also be attached to the route and have a NAT gateway and EIP address. |
Network access control | AWS::EC2::NetworkAcl, AWS::EC2::NetworkAclEntry | You must allow the VPC to access the ports that are listed in the following table. |
Private subnets | AWS::EC2::Subnet, AWS::EC2::RouteTable, AWS::EC2::SubnetRouteTableAssociation | Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. |

Port | Reason |
---|---|
80 | Inbound HTTP traffic |
443 | Inbound HTTPS traffic |
22 | Inbound SSH traffic |
1024 - 65535 | Inbound ephemeral traffic |
0 - 65535 | Outbound ephemeral traffic |
Required DNS and load balancing components
Your DNS and load balancer configuration needs to use a public hosted zone and can use a private hosted zone similar to the one that the installation program uses if it provisions the cluster’s infrastructure. You must create a DNS entry that resolves to your load balancer. An entry for api.<cluster_name>.<domain> must point to the external load balancer, and an entry for api-int.<cluster_name>.<domain> must point to the internal load balancer.
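After you create these records, you can spot-check resolution with a standard DNS tool. Because the api-int record lives in the private hosted zone, query it from a host inside the VPC. The cluster name and domain here are examples:
$ dig +short api.mycluster.example.com
$ dig +short api-int.mycluster.example.com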
The cluster also requires load balancers and listeners for port 6443, which is required for the Kubernetes API and its extensions, and port 22623, which is required for the Ignition config files for new machines. The targets will be the master nodes. Port 6443 must be accessible to both clients external to the cluster and nodes within the cluster. Port 22623 must be accessible to nodes within the cluster.
Component | AWS type | Description |
---|---|---|
DNS | AWS::Route53::HostedZone | The hosted zone for your internal DNS. |
etcd record sets | AWS::Route53::RecordSet | The registration records for etcd for your control plane machines. |
Public load balancer | AWS::ElasticLoadBalancingV2::LoadBalancer | The load balancer for your public subnets. |
External API server record | AWS::Route53::RecordSetGroup | Alias records for the external API server. |
External listener | AWS::ElasticLoadBalancingV2::Listener | A listener on port 6443 for the external load balancer. |
External target group | AWS::ElasticLoadBalancingV2::TargetGroup | The target group for the external load balancer. |
Private load balancer | AWS::ElasticLoadBalancingV2::LoadBalancer | The load balancer for your private subnets. |
Internal API server record | AWS::Route53::RecordSetGroup | Alias records for the internal API server. |
Internal listener | AWS::ElasticLoadBalancingV2::Listener | A listener on port 22623 for the internal load balancer. |
Internal target group | AWS::ElasticLoadBalancingV2::TargetGroup | The target group for the internal load balancer. |
Internal listener | AWS::ElasticLoadBalancingV2::Listener | A listener on port 6443 for the internal load balancer. |
Internal target group | AWS::ElasticLoadBalancingV2::TargetGroup | The target group for the internal load balancer. |
Security groups
The control plane, worker, and bootstrap machines require access to the following ports:
Group | Type | IP Protocol | Port range |
---|---|---|---|
MasterSecurityGroup | AWS::EC2::SecurityGroup | icmp | 0 |
 | | tcp | 22 |
 | | tcp | 6443 |
 | | tcp | 22623 |
WorkerSecurityGroup | AWS::EC2::SecurityGroup | icmp | 0 |
 | | tcp | 22 |
BootstrapSecurityGroup | AWS::EC2::SecurityGroup | tcp | 22 |
 | | tcp | 19531 |
Control plane Ingress
The control plane machines require the following Ingress groups. Each Ingress group is an AWS::EC2::SecurityGroupIngress resource.
Ingress group | Description | IP protocol | Port range |
---|---|---|---|
MasterIngressEtcd | etcd | tcp | 2379 - 2380 |
MasterIngressVxlan | Vxlan packets | udp | 4789 |
MasterIngressWorkerVxlan | Vxlan packets | udp | 4789 |
MasterIngressInternal | Internal cluster communication and Kubernetes proxy metrics | tcp | 9000 - 9999 |
MasterIngressWorkerInternal | Internal cluster communication | tcp | 9000 - 9999 |
MasterIngressKube | Kubernetes kubelet, scheduler and controller manager | tcp | 10250 - 10259 |
MasterIngressWorkerKube | Kubernetes kubelet, scheduler and controller manager | tcp | 10250 - 10259 |
MasterIngressIngressServices | Kubernetes Ingress services | tcp | 30000 - 32767 |
MasterIngressWorkerIngressServices | Kubernetes Ingress services | tcp | 30000 - 32767 |
Worker Ingress
The worker machines require the following Ingress groups. Each Ingress group is an AWS::EC2::SecurityGroupIngress resource.
Ingress group | Description | IP protocol | Port range |
---|---|---|---|
WorkerIngressVxlan | Vxlan packets | udp | 4789 |
WorkerIngressWorkerVxlan | Vxlan packets | udp | 4789 |
WorkerIngressInternal | Internal cluster communication | tcp | 9000 - 9999 |
WorkerIngressWorkerInternal | Internal cluster communication | tcp | 9000 - 9999 |
WorkerIngressKube | Kubernetes kubelet, scheduler and controller manager | tcp | 10250 |
WorkerIngressWorkerKube | Kubernetes kubelet, scheduler and controller manager | tcp | 10250 |
WorkerIngressIngressServices | Kubernetes Ingress services | tcp | 30000 - 32767 |
WorkerIngressWorkerIngressServices | Kubernetes Ingress services | tcp | 30000 - 32767 |
Roles and instance profiles
You must grant the machines permissions in AWS. The provided CloudFormation templates grant the machines permissions through the following AWS::IAM::Role objects and provide an AWS::IAM::InstanceProfile for each set of roles. If you do not use the templates, you can grant the machines the following broad permissions or the following individual permissions.
Role | Effect | Action | Resource |
---|---|---|---|
Master | Allow | ec2:* | * |
 | Allow | elasticloadbalancing:* | * |
 | Allow | iam:PassRole | * |
 | Allow | s3:GetObject | * |
Worker | Allow | ec2:Describe* | * |
Bootstrap | Allow | ec2:Describe* | * |
 | Allow | ec2:AttachVolume | * |
 | Allow | ec2:DetachVolume | * |
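For reference, a minimal sketch of the policy document behind the bootstrap role, based on the individual permissions in the preceding table; the statement ID is illustrative:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BootstrapNodePolicy",
      "Effect": "Allow",
      "Action": [
        "ec2:Describe*",
        "ec2:AttachVolume",
        "ec2:DetachVolume"
      ],
      "Resource": "*"
    }
  ]
}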
2.1.2.4. Required AWS permissions
When you attach the AdministratorAccess
policy to the IAM user that you create, you grant that user all of the required permissions. To deploy an OpenShift Container Platform cluster, the IAM user requires the following permissions:
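If you grant the broad AdministratorAccess policy instead of the individual permissions that follow, one way to attach it with the AWS CLI is shown here; the user name is a placeholder:
$ aws iam attach-user-policy \
    --user-name <iam_user> \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess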
Required EC2 permissions for installation
- ec2:AllocateAddress
- ec2:AssociateAddress
- ec2:AssociateDhcpOptions
- ec2:AssociateRouteTable
- ec2:AttachInternetGateway
- ec2:AuthorizeSecurityGroupEgress
- ec2:AuthorizeSecurityGroupIngress
- ec2:CopyImage
- ec2:CreateDhcpOptions
- ec2:CreateInternetGateway
- ec2:CreateNatGateway
- ec2:CreateRoute
- ec2:CreateRouteTable
- ec2:CreateSecurityGroup
- ec2:CreateSubnet
- ec2:CreateTags
- ec2:CreateVpc
- ec2:CreateVpcEndpoint
- ec2:CreateVolume
- ec2:DescribeAccountAttributes
- ec2:DescribeAddresses
- ec2:DescribeAvailabilityZones
- ec2:DescribeDhcpOptions
- ec2:DescribeImages
- ec2:DescribeInstanceAttribute
- ec2:DescribeInstanceCreditSpecifications
- ec2:DescribeInstances
- ec2:DescribeInternetGateways
- ec2:DescribeKeyPairs
- ec2:DescribeNatGateways
- ec2:DescribeNetworkAcls
- ec2:DescribePrefixLists
- ec2:DescribeRegions
- ec2:DescribeRouteTables
- ec2:DescribeSecurityGroups
- ec2:DescribeSubnets
- ec2:DescribeTags
- ec2:DescribeVpcEndpoints
- ec2:DescribeVpcs
- ec2:DescribeVpcAttribute
- ec2:DescribeVolumes
- ec2:DescribeVpcClassicLink
- ec2:DescribeVpcClassicLinkDnsSupport
- ec2:ModifyInstanceAttribute
- ec2:ModifySubnetAttribute
- ec2:ModifyVpcAttribute
- ec2:RevokeSecurityGroupEgress
- ec2:RunInstances
- ec2:TerminateInstances
- ec2:RevokeSecurityGroupIngress
- ec2:ReplaceRouteTableAssociation
- ec2:DescribeNetworkInterfaces
- ec2:ModifyNetworkInterfaceAttribute
Required Elasticloadbalancing permissions for installation
- elasticloadbalancing:AddTags
- elasticloadbalancing:ApplySecurityGroupsToLoadBalancer
- elasticloadbalancing:AttachLoadBalancerToSubnets
- elasticloadbalancing:CreateListener
- elasticloadbalancing:CreateLoadBalancer
- elasticloadbalancing:CreateLoadBalancerListeners
- elasticloadbalancing:CreateTargetGroup
- elasticloadbalancing:ConfigureHealthCheck
- elasticloadbalancing:DeregisterInstancesFromLoadBalancer
- elasticloadbalancing:DeregisterTargets
- elasticloadbalancing:DescribeInstanceHealth
- elasticloadbalancing:DescribeListeners
- elasticloadbalancing:DescribeLoadBalancers
- elasticloadbalancing:DescribeLoadBalancerAttributes
- elasticloadbalancing:DescribeTags
- elasticloadbalancing:DescribeTargetGroupAttributes
- elasticloadbalancing:DescribeTargetHealth
- elasticloadbalancing:ModifyLoadBalancerAttributes
- elasticloadbalancing:ModifyTargetGroup
- elasticloadbalancing:ModifyTargetGroupAttributes
- elasticloadbalancing:RegisterTargets
- elasticloadbalancing:RegisterInstancesWithLoadBalancer
- elasticloadbalancing:SetLoadBalancerPoliciesOfListener
Required IAM permissions for installation
- iam:AddRoleToInstanceProfile
- iam:CreateInstanceProfile
- iam:CreateRole
- iam:DeleteInstanceProfile
- iam:DeleteRole
- iam:DeleteRolePolicy
- iam:GetInstanceProfile
- iam:GetRole
- iam:GetRolePolicy
- iam:GetUser
- iam:ListInstanceProfilesForRole
- iam:ListRoles
- iam:ListUsers
- iam:PassRole
- iam:PutRolePolicy
- iam:RemoveRoleFromInstanceProfile
- iam:SimulatePrincipalPolicy
- iam:TagRole
Required Route53 permissions for installation
- route53:ChangeResourceRecordSets
- route53:ChangeTagsForResource
- route53:GetChange
- route53:GetHostedZone
- route53:CreateHostedZone
- route53:ListHostedZones
- route53:ListHostedZonesByName
- route53:ListResourceRecordSets
- route53:ListTagsForResource
- route53:UpdateHostedZoneComment
Required S3 permissions for installation
- s3:CreateBucket
- s3:DeleteBucket
- s3:GetAccelerateConfiguration
- s3:GetBucketCors
- s3:GetBucketLocation
- s3:GetBucketLogging
- s3:GetBucketObjectLockConfiguration
- s3:GetBucketReplication
- s3:GetBucketRequestPayment
- s3:GetBucketTagging
- s3:GetBucketVersioning
- s3:GetBucketWebsite
- s3:GetEncryptionConfiguration
- s3:GetLifecycleConfiguration
- s3:GetReplicationConfiguration
- s3:ListBucket
- s3:PutBucketAcl
- s3:PutBucketTagging
- s3:PutEncryptionConfiguration
S3 permissions that cluster Operators require
- s3:PutObject
- s3:PutObjectAcl
- s3:PutObjectTagging
- s3:GetObject
- s3:GetObjectAcl
- s3:GetObjectTagging
- s3:GetObjectVersion
- s3:DeleteObject
All additional permissions that are required to uninstall a cluster
- autoscaling:DescribeAutoScalingGroups
- ec2:DeleteDhcpOptions
- ec2:DeleteInternetGateway
- ec2:DeleteNatGateway
- ec2:DeleteNetworkInterface
- ec2:DeleteRoute
- ec2:DeleteRouteTable
- ec2:DeleteSnapshot
- ec2:DeleteSecurityGroup
- ec2:DeleteSubnet
- ec2:DeleteVolume
- ec2:DeleteVpc
- ec2:DeleteVpcEndpoints
- ec2:DeregisterImage
- ec2:DetachInternetGateway
- ec2:DisassociateRouteTable
- ec2:ReleaseAddress
- elasticloadbalancing:DescribeTargetGroups
- elasticloadbalancing:DeleteTargetGroup
- elasticloadbalancing:DeleteLoadBalancer
- iam:ListInstanceProfiles
- iam:ListRolePolicies
- iam:ListUserPolicies
- route53:DeleteHostedZone
- tag:GetResources
2.1.3. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
- You must install the cluster from a computer that uses Linux or macOS.
- You need 300 MB of local disk space to download the installation program.
Procedure
- Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.
Important: The installation program creates several files on the computer that you use to install your cluster. You must keep both the installation program and the files that the installation program creates after you finish installing the cluster.
- Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar xvf <installation_program>.tar.gz
- From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file or copy it to your clipboard. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
2.1.4. Generating an SSH private key and adding it to the agent
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, you must provide the installer with an SSH key that your ssh-agent process uses.
You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user’s ~/.ssh/authorized_keys list.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t rsa -b 4096 -N '' \
    -f <path>/<file_name> 1
- 1: Specify the path and file name, such as ~/.ssh/id_rsa, of the SSH key.
Running this command generates an SSH key that does not require a password in the location that you specified.
Start the ssh-agent process as a background task:
$ eval "$(ssh-agent -s)"
Agent pid 31874
Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name> 1
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
- 1: Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.
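To confirm that the key was added, you can list the identities that the agent currently holds:
$ ssh-add -l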
Next steps
When you install OpenShift Container Platform, provide the SSH public key to the installer. If you install a cluster on infrastructure that you provision, you must provide this key to your cluster’s machines.
2.1.5. Creating the installation files for AWS
To install OpenShift Container Platform on Amazon Web Services (AWS) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files.
The Ignition config files that the installation program generates contain certificates that expire after 24 hours. You must complete your cluster installation and keep the cluster running for 24 hours in a non-degraded state to ensure that the first certificate rotation has finished.
Prerequisites
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Obtain the install-config.yaml file. Run the following command:
$ ./openshift-install create install-config --dir=<installation_directory> 1
- 1: For <installation_directory>, specify the directory name to store the files that the installation program creates.
Important: Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, you must provide an SSH key that your ssh-agent process uses to the installation program.
- Select AWS as the platform to target.
- If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.
- Select the AWS region to deploy the cluster to.
- Select the base domain for the Route53 service that you configured for your cluster.
- Enter a descriptive name for your cluster.
- Paste the pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site.
Edit the install-config.yaml file to set the number of compute, or worker, replicas to 0, as shown in the following compute stanza:
compute:
- hyperthreading: Enabled
  name: worker
  platform: {}
  replicas: 0
Optional: Back up the install-config.yaml file.
Important: The install-config.yaml file is consumed during the next step. If you want to reuse the file, back it up now.
Remove the Kubernetes manifest files for the control plane machines. By removing these files, you prevent the cluster from automatically generating control plane machines.
Generate the Kubernetes manifests for the cluster:
$ ./openshift-install create manifests --dir=<installation_directory> 1
WARNING There are no compute nodes specified. The cluster will not fully initialize without compute nodes.
INFO Consuming "Install Config" from target directory
- 1: For <installation_directory>, specify the same installation directory.
Because you create your own compute machines later in the installation process, you can safely ignore this warning.
Remove the files that define the control plane machines:
$ rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml
Remove the Kubernetes manifest files that define the worker machines:
$ rm -f openshift/99_openshift-cluster-api_worker-machineset-*
Because you create and manage the worker machines yourself, you do not need to initialize these machines.
Obtain the Ignition config files:
$ ./openshift-install create ignition-configs --dir=<installation_directory> 1
- 1: For <installation_directory>, specify the same installation directory.
The following files are generated in the directory:
.
├── auth
│   ├── kubeadmin-password
│   └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
2.1.6. Extracting the infrastructure name
The Ignition configs contain a unique cluster identifier that you can use to uniquely identify your cluster in Amazon Web Services (AWS) tags. The provided CloudFormation templates contain references to this tag, so you must extract it.
Prerequisites
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
- Generate the Ignition config files for your cluster.
- Install the jq package.
Procedure
To extract the infrastructure name from the Ignition config file metadata, run the following command:
$ jq -r .infraID <installation_directory>/metadata.json 1
openshift-vw9j6 2
- 1: For <installation_directory>, specify the path to the directory that you stored the installation files in.
- 2: The output of this command is your cluster name and a random string.
You need the output of this command to configure the provided CloudFormation templates and can use it in other AWS tags.
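Because later steps and the CloudFormation parameter files reference this value repeatedly, you might find it convenient to capture it in a shell variable; the variable name INFRA_ID is illustrative:
$ export INFRA_ID=$(jq -r .infraID <installation_directory>/metadata.json)
$ echo $INFRA_ID
openshift-vw9j6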
2.1.7. Creating a VPC in AWS
You must create a VPC in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements, including VPN and route tables. The easiest way to create the VPC is to modify the provided CloudFormation template.
If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.
Prerequisites
- Configure an AWS account.
- Generate the Ignition config files for your cluster.
Procedure
Create a JSON file that contains the parameter values that the template requires:
[ { "ParameterKey": "VpcCidr", 1 "ParameterValue": "10.0.0.0/16" 2 }, { "ParameterKey": "AvailabilityZoneCount", 3 "ParameterValue": "1" 4 }, { "ParameterKey": "SubnetBits", 5 "ParameterValue": "12" 6 } ]
- Copy the template from the CloudFormation template for the VPC section of this topic and save it as a YAML file on your computer. This template describes the VPC that your cluster requires.
Launch the template:
Important: You must enter the command on a single line.
$ aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3
- 1: <name> is the name for the CloudFormation stack, such as cluster-vpc. You need the name of this stack if you remove the cluster.
- 2: <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
- 3: <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
Confirm that the template components exist:
$ aws cloudformation describe-stacks --stack-name <name>
After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:
- VpcId: The ID of your VPC.
- PublicSubnetIds: The IDs of the new public subnets.
- PrivateSubnetIds: The IDs of the new private subnets.
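If you only want the output values, you can filter the same describe-stacks call with a --query expression. The stack name here is an example, and the same pattern works for the stacks that you create in later sections:
$ aws cloudformation describe-stacks --stack-name cluster-vpc \
    --query 'Stacks[0].Outputs' --output table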
2.1.7.1. CloudFormation template for the VPC
You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster.
AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$ ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)" MinValue: 1 MaxValue: 3 Default: 1 Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: "Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Network Configuration" Parameters: - VpcCidr - SubnetBits - Label: default: "Availability Zones" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: "Availability Zone Count" VpcCidr: default: "VPC CIDR" SubnetBits: default: "Bits Per Subnet" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: "AWS::EC2::VPC" Properties: EnableDnsSupport: "true" EnableDnsHostnames: "true" CidrBlock: !Ref VpcCidr PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" InternetGateway: Type: "AWS::EC2::InternetGateway" GatewayToInternet: Type: "AWS::EC2::VPCGatewayAttachment" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PublicRoute: Type: "AWS::EC2::Route" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Properties: 
AllocationId: "Fn::GetAtt": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: "AWS::EC2::EIP" Properties: Domain: vpc Route: Type: "AWS::EC2::Route" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable2: Type: "AWS::EC2::RouteTable" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: DoAz2 Properties: AllocationId: "Fn::GetAtt": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: "AWS::EC2::EIP" Condition: DoAz2 Properties: Domain: vpc Route2: Type: "AWS::EC2::Route" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable3: Type: "AWS::EC2::RouteTable" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: DoAz3 Properties: AllocationId: "Fn::GetAtt": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: "AWS::EC2::EIP" Condition: DoAz3 Properties: Domain: vpc Route3: Type: "AWS::EC2::Route" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ ",", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ ",", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]] ]
2.1.8. Creating networking and load balancing components in AWS
You must configure networking and load balancing (classic or network) in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. The easiest way to create these components is to modify the provided CloudFormation template, which also creates a hosted zone and subnet tags.
You can run the template multiple times within a single VPC.
If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.
Prerequisites
- Configure an AWS account.
- Generate the Ignition config files for your cluster.
- Create and configure a VPC and associated subnets in AWS.
Procedure
Obtain the Hosted Zone ID for the Route53 zone that you specified in the install-config.yaml file for your cluster. You can obtain this ID from the AWS console or by running the following command:
Important: You must enter the command on a single line.
$ aws route53 list-hosted-zones-by-name | jq --arg name "<route53_domain>." \ 1 -r '.HostedZones | .[] | select(.Name=="\($name)") | .Id'
- 1: For <route53_domain>, specify the Route53 base domain that you used when you generated the install-config.yaml file for the cluster.
Create a JSON file that contains the parameter values that the template requires:
[ { "ParameterKey": "ClusterName", 1 "ParameterValue": "mycluster" 2 }, { "ParameterKey": "InfrastructureName", 3 "ParameterValue": "mycluster-<random_string>" 4 }, { "ParameterKey": "HostedZoneId", 5 "ParameterValue": "<random_string>" 6 }, { "ParameterKey": "HostedZoneName", 7 "ParameterValue": "example.com" 8 }, { "ParameterKey": "PublicSubnets", 9 "ParameterValue": "subnet-<random_string>" 10 }, { "ParameterKey": "PrivateSubnets", 11 "ParameterValue": "subnet-<random_string>" 12 }, { "ParameterKey": "VpcId", 13 "ParameterValue": "vpc-<random_string>" 14 } ]
- 1: A short, representative cluster name to use for host names, etc.
- 2: Specify the cluster name that you used when you generated the install-config.yaml file for the cluster.
- 3: The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
- 4: Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.
- 5: The Route53 public zone ID to register the targets with.
- 6: Specify the Route53 public zone ID, which has a format similar to Z21IXYZABCZ2A4. You can obtain this value from the AWS console.
- 7: The Route53 zone to register the targets with.
- 8: Specify the Route53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console.
- 9: The public subnets that you created for your VPC.
- 10: Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC.
- 11: The private subnets that you created for your VPC.
- 12: Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC.
- 13: The VPC that you created for the cluster.
- 14: Specify the VpcId value from the output of the CloudFormation template for the VPC.
- Copy the template from the CloudFormation template for the network and load balancers section of this topic and save it as a YAML file on your computer. This template describes the networking and load balancing objects that your cluster requires.
Launch the template:
Important: You must enter the command on a single line.
$ aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM
- 1: <name> is the name for the CloudFormation stack, such as cluster-dns. You need the name of this stack if you remove the cluster.
- 2: <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
- 3: <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
Confirm that the template components exist:
$ aws cloudformation describe-stacks --stack-name <name>
After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:
- PrivateHostedZoneId: Hosted zone ID for the private DNS.
- ExternalApiLoadBalancerName: Full name of the external API load balancer.
- InternalApiLoadBalancerName: Full name of the internal API load balancer.
- ApiServerDnsName: Full host name of the API server.
- RegisterNlbIpTargetsLambda: Lambda ARN that is useful to help register and deregister IP targets for these load balancers.
- ExternalApiTargetGroupArn: ARN of the external API target group.
- InternalApiTargetGroupArn: ARN of the internal API target group.
- InternalServiceTargetGroupArn: ARN of the internal service target group.
2.1.8.1. CloudFormation template for the network and load balancers
You can use the following CloudFormation template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster.
AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$ MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$ MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: "example.com" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - ClusterName - InfrastructureName - Label: default: "Network Configuration" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: "DNS" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: "Cluster Name" InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" PublicSubnets: default: "Public Subnets" PrivateSubnets: default: "Private Subnets" HostedZoneName: default: "Public Hosted Zone Name" HostedZoneId: default: "Public Hosted Zone ID" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join ["-", [!Ref InfrastructureName, "ext"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join ["-", [!Ref InfrastructureName, "int"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: "AWS::Route53::HostedZone" Properties: HostedZoneConfig: Comment: "Managed by CloudFormation" Name: !Join [".", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join ["-", [!Ref InfrastructureName, "int"]] - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "owned" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref "AWS::Region" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ ".", ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ ".", ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ ".", ["api-int", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] 
Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join ["-", [!Ref InfrastructureName, "nlb", "lambda", "role"]] AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref InternalApiTargetGroup - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref InternalServiceTargetGroup - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: "AWS::Lambda::Function" Properties: Handler: "index.handler" Role: Fn::GetAtt: - "RegisterTargetLambdaIamRole" - "Arn" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: "python3.7" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join ["-", [!Ref InfrastructureName, "subnet-tags-lambda-role"]] AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "subnet-tagging-policy"]] PolicyDocument: Version: "2012-10-17" Statement: - 
Effect: "Allow" Action: [ "ec2:DeleteTags", "ec2:CreateTags" ] Resource: "arn:aws:ec2:*:*:subnet/*" - Effect: "Allow" Action: [ "ec2:DescribeSubnets", "ec2:DescribeTags" ] Resource: "*" RegisterSubnetTags: Type: "AWS::Lambda::Function" Properties: Handler: "index.handler" Role: Fn::GetAtt: - "RegisterSubnetTagsLambdaIamRole" - "Arn" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: "python3.7" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the External API load balancer created. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the Internal API load balancer created. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [".", ["api-int", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of External API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of Internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of internal service target group. Value: !Ref InternalServiceTargetGroup
2.1.9. Creating security groups and roles in AWS
You must create security groups and roles in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. The easiest way to create these components is to modify the provided CloudFormation template.
If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.
Prerequisites
- Configure an AWS account.
- Generate the Ignition config files for your cluster.
- Create and configure a VPC and associated subnets in AWS.
Procedure
Create a JSON file that contains the parameter values that the template requires:
[ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "VpcCidr", 3 "ParameterValue": "10.0.0.0/16" 4 }, { "ParameterKey": "PrivateSubnets", 5 "ParameterValue": "subnet-<random_string>" 6 }, { "ParameterKey": "VpcId", 7 "ParameterValue": "vpc-<random_string>" 8 } ]
- 1: The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
- 2: Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.
- 3: The CIDR block for the VPC.
- 4: Specify the CIDR block parameter that you used for the VPC that you defined, in the form x.x.x.x/16-24.
- 5: The private subnets that you created for your VPC.
- 6: Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC.
- 7: The VPC that you created for the cluster.
- 8: Specify the VpcId value from the output of the CloudFormation template for the VPC.
- Copy the template from the CloudFormation template for security objects section of this topic and save it as a YAML file on your computer. This template describes the security groups and roles that your cluster requires.
Launch the template:
Important: You must enter the command on a single line.
$ aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM
- 1: <name> is the name for the CloudFormation stack, such as cluster-sec. You need the name of this stack if you remove the cluster.
- 2: <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
- 3: <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
Confirm that the template components exist:
$ aws cloudformation describe-stacks --stack-name <name>
After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:
- MasterSecurityGroupId: Master Security Group ID
- WorkerSecurityGroupId: Worker Security Group ID
- MasterInstanceProfile: Master IAM Instance Profile
- WorkerInstanceProfile: Worker IAM Instance Profile
2.1.9.1. CloudFormation template for security objects
You can use the following CloudFormation template to deploy the security objects that you need for your OpenShift Container Platform cluster.
AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
    Type: String
  VpcCidr:
    AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
    Default: 10.0.0.0/16
    Description: CIDR block for VPC.
    Type: String
  VpcId:
    Description: The VPC-scoped resources will belong to this VPC.
    Type: AWS::EC2::VPC::Id
  PrivateSubnets:
    Description: The internal subnets.
    Type: List<AWS::EC2::Subnet::Id>

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - VpcCidr
      - PrivateSubnets
    ParameterLabels:
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      VpcCidr:
        default: "VPC CIDR"
      PrivateSubnets:
        default: "Private Subnets"

Resources:
  MasterSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster Master Security Group
      SecurityGroupIngress:
      - IpProtocol: icmp
        FromPort: 0
        ToPort: 0
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 6443
        ToPort: 6443
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 22623
        ToPort: 22623
        CidrIp: !Ref VpcCidr
      VpcId: !Ref VpcId
  WorkerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster Worker Security Group
      SecurityGroupIngress:
      - IpProtocol: icmp
        FromPort: 0
        ToPort: 0
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: !Ref VpcCidr
      VpcId: !Ref VpcId
  MasterIngressEtcd:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: etcd
      FromPort: 2379
      ToPort: 2380
      IpProtocol: tcp
  MasterIngressVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp
  MasterIngressWorkerVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp
  MasterIngressInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp
  MasterIngressWorkerInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp
  MasterIngressKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes kubelet, scheduler and controller manager
      FromPort: 10250
      ToPort: 10259
      IpProtocol: tcp
  MasterIngressWorkerKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes kubelet, scheduler and controller manager
      FromPort: 10250
      ToPort: 10259
      IpProtocol: tcp
  MasterIngressIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp
  MasterIngressWorkerIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp
  WorkerIngressVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp
  WorkerIngressWorkerVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp
  WorkerIngressInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp
  WorkerIngressWorkerInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp
  WorkerIngressKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes secure kubelet port
      FromPort: 10250
      ToPort: 10250
      IpProtocol: tcp
  WorkerIngressWorkerKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal Kubernetes communication
      FromPort: 10250
      ToPort: 10250
      IpProtocol: tcp
  WorkerIngressIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp
  WorkerIngressWorkerIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp
  MasterIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "ec2.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action: "ec2:*"
            Resource: "*"
          - Effect: "Allow"
            Action: "elasticloadbalancing:*"
            Resource: "*"
          - Effect: "Allow"
            Action: "iam:PassRole"
            Resource: "*"
          - Effect: "Allow"
            Action: "s3:GetObject"
            Resource: "*"
  MasterInstanceProfile:
    Type: "AWS::IAM::InstanceProfile"
    Properties:
      Roles:
      - Ref: "MasterIamRole"
  WorkerIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "ec2.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "worker", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action: "ec2:Describe*"
            Resource: "*"
  WorkerInstanceProfile:
    Type: "AWS::IAM::InstanceProfile"
    Properties:
      Roles:
      - Ref: "WorkerIamRole"

Outputs:
  MasterSecurityGroupId:
    Description: Master Security Group ID
    Value: !GetAtt MasterSecurityGroup.GroupId
  WorkerSecurityGroupId:
    Description: Worker Security Group ID
    Value: !GetAtt WorkerSecurityGroup.GroupId
  MasterInstanceProfile:
    Description: Master IAM Instance Profile
    Value: !Ref MasterInstanceProfile
  WorkerInstanceProfile:
    Description: Worker IAM Instance Profile
    Value: !Ref WorkerInstanceProfile
2.1.10. RHCOS AMIs for the AWS infrastructure
You must use a valid Red Hat Enterprise Linux CoreOS (RHCOS) AMI for your Amazon Web Services (AWS) zone when you provision your OpenShift Container Platform nodes.
AWS zone | AWS AMI
---|---
2.1.11. Creating the bootstrap node in AWS
You must create the bootstrap node in Amazon Web Services (AWS) to use during OpenShift Container Platform cluster initialization. The easiest way to create this node is to modify the provided CloudFormation template.
If you do not use the provided CloudFormation template to create your bootstrap node, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.
Prerequisites
- Configure an AWS account.
- Generate the Ignition config files for your cluster.
- Create and configure a VPC and associated subnets in AWS.
- Create and configure DNS, load balancers, and listeners in AWS.
- Create control plane and compute roles.
Procedure
Provide a location to serve the bootstrap.ign Ignition config file to your cluster. This file is located in your installation directory. One way to do this is to create an S3 bucket in your cluster's region and upload the Ignition config file to it.
Important: The provided CloudFormation template assumes that the Ignition config files for your cluster are served from an S3 bucket. If you choose to serve the files from another location, you must modify the templates.
Note: The bootstrap Ignition config file does contain secrets, like X.509 keys. The following steps provide basic security for the S3 bucket. To provide additional security, you can enable an S3 bucket policy to allow only certain users, such as the OpenShift IAM user, to access objects that the bucket contains. You can avoid S3 entirely and serve your bootstrap Ignition config file from any address that the bootstrap machine can reach.
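A minimal sketch of such a bucket policy, assuming a hypothetical IAM user arn:aws:iam::123456789012:user/openshift-installer; note that identity-based IAM policies, such as the s3:GetObject grant in the bootstrap instance role, can still allow access independently of the bucket policy:
$ cat > bucket-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowInstallerRead",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:user/openshift-installer" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::<cluster-name>-infra/*"
    }
  ]
}
EOF
$ aws s3api put-bucket-policy --bucket <cluster-name>-infra --policy file://bucket-policy.json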
Create the bucket:
$ aws s3 mb s3://<cluster-name>-infra 1
- 1: <cluster-name>-infra is the bucket name.
Upload the bootstrap.ign Ignition config file to the bucket:
$ aws s3 cp bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign
Verify that the file uploaded:
$ aws s3 ls s3://<cluster-name>-infra/
2019-04-03 16:15:16     314878 bootstrap.ign
Create a JSON file that contains the parameter values that the template requires:
[ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "AllowedBootstrapSshCidr", 5 "ParameterValue": "0.0.0.0/0" 6 }, { "ParameterKey": "PublicSubnet", 7 "ParameterValue": "subnet-<random_string>" 8 }, { "ParameterKey": "MasterSecurityGroupId", 9 "ParameterValue": "sg-<random_string>" 10 }, { "ParameterKey": "VpcId", 11 "ParameterValue": "vpc-<random_string>" 12 }, { "ParameterKey": "BootstrapIgnitionLocation", 13 "ParameterValue": "s3://<bucket_name>/bootstrap.ign" 14 }, { "ParameterKey": "AutoRegisterELB", 15 "ParameterValue": "yes" 16 }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 17 "ParameterValue": "arn:aws:lambda:<region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 18 }, { "ParameterKey": "ExternalApiTargetGroupArn", 19 "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 20 }, { "ParameterKey": "InternalApiTargetGroupArn", 21 "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 22 }, { "ParameterKey": "InternalServiceTargetGroupArn", 23 "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 24 } ]
- 1: The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
- 2: Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>. See the sketch after this list for one way to extract it.
- 3: Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the bootstrap node.
- 4: Specify a valid AWS::EC2::Image::Id value.
- 5: CIDR block to allow SSH access to the bootstrap node.
- 6: Specify a CIDR block in the format x.x.x.x/0-32.
- 7: The public subnet that is associated with your VPC to launch the bootstrap node into.
- 8: Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC.
- 9: The master security group ID, for registering temporary rules.
- 10: Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles.
- 11: The VPC that the created resources will belong to.
- 12: Specify the VpcId value from the output of the CloudFormation template for the VPC.
- 13: The location to fetch the bootstrap Ignition config file from.
- 14: Specify the S3 bucket and file name in the form s3://<bucket_name>/bootstrap.ign.
- 15: Whether or not to register a network load balancer (NLB).
- 16: Specify yes or no. If you specify yes, you must provide a Lambda Amazon Resource Name (ARN) value.
- 17: The ARN for the NLB IP target registration Lambda group.
- 18: Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing.
- 19: The ARN for the external API load balancer target group.
- 20: Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing.
- 21: The ARN for the internal API load balancer target group.
- 22: Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing.
- 23: The ARN for the internal service load balancer target group.
- 24: Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing.
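The infrastructure name is also recorded in the metadata.json file that the installation program writes to your installation directory; a minimal sketch of extracting it, assuming your version of the installation program records an infraID field there and that the jq package is installed:
$ jq -r .infraID <installation_directory>/metadata.json   # prints, for example, mycluster-<random_string>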
- Copy the template from the CloudFormation template for the bootstrap machine section of this topic and save it as a YAML file on your computer. This template describes the bootstrap machine that your cluster requires.
Launch the template:
Important: You must enter the command on a single line.
$ aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM
- 1: <name> is the name for the CloudFormation stack, such as cluster-bootstrap. You need the name of this stack if you remove the cluster.
- 2: <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
- 3: <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
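Rather than polling describe-stacks by hand, you can block until CloudFormation reports success; a sketch using the CLI's built-in waiter, which returns once the stack reaches CREATE_COMPLETE:
$ aws cloudformation wait stack-create-complete --stack-name <name>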
Confirm that the template components exist:
$ aws cloudformation describe-stacks --stack-name <name>
After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:
BootstrapInstanceId: The bootstrap Instance ID.
BootstrapPublicIp: The bootstrap node public IP address.
BootstrapPrivateIp: The bootstrap node private IP address.
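If you script the installation, you can read a single output value directly instead of copying it from the console; a sketch using a JMESPath query, assuming the stack name from the previous step:
$ aws cloudformation describe-stacks --stack-name <name> --query 'Stacks[0].Outputs[?OutputKey==`BootstrapPublicIp`].OutputValue' --output text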
2.1.11.1. CloudFormation template for the bootstrap machine
You can use the following CloudFormation template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster.
AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
    Type: String
  RhcosAmi:
    Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap.
    Type: AWS::EC2::Image::Id
  AllowedBootstrapSshCidr:
    AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/([0-9]|1[0-9]|2[0-9]|3[0-2]))$
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32.
    Default: 0.0.0.0/0
    Description: CIDR block to allow SSH access to the bootstrap node.
    Type: String
  PublicSubnet:
    Description: The public subnet to launch the bootstrap node into.
    Type: AWS::EC2::Subnet::Id
  MasterSecurityGroupId:
    Description: The master security group ID for registering temporary rules.
    Type: AWS::EC2::SecurityGroup::Id
  VpcId:
    Description: The VPC-scoped resources will belong to this VPC.
    Type: AWS::EC2::VPC::Id
  BootstrapIgnitionLocation:
    Default: s3://my-s3-bucket/bootstrap.ign
    Description: Ignition config file location.
    Type: String
  AutoRegisterELB:
    Default: "yes"
    AllowedValues:
    - "yes"
    - "no"
    Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter?
    Type: String
  RegisterNlbIpTargetsLambdaArn:
    Description: ARN for NLB IP target registration lambda.
    Type: String
  ExternalApiTargetGroupArn:
    Description: ARN for external API load balancer target group.
    Type: String
  InternalApiTargetGroupArn:
    Description: ARN for internal API load balancer target group.
    Type: String
  InternalServiceTargetGroupArn:
    Description: ARN for internal service load balancer target group.
    Type: String

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Host Information"
      Parameters:
      - RhcosAmi
      - BootstrapIgnitionLocation
      - MasterSecurityGroupId
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - AllowedBootstrapSshCidr
      - PublicSubnet
    - Label:
        default: "Load Balancer Automation"
      Parameters:
      - AutoRegisterELB
      - RegisterNlbIpTargetsLambdaArn
      - ExternalApiTargetGroupArn
      - InternalApiTargetGroupArn
      - InternalServiceTargetGroupArn
    ParameterLabels:
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      AllowedBootstrapSshCidr:
        default: "Allowed SSH Source"
      PublicSubnet:
        default: "Public Subnet"
      RhcosAmi:
        default: "Red Hat Enterprise Linux CoreOS AMI ID"
      BootstrapIgnitionLocation:
        default: "Bootstrap Ignition Source"
      MasterSecurityGroupId:
        default: "Master Security Group ID"
      AutoRegisterELB:
        default: "Use Provided ELB Automation"

Conditions:
  DoRegistration: !Equals ["yes", !Ref AutoRegisterELB]

Resources:
  BootstrapIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "ec2.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Path: "/"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "bootstrap", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action: "ec2:Describe*"
            Resource: "*"
          - Effect: "Allow"
            Action: "ec2:AttachVolume"
            Resource: "*"
          - Effect: "Allow"
            Action: "ec2:DetachVolume"
            Resource: "*"
          - Effect: "Allow"
            Action: "s3:GetObject"
            Resource: "*"
  BootstrapInstanceProfile:
    Type: "AWS::IAM::InstanceProfile"
    Properties:
      Path: "/"
      Roles:
      - Ref: "BootstrapIamRole"
  BootstrapSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster Bootstrap Security Group
      SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: !Ref AllowedBootstrapSshCidr
      - IpProtocol: tcp
        FromPort: 19531
        ToPort: 19531
        CidrIp: 0.0.0.0/0
      VpcId: !Ref VpcId
  BootstrapInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      IamInstanceProfile: !Ref BootstrapInstanceProfile
      InstanceType: "i3.large"
      NetworkInterfaces:
      - AssociatePublicIpAddress: "true"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "BootstrapSecurityGroup"
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "PublicSubnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"replace":{"source":"${S3Loc}","verification":{}}},"timeouts":{},"version":"2.1.0"},"networkd":{},"passwd":{},"storage":{},"systemd":{}}'
        - { S3Loc: !Ref BootstrapIgnitionLocation }
  RegisterBootstrapApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt BootstrapInstance.PrivateIp
  RegisterBootstrapInternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt BootstrapInstance.PrivateIp
  RegisterBootstrapInternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt BootstrapInstance.PrivateIp

Outputs:
  BootstrapInstanceId:
    Description: Bootstrap Instance ID.
    Value: !Ref BootstrapInstance
  BootstrapPublicIp:
    Description: The bootstrap node public IP address.
    Value: !GetAtt BootstrapInstance.PublicIp
  BootstrapPrivateIp:
    Description: The bootstrap node private IP address.
    Value: !GetAtt BootstrapInstance.PrivateIp
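Before launching the stack, you can optionally syntax-check the saved template file; a sketch using the standard validation call:
$ aws cloudformation validate-template --template-body file://<template>.yaml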
2.1.12. Creating the control plane machines in AWS
You must create the control plane machines in Amazon Web Services (AWS) for your cluster to use. The easiest way to create these nodes is to modify the provided CloudFormation template.
If you do not use the provided CloudFormation template to create your control plane nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.
Prerequisites
- Configure an AWS account.
- Generate the Ignition config files for your cluster.
- Create and configure a VPC and associated subnets in AWS.
- Create and configure DNS, load balancers, and listeners in AWS.
- Create control plane and compute roles.
- Create the bootstrap machine.
Procedure
Create a JSON file that contains the parameter values that the template requires:
[ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "AutoRegisterDNS", 5 "ParameterValue": "yes" 6 }, { "ParameterKey": "PrivateHostedZoneId", 7 "ParameterValue": "<random_string>" 8 }, { "ParameterKey": "PrivateHostedZoneName", 9 "ParameterValue": "mycluster.example.com" 10 }, { "ParameterKey": "Master0Subnet", 11 "ParameterValue": "subnet-<random_string>" 12 }, { "ParameterKey": "Master1Subnet", 13 "ParameterValue": "subnet-<random_string>" 14 }, { "ParameterKey": "Master2Subnet", 15 "ParameterValue": "subnet-<random_string>" 16 }, { "ParameterKey": "MasterSecurityGroupId", 17 "ParameterValue": "sg-<random_string>" 18 }, { "ParameterKey": "IgnitionLocation", 19 "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/master" 20 }, { "ParameterKey": "CertificateAuthorities", 21 "ParameterValue": "data:text/plain;charset=utf-8;base64,ABC...xYz==" 22 }, { "ParameterKey": "MasterInstanceProfileName", 23 "ParameterValue": "<roles_stack>-MasterInstanceProfile-<random_string>" 24 }, { "ParameterKey": "MasterInstanceType", 25 "ParameterValue": "m4.xlarge" 26 }, { "ParameterKey": "AutoRegisterELB", 27 "ParameterValue": "yes" 28 }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 29 "ParameterValue": "arn:aws:lambda:<region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 30 }, { "ParameterKey": "ExternalApiTargetGroupArn", 31 "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 32 }, { "ParameterKey": "InternalApiTargetGroupArn", 33 "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 34 }, { "ParameterKey": "InternalServiceTargetGroupArn", 35 "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 36 } ]
- 1: The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
- 2: Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.
- 3: Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the control plane machines.
- 4: Specify an AWS::EC2::Image::Id value.
- 5: Whether or not to perform DNS etcd registration.
- 6: Specify yes or no. If you specify yes, you must provide Hosted Zone information.
- 7: The Route53 private zone ID to register the etcd targets with.
- 8: Specify the PrivateHostedZoneId value from the output of the CloudFormation template for DNS and load balancing.
- 9: The Route53 zone to register the targets with.
- 10: Specify <cluster_name>.<domain_name>, where <domain_name> is the Route53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console.
- 11 13 15: A subnet, preferably private, to launch the control plane machines on.
- 12 14 16: Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing.
- 17: The master security group ID to associate with master nodes.
- 18: Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles.
- 19: The location to fetch the control plane Ignition config file from.
- 20: Specify the generated Ignition config file location, https://api-int.<cluster_name>.<domain_name>:22623/config/master.
- 21: The base64 encoded certificate authority string to use.
- 22: Specify the value from the master.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC…xYz==. See the sketch after this list for one way to extract it.
- 23: The IAM profile to associate with master nodes.
- 24: Specify the MasterInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles.
- 25: The type of AWS instance to use for the control plane machines.
- 26: Allowed values:
  - m4.xlarge
  - m4.2xlarge
  - m4.4xlarge
  - m4.8xlarge
  - m4.10xlarge
  - m4.16xlarge
  - c4.2xlarge
  - c4.4xlarge
  - c4.8xlarge
  - r4.xlarge
  - r4.2xlarge
  - r4.4xlarge
  - r4.8xlarge
  - r4.16xlarge
  Important: If m4 instance types are not available in your region, such as with eu-west-3, specify an m5 type, such as m5.xlarge, instead.
- 27: Whether or not to register a network load balancer (NLB).
- 28: Specify yes or no. If you specify yes, you must provide a Lambda Amazon Resource Name (ARN) value.
- 29: The ARN for the NLB IP target registration Lambda group.
- 30: Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing.
- 31: The ARN for the external API load balancer target group.
- 32: Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing.
- 33: The ARN for the internal API load balancer target group.
- 34: Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing.
- 35: The ARN for the internal service load balancer target group.
- 36: Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing.
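One way to extract the CertificateAuthorities value referenced in callout 22, a minimal sketch that assumes the jq package is installed and that master.ign uses the Ignition 2.2.0 schema shown in the template that follows:
$ jq -r '.ignition.security.tls.certificateAuthorities[0].source' <installation_directory>/master.ign   # prints data:text/plain;charset=utf-8;base64,...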
- Copy the template from the CloudFormation template for control plane machines section of this topic and save it as a YAML file on your computer. This template describes the control plane machines that your cluster requires.
- If you specified an m5 instance type as the value for MasterInstanceType, add that instance type to the MasterInstanceType.AllowedValues parameter in the CloudFormation template.
Launch the template:
Important: You must enter the command on a single line.
$ aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3
- 1: <name> is the name for the CloudFormation stack, such as cluster-control-plane. You need the name of this stack if you remove the cluster.
- 2: <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
- 3: <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
Confirm that the template components exist:
$ aws cloudformation describe-stacks --stack-name <name>
2.1.12.1. CloudFormation template for control plane machines
You can use the following CloudFormation template to deploy the control plane machines that you need for your OpenShift Container Platform cluster.
AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Node Launch (EC2 master instances)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider.
    Type: String
  RhcosAmi:
    Description: Current Red Hat Enterprise Linux CoreOS AMI to use for the control plane machines.
    Type: AWS::EC2::Image::Id
  AutoRegisterDNS:
    Default: "yes"
    AllowedValues:
    - "yes"
    - "no"
    Description: Do you want to invoke DNS etcd registration, which requires Hosted Zone information?
    Type: String
  PrivateHostedZoneId:
    Description: The Route53 private zone ID to register the etcd targets with, such as Z21IXYZABCZ2A4.
    Type: String
  PrivateHostedZoneName:
    Description: The Route53 zone to register the targets with, such as cluster.example.com. Omit the trailing period.
    Type: String
  Master0Subnet:
    Description: The subnet, preferably private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  Master1Subnet:
    Description: The subnet, preferably private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  Master2Subnet:
    Description: The subnet, preferably private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  MasterSecurityGroupId:
    Description: The master security group ID to associate with master nodes.
    Type: AWS::EC2::SecurityGroup::Id
  IgnitionLocation:
    Default: https://api-int.$CLUSTER_NAME.$DOMAIN:22623/config/master
    Description: Ignition config file location.
    Type: String
  CertificateAuthorities:
    Default: data:text/plain;charset=utf-8;base64,ABC...xYz==
    Description: Base64 encoded certificate authority string to use.
    Type: String
  MasterInstanceProfileName:
    Description: IAM profile to associate with master nodes.
    Type: String
  MasterInstanceType:
    Default: m4.xlarge
    Type: String
    AllowedValues:
    - "m4.xlarge"
    - "m4.2xlarge"
    - "m4.4xlarge"
    - "m4.8xlarge"
    - "m4.10xlarge"
    - "m4.16xlarge"
    - "c4.2xlarge"
    - "c4.4xlarge"
    - "c4.8xlarge"
    - "r4.xlarge"
    - "r4.2xlarge"
    - "r4.4xlarge"
    - "r4.8xlarge"
    - "r4.16xlarge"
  AutoRegisterELB:
    Default: "yes"
    AllowedValues:
    - "yes"
    - "no"
    Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter?
    Type: String
  RegisterNlbIpTargetsLambdaArn:
    Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String
  ExternalApiTargetGroupArn:
    Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String
  InternalApiTargetGroupArn:
    Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String
  InternalServiceTargetGroupArn:
    Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Host Information"
      Parameters:
      - MasterInstanceType
      - RhcosAmi
      - IgnitionLocation
      - CertificateAuthorities
      - MasterSecurityGroupId
      - MasterInstanceProfileName
    - Label:
        default: "Network Configuration"
      Parameters:
      - Master0Subnet
      - Master1Subnet
      - Master2Subnet
    - Label:
        default: "DNS"
      Parameters:
      - AutoRegisterDNS
      - PrivateHostedZoneName
      - PrivateHostedZoneId
    - Label:
        default: "Load Balancer Automation"
      Parameters:
      - AutoRegisterELB
      - RegisterNlbIpTargetsLambdaArn
      - ExternalApiTargetGroupArn
      - InternalApiTargetGroupArn
      - InternalServiceTargetGroupArn
    ParameterLabels:
      InfrastructureName:
        default: "Infrastructure Name"
      Master0Subnet:
        default: "Master-0 Subnet"
      Master1Subnet:
        default: "Master-1 Subnet"
      Master2Subnet:
        default: "Master-2 Subnet"
      MasterInstanceType:
        default: "Master Instance Type"
      MasterInstanceProfileName:
        default: "Master Instance Profile Name"
      RhcosAmi:
        default: "Red Hat Enterprise Linux CoreOS AMI ID"
      IgnitionLocation:
        default: "Master Ignition Source"
      CertificateAuthorities:
        default: "Ignition CA String"
      MasterSecurityGroupId:
        default: "Master Security Group ID"
      AutoRegisterDNS:
        default: "Use Provided DNS Automation"
      AutoRegisterELB:
        default: "Use Provided ELB Automation"
      PrivateHostedZoneName:
        default: "Private Hosted Zone Name"
      PrivateHostedZoneId:
        default: "Private Hosted Zone ID"

Conditions:
  DoRegistration: !Equals ["yes", !Ref AutoRegisterELB]
  DoDns: !Equals ["yes", !Ref AutoRegisterDNS]

Resources:
  Master0:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref MasterInstanceProfileName
      InstanceType: !Ref MasterInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "Master0Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"append":[{"source":"${SOURCE}","verification":{}}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}","verification":{}}]}},"timeouts":{},"version":"2.2.0"},"networkd":{},"passwd":{},"storage":{},"systemd":{}}'
        - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"
  RegisterMaster0:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt Master0.PrivateIp
  RegisterMaster0InternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt Master0.PrivateIp
  RegisterMaster0InternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt Master0.PrivateIp
  Master1:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref MasterInstanceProfileName
      InstanceType: !Ref MasterInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "Master1Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"append":[{"source":"${SOURCE}","verification":{}}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}","verification":{}}]}},"timeouts":{},"version":"2.2.0"},"networkd":{},"passwd":{},"storage":{},"systemd":{}}'
        - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"
  RegisterMaster1:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt Master1.PrivateIp
  RegisterMaster1InternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt Master1.PrivateIp
  RegisterMaster1InternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt Master1.PrivateIp
  Master2:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref MasterInstanceProfileName
      InstanceType: !Ref MasterInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "Master2Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"append":[{"source":"${SOURCE}","verification":{}}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}","verification":{}}]}},"timeouts":{},"version":"2.2.0"},"networkd":{},"passwd":{},"storage":{},"systemd":{}}'
        - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"
  RegisterMaster2:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt Master2.PrivateIp
  RegisterMaster2InternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt Master2.PrivateIp
  RegisterMaster2InternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt Master2.PrivateIp
  EtcdSrvRecords:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["_etcd-server-ssl._tcp", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !Join [" ", ["0 10 2380", !Join [".", ["etcd-0", !Ref PrivateHostedZoneName]]]]
      - !Join [" ", ["0 10 2380", !Join [".", ["etcd-1", !Ref PrivateHostedZoneName]]]]
      - !Join [" ", ["0 10 2380", !Join [".", ["etcd-2", !Ref PrivateHostedZoneName]]]]
      TTL: 60
      Type: SRV
  Etcd0Record:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["etcd-0", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !GetAtt Master0.PrivateIp
      TTL: 60
      Type: A
  Etcd1Record:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["etcd-1", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !GetAtt Master1.PrivateIp
      TTL: 60
      Type: A
  Etcd2Record:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["etcd-2", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !GetAtt Master2.PrivateIp
      TTL: 60
      Type: A

Outputs:
  PrivateIPs:
    Description: The control-plane node private IP addresses.
    Value: !Join [",", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp]]
2.1.13. Initializing the bootstrap node on AWS with user-provisioned infrastructure
After you create all of the required infrastructure in Amazon Web Services (AWS), you can install the cluster.
Prerequisites
- Configure an AWS account.
- Generate the Ignition config files for your cluster.
- Create and configure a VPC and associated subnets in AWS.
- Create and configure DNS, load balancers, and listeners in AWS.
- Create control plane and compute roles.
- Create the bootstrap machine.
- Create the control plane machines.
- If you plan to manually manage the worker machines, create the worker machines.
Procedure
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install wait-for bootstrap-complete --dir=<installation_directory> \ 1
    --log-level info 2
- 1: For <installation_directory>, specify the path to the directory that you stored the installation files in.
- 2: To view different installation details, specify warn, debug, or error instead of info.
If the command exits without a FATAL warning, your production control plane has initialized.
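If initialization stalls, you can watch the bootstrap process directly; a sketch, assuming that you embedded an SSH key for the core user in your Ignition configs and substituting the BootstrapPublicIp output from the bootstrap stack:
$ ssh core@<bootstrap_public_ip> journalctl -b -f -u bootkube.service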
2.1.13.1. Creating the worker nodes in AWS
You can create worker nodes in Amazon Web Services (AWS) for your cluster to use. The easiest way to manually create these nodes is to modify the provided CloudFormation template.
The CloudFormation template creates a stack that represents one worker machine. You must create a stack for each worker machine.
If you do not use the provided CloudFormation template to create your worker nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.
Prerequisites
- Configure an AWS account.
- Generate the Ignition config files for your cluster.
- Create and configure a VPC and associated subnets in AWS.
- Create and configure DNS, load balancers, and listeners in AWS.
- Create control plane and compute roles.
- Create the bootstrap machine.
- Create the control plane machines.
Procedure
Create a JSON file that contains the parameter values that the CloudFormation template requires:
[ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "Subnet", 5 "ParameterValue": "subnet-<random_string>" 6 }, { "ParameterKey": "WorkerSecurityGroupId", 7 "ParameterValue": "sg-<random_string>" 8 }, { "ParameterKey": "IgnitionLocation", 9 "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/worker" 10 }, { "ParameterKey": "CertificateAuthorities", 11 "ParameterValue": "" 12 }, { "ParameterKey": "WorkerInstanceProfileName", 13 "ParameterValue": "" 14 }, { "ParameterKey": "WorkerInstanceType", 15 "ParameterValue": "m4.large" 16 } ]
- 1: The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
- 2: Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.
- 3: Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the worker nodes.
- 4: Specify an AWS::EC2::Image::Id value.
- 5: A subnet, preferably private, to launch the worker nodes on.
- 6: Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing.
- 7: The worker security group ID to associate with worker nodes.
- 8: Specify the WorkerSecurityGroupId value from the output of the CloudFormation template for the security group and roles.
- 9: The location to fetch the worker Ignition config file from.
- 10: Specify the generated Ignition config location, https://api-int.<cluster_name>.<domain_name>:22623/config/worker.
- 11: The base64 encoded certificate authority string to use.
- 12: Specify the value from the worker.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC…xYz==.
- 13: The IAM profile to associate with worker nodes.
- 14: Specify the WorkerInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles.
- 15: The type of AWS instance to use for the worker nodes.
- 16: Allowed values:
  - m4.large
  - m4.xlarge
  - m4.2xlarge
  - m4.4xlarge
  - m4.8xlarge
  - m4.10xlarge
  - m4.16xlarge
  - c4.large
  - c4.xlarge
  - c4.2xlarge
  - c4.4xlarge
  - c4.8xlarge
  - r4.large
  - r4.xlarge
  - r4.2xlarge
  - r4.4xlarge
  - r4.8xlarge
  - r4.16xlarge
  Important: If m4 instance types are not available in your region, such as with eu-west-3, use m5 types instead.
- Copy the template from the CloudFormation template for worker machines section of this topic and save it as a YAML file on your computer. This template describes the worker machines that your cluster requires.
- If you specified an m5 instance type as the value for WorkerInstanceType, add that instance type to the WorkerInstanceType.AllowedValues parameter in the CloudFormation template.
Create a worker stack by launching the template:
Important: You must enter the command on a single line.
$ aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3
- 1: <name> is the name for the CloudFormation stack, such as cluster-workers. You need the name of this stack if you remove the cluster.
- 2: <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
- 3: <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
Confirm that the template components exist:
$ aws cloudformation describe-stacks --stack-name <name>
Continue to create worker stacks until you have created enough worker machines for your cluster.
Important: You must create at least two worker machines, so you must create at least two stacks that use this CloudFormation template.
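Because each stack creates exactly one worker, the repetition is easy to script; a sketch that launches two hypothetical stacks, cluster-worker-0 and cluster-worker-1, from one shared parameters file. Note that this places both workers in the same subnet; vary the Subnet parameter per stack to spread workers across Availability Zones:
$ for i in 0 1; do aws cloudformation create-stack --stack-name cluster-worker-$i --template-body file://<template>.yaml --parameters file://<parameters>.json; done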
2.1.13.1.1. CloudFormation template for worker machines
You can use the following CloudFormation template to deploy the worker machines that you need for your OpenShift Container Platform cluster.
AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Node Launch (EC2 worker instance)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider.
    Type: String
  RhcosAmi:
    Description: Current Red Hat Enterprise Linux CoreOS AMI to use for the worker node.
    Type: AWS::EC2::Image::Id
  Subnet:
    Description: The subnet, preferably private, to launch the worker node into.
    Type: AWS::EC2::Subnet::Id
  WorkerSecurityGroupId:
    Description: The worker security group ID to associate with worker nodes.
    Type: AWS::EC2::SecurityGroup::Id
  IgnitionLocation:
    Default: https://api-int.$CLUSTER_NAME.$DOMAIN:22623/config/worker
    Description: Ignition config file location.
    Type: String
  CertificateAuthorities:
    Default: data:text/plain;charset=utf-8;base64,ABC...xYz==
    Description: Base64 encoded certificate authority string to use.
    Type: String
  WorkerInstanceProfileName:
    Description: IAM profile to associate with worker nodes.
    Type: String
  WorkerInstanceType:
    Default: m4.large
    Type: String
    AllowedValues:
    - "m4.large"
    - "m4.xlarge"
    - "m4.2xlarge"
    - "m4.4xlarge"
    - "m4.8xlarge"
    - "m4.10xlarge"
    - "m4.16xlarge"
    - "c4.large"
    - "c4.xlarge"
    - "c4.2xlarge"
    - "c4.4xlarge"
    - "c4.8xlarge"
    - "r4.large"
    - "r4.xlarge"
    - "r4.2xlarge"
    - "r4.4xlarge"
    - "r4.8xlarge"
    - "r4.16xlarge"

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Host Information"
      Parameters:
      - WorkerInstanceType
      - RhcosAmi
      - IgnitionLocation
      - CertificateAuthorities
      - WorkerSecurityGroupId
      - WorkerInstanceProfileName
    - Label:
        default: "Network Configuration"
      Parameters:
      - Subnet
    ParameterLabels:
      Subnet:
        default: "Subnet"
      InfrastructureName:
        default: "Infrastructure Name"
      WorkerInstanceType:
        default: "Worker Instance Type"
      WorkerInstanceProfileName:
        default: "Worker Instance Profile Name"
      RhcosAmi:
        default: "Red Hat Enterprise Linux CoreOS AMI ID"
      IgnitionLocation:
        default: "Worker Ignition Source"
      CertificateAuthorities:
        default: "Ignition CA String"
      WorkerSecurityGroupId:
        default: "Worker Security Group ID"

Resources:
  Worker0:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref WorkerInstanceProfileName
      InstanceType: !Ref WorkerInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "WorkerSecurityGroupId"
        SubnetId: !Ref "Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"append":[{"source":"${SOURCE}","verification":{}}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}","verification":{}}]}},"timeouts":{},"version":"2.2.0"},"networkd":{},"passwd":{},"storage":{},"systemd":{}}'
        - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

Outputs:
  PrivateIP:
    Description: The compute node private IP address.
    Value: !GetAtt Worker0.PrivateIp
2.1.14. Installing the OpenShift Command-line Interface
You can download and install the OpenShift Command-line Interface (CLI), commonly known as oc.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.1. You must download and install the new version of oc.
Procedure
- From the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site, navigate to the page for your installation type and click Download Command-line Tools.
From the site that is displayed, download the compressed file for your operating system.
Note: You can install oc on Linux, Windows, or macOS.
- Extract the compressed file and place it in a directory that is on your PATH.
2.1.15. Logging in to the cluster
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- Deploy an OpenShift Container Platform cluster.
- Install the oc CLI.
Procedure
Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
$ oc whoami
system:admin
- 1: For <installation_directory>, specify the path to the directory that you stored the installation files in.
2.1.16. Approving the CSRs for your machines
When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself.
Prerequisites
- You added machines to your cluster.
- Install the jq package.
Procedure
Confirm that the cluster recognizes the machines:
$ oc get nodes
NAME       STATUS     ROLES    AGE   VERSION
master-0   Ready      master   63m   v1.13.4+b626c2fe1
master-1   Ready      master   63m   v1.13.4+b626c2fe1
master-2   Ready      master   64m   v1.13.4+b626c2fe1
worker-0   NotReady   worker   76s   v1.13.4+b626c2fe1
worker-1   NotReady   worker   70s   v1.13.4+b626c2fe1
The output lists all of the machines that you created.
Review the pending certificate signing requests (CSRs) and ensure that you see a client and server request with Pending or Approved status for each machine that you added to the cluster:
$ oc get csr
NAME        AGE     REQUESTOR                                                                   CONDITION
csr-8b2br   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-8vnps   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal                       Pending
csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal                       Pending
...
In this example, two machines are joining the cluster. You might see more approved CSRs in the list.
If the CSRs were not approved automatically, approve the CSRs for your cluster machines after all of the pending CSRs for the machines that you added are in Pending status:
Note: Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After you approve the initial CSRs, the subsequent node client CSRs are automatically approved by the cluster kube-controller-manager. You must implement a method of automatically approving the kubelet serving certificate requests.
To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name> 1
- 1: <csr_name> is the name of a CSR from the list of current CSRs.
If all the CSRs are valid, approve them all by running the following command:
$ oc get csr -ojson | jq -r '.items[] | select(.status == {} ) | .metadata.name' | xargs oc adm certificate approve
2.1.17. Initial Operator configuration
After the control plane initializes, you must immediately configure some Operators so that they all become available.
Prerequisites
- Your control plane has initialized.
Procedure
Watch the cluster components come online:
$ watch -n5 oc get clusteroperators
NAME                                 VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                       4.1.0     True        False         False      69s
cloud-credential                     4.1.0     True        False         False      12m
cluster-autoscaler                   4.1.0     True        False         False      11m
console                              4.1.0     True        False         False      46s
dns                                  4.1.0     True        False         False      11m
image-registry                       4.1.0     False       True          False      5m26s
ingress                              4.1.0     True        False         False      5m36s
kube-apiserver                       4.1.0     True        False         False      8m53s
kube-controller-manager              4.1.0     True        False         False      7m24s
kube-scheduler                       4.1.0     True        False         False      12m
machine-api                          4.1.0     True        False         False      12m
machine-config                       4.1.0     True        False         False      7m36s
marketplace                          4.1.0     True        False         False      7m54s
monitoring                           4.1.0     True        False         False      7m54s
network                              4.1.0     True        False         False      5m9s
node-tuning                          4.1.0     True        False         False      11m
openshift-apiserver                  4.1.0     True        False         False      11m
openshift-controller-manager         4.1.0     True        False         False      5m43s
openshift-samples                    4.1.0     True        False         False      3m55s
operator-lifecycle-manager           4.1.0     True        False         False      11m
operator-lifecycle-manager-catalog   4.1.0     True        False         False      11m
service-ca                           4.1.0     True        False         False      11m
service-catalog-apiserver            4.1.0     True        False         False      5m26s
service-catalog-controller-manager   4.1.0     True        False         False      5m25s
storage                              4.1.0     True        False         False      5m30s
- Configure the Operators that are not available.
2.1.17.1. Image registry storage configuration
If the image-registry Operator is not available, you must configure storage for it. Instructions are shown both for configuring a PersistentVolume, which is required for production clusters, and for configuring an empty directory as the storage location, which is available only for non-production clusters.
2.1.17.1.1. Configuring registry storage for AWS with user-provisioned infrastructure
During installation, your cloud credentials are sufficient to create an S3 bucket, and the Registry Operator automatically configures storage.
If the Registry Operator cannot create an S3 bucket and automatically configure storage, you can create an S3 bucket and configure storage with the following procedure.
Prerequisites
- A cluster on AWS with user-provisioned infrastructure.
Procedure
Use the following procedure if the Registry Operator cannot create an S3 bucket and automatically configure storage.
- Set up a Bucket Lifecycle Policy to abort incomplete multipart uploads that are one day old.
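A sketch of one such lifecycle configuration applied with the AWS CLI, assuming a hypothetical registry bucket named image-registry-bucket:
$ cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "AbortIncompleteUploads",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 1 }
    }
  ]
}
EOF
$ aws s3api put-bucket-lifecycle-configuration --bucket image-registry-bucket --lifecycle-configuration file://lifecycle.json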
Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster:
$ oc edit configs.imageregistry.operator.openshift.io/cluster
storage:
  s3:
    bucket: <bucket-name>
    region: <region-name>
To secure your registry images in AWS, block public access to the S3 bucket.
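A sketch of blocking public access with the AWS CLI, assuming the same hypothetical bucket name:
$ aws s3api put-public-access-block --bucket image-registry-bucket --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true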
2.1.17.1.2. Configuring storage for the image registry in non-production clusters
You must configure storage for the image registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry.
Procedure
To set the image registry storage to an empty directory:
$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'
Warning: Configure this option for only non-production clusters.
If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error:
Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found
Wait a few minutes and run the command again.
2.1.18. Completing an AWS installation on user-provisioned infrastructure
After you start the OpenShift Container Platform installation on Amazon Web Services (AWS) user-provisioned infrastructure, remove the bootstrap node, and wait for the installation to complete.
Prerequisites
- Deploy the bootstrap node for an OpenShift Container Platform cluster on user-provisioned AWS infrastructure.
- Install the oc CLI and log in.
Procedure
Delete the bootstrap resources. If you used the CloudFormation template, delete its stack:
$ aws cloudformation delete-stack --stack-name <name> 1
- 1: <name> is the name of your bootstrap stack.
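Optionally, you can block until the bootstrap stack is fully deleted before completing the installation; a sketch using the CLI's built-in waiter:
$ aws cloudformation wait stack-delete-complete --stack-name <name>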
Complete the cluster installation:
$ ./openshift-install --dir=<installation_directory> wait-for install-complete 1
INFO Waiting up to 30m0s for the cluster to initialize...
- 1: For <installation_directory>, specify the path to the directory that you stored the installation files in.
Important: The Ignition config files that the installation program generates contain certificates that expire after 24 hours. You must keep the cluster running for 24 hours in a non-degraded state to ensure that the first certificate rotation has finished.
Next steps
- Customize your cluster.
- If necessary, you can opt out of telemetry.