Chapter 15. Installing a cluster on AWS with compute nodes on AWS Wavelength Zones
You can quickly install an OpenShift Container Platform cluster on Amazon Web Services (AWS) Wavelength Zones by setting the zone names in the edge compute pool of the install-config.yaml
file, or install a cluster in an existing Amazon Virtual Private Cloud (VPC) with Wavelength Zone subnets.
AWS Wavelength Zones are infrastructure that AWS configures for mobile edge computing (MEC) applications.
A Wavelength Zone embeds AWS compute and storage services within the 5G network of a communication service provider (CSP). By placing application servers in a Wavelength Zone, the application traffic from your 5G devices can stay in the 5G network. The application traffic of the device reaches the target server directly, which reduces latency.
Additional resources
- See Wavelength Zones in the AWS documentation.
15.1. Infrastructure prerequisites
- You reviewed details about OpenShift Container Platform installation and update processes.
- You are familiar with Selecting a cluster installation method and preparing it for users.
- You configured an AWS account to host the cluster.
Warning: If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.
- You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or UNIX) in the AWS documentation.
- If you use a firewall, you configured it to allow the sites that your cluster must access.
- You noted the region and supported AWS Wavelength Zone locations to create the network resources in.
- You read AWS Wavelength features in the AWS documentation.
- You read the Quotas and considerations for Wavelength Zones in the AWS documentation.
- You added permissions for creating network resources that support AWS Wavelength Zones to the Identity and Access Management (IAM) user or role. For example:
Example of an additional IAM policy that attaches the ec2:ModifyAvailabilityZoneGroup, ec2:CreateCarrierGateway, and ec2:DeleteCarrierGateway permissions to a user or role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DeleteCarrierGateway",
        "ec2:CreateCarrierGateway"
      ],
      "Resource": "*"
    },
    {
      "Action": [
        "ec2:ModifyAvailabilityZoneGroup"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
15.2. About AWS Wavelength Zones and edge compute pool
Read the following sections to understand infrastructure behaviors and cluster limitations in an AWS Wavelength Zones environment.
15.2.1. Cluster limitations in AWS Wavelength Zones
Some limitations exist when you try to deploy a cluster with a default installation configuration in an Amazon Web Services (AWS) Wavelength Zone.
The following list details limitations when deploying a cluster in a pre-configured AWS zone:
- The maximum transmission unit (MTU) between an Amazon EC2 instance in a zone and an Amazon EC2 instance in the Region is 1300. This causes the cluster-wide network MTU to change according to the network plugin that is used with the deployment.
- Network resources such as Network Load Balancer (NLB), Classic Load Balancer, and Network Address Translation (NAT) Gateways are not globally supported.
- For an OpenShift Container Platform cluster on AWS, the AWS Elastic Block Storage (EBS) gp3 volume type is the default for node volumes and the default for the storage class. This volume type is not globally available in zone locations. By default, the nodes running in zones are deployed with the gp2 EBS volume. The gp2-csi StorageClass parameter must be set when creating workloads on zone nodes, as shown in the sketch after this list.
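For example, a workload that runs on a zone node can request storage from the gp2-csi storage class by naming it in a persistent volume claim. The following is a minimal sketch; the claim name, namespace, and size are illustrative values only:

$ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: edge-data            # hypothetical claim name
  namespace: my-edge-app     # hypothetical namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: gp2-csi  # storage class that is available on zone nodes
EOF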
If you want the installation program to automatically create Wavelength Zone subnets for your OpenShift Container Platform cluster, specific configuration limitations apply. For other limitations, read the "Quotas and considerations for Wavelength Zones" document in the AWS documentation, which is referenced in the "Infrastructure prerequisites" section.
The following configuration limitations apply when you set the installation program to automatically create subnets for your OpenShift Container Platform cluster:
- When the installation program creates private subnets in AWS Wavelength Zones, the program associates each subnet with the route table of its parent zone. This operation ensures that each private subnet can route egress traffic to the internet by way of NAT Gateways in an AWS Region.
- If the parent-zone route table does not exist during cluster installation, the installation program associates any private subnet with the first available private route table in the Amazon Virtual Private Cloud (VPC). This approach is valid only for AWS Wavelength Zones subnets in an OpenShift Container Platform cluster.
15.2.2. About edge compute pools
Edge compute nodes are tainted compute nodes that run in AWS Wavelength Zones locations.
When deploying a cluster that uses Wavelength Zones, consider the following points:
- Amazon EC2 instances in the Wavelength Zones are more expensive than Amazon EC2 instances in the Availability Zones.
- The latency is lower between the applications running in AWS Wavelength Zones and the end user. A latency impact exists for some workloads if, for example, ingress traffic is mixed between Wavelength Zones and Availability Zones.
Generally, the maximum transmission unit (MTU) between an Amazon EC2 instance in a Wavelength Zone and an Amazon EC2 instance in the Region is 1300. The cluster network MTU must always be less than the EC2 MTU to account for the overhead. The specific overhead is determined by the network plugin. For example, OVN-Kubernetes has an overhead of 100 bytes.

The network plugin can provide additional features, such as IPsec, that also affect the MTU sizing.

For more information, see How AWS Wavelength works in the AWS documentation.
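After installation, you can confirm the MTU that the network plugin selected for the cluster network. The following command is a quick optional check, assuming the clusterNetworkMTU status field is populated by the Cluster Network Operator:

$ oc get networks.config.openshift.io cluster -o jsonpath='{.status.clusterNetworkMTU}{"\n"}'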
OpenShift Container Platform 4.12 introduced a new compute pool, edge, that is designed for use in remote zones. The edge compute pool configuration is common between AWS Wavelength Zones locations. Because of the type and size limitations of resources like EC2 and EBS on Wavelength Zones resources, the default instance type can vary from the traditional compute pool.
The default Elastic Block Store (EBS) volume type for Wavelength Zones locations is gp2, which differs from the non-edge compute pool. The instance type used for each Wavelength Zone in an edge compute pool also might differ from other compute pools, depending on the instance offerings in the zone.
The edge compute pool creates new labels that developers can use to deploy applications onto AWS Wavelength Zones nodes. The new labels are:
- node-role.kubernetes.io/edge=''
- machine.openshift.io/zone-type=wavelength-zone
- machine.openshift.io/zone-group=$ZONE_GROUP_NAME

By default, the machine sets for the edge compute pool define a NoSchedule taint to prevent other workloads from spreading onto Wavelength Zones instances. Users can run workloads on those nodes only if they define tolerations in the pod specification, as in the sketch that follows.
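A pod spec that targets edge nodes therefore needs both a node selector for the edge role and a toleration for the taint. The following is a minimal sketch that assumes the default node-role.kubernetes.io/edge taint key; the deployment name, namespace, and image are placeholder values:

$ cat <<EOF | oc apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-app             # hypothetical name
  namespace: my-edge-app     # hypothetical namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: edge-app
  template:
    metadata:
      labels:
        app: edge-app
    spec:
      nodeSelector:
        node-role.kubernetes.io/edge: ''     # schedule onto edge compute nodes
      tolerations:
      - key: node-role.kubernetes.io/edge    # tolerate the taint that the edge machine sets define
        operator: Exists
        effect: NoSchedule
      containers:
      - name: app
        image: registry.example.com/edge-app:latest   # placeholder image
EOF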
15.3. Installation prerequisites
Before you install a cluster in an AWS Wavelength Zones environment, you must configure your infrastructure so that it can adopt Wavelength Zone capabilities.
15.3.1. Opting in to AWS Wavelength Zones
If you plan to create subnets in AWS Wavelength Zones, you must opt in to each zone group separately.
Prerequisites
- You have installed the AWS CLI.
- You have determined an AWS Region for where you want to deploy your OpenShift Container Platform cluster.
- You have attached a permissive IAM policy to a user or role account that opts in to the zone group.
Procedure
List the zones that are available in your AWS Region by running the following command:
Example command for listing available AWS Wavelength Zones in an AWS Region
$ aws --region "<value_of_AWS_Region>" ec2 describe-availability-zones \
    --query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' \
    --filters Name=zone-type,Values=wavelength-zone \
    --all-availability-zones
Depending on the AWS Region, the list of available zones might be long. The command returns the following fields:
ZoneName
- The name of the Wavelength Zone.
GroupName
- The group that comprises the zone. To opt in to the Region, save the name.
Status
- The status of the Wavelength Zones group. If the status is not-opted-in, you must opt in the GroupName as described in the next step.
Opt in to the zone group on your AWS account by running the following command:
$ aws ec2 modify-availability-zone-group \
    --group-name "<value_of_GroupName>" \ 1
    --opt-in-status opted-in
- 1
- Replace <value_of_GroupName> with the name of the group of the Wavelength Zones where you want to create subnets. As an example for Wavelength Zones, specify us-east-1-wl1 to use the zone us-east-1-wl1-nyc-wlz-1 (US East New York).
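To confirm that the opt-in took effect, you can rerun the describe call and check that the status changed to opted-in. This is an optional verification step:

$ aws --region "<value_of_AWS_Region>" ec2 describe-availability-zones \
    --filters Name=zone-type,Values=wavelength-zone \
    --query 'AvailabilityZones[].[ZoneName, OptInStatus]' \
    --all-availability-zones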
15.3.2. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.15, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
15.3.3. Obtaining an AWS Marketplace image
If you are deploying an OpenShift Container Platform cluster using an AWS Marketplace image, you must first subscribe through AWS. Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy compute nodes.
Prerequisites
- You have an AWS account to purchase the offer. This account does not have to be the same account that is used to install the cluster.
Procedure
- Complete the OpenShift Container Platform subscription from the AWS Marketplace.
Record the AMI ID for your specific AWS Region. As part of the installation process, you must update the install-config.yaml file with this value before deploying the cluster.

Sample install-config.yaml file with AWS Marketplace compute nodes

apiVersion: v1
baseDomain: example.com
compute:
- hyperthreading: Enabled
  name: worker
  platform:
    aws:
      amiID: ami-06c4d345f7c207239 1
      type: m5.4xlarge
  replicas: 3
metadata:
  name: test-cluster
platform:
  aws:
    region: us-east-2 2
sshKey: ssh-ed25519 AAAA...
pullSecret: '{"auths": ...}'
15.3.4. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the architecture from the Product Variant drop-down list.
- Select the appropriate version from the Version drop-down list.
- Click Download Now next to the OpenShift v4.15 Linux Clients entry and save the file.
Unpack the archive:
$ tar xvf <file>
Place the oc binary in a directory that is on your PATH.

To check your PATH, execute the following command:

$ echo $PATH
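For example, one common choice is to move the binary into /usr/local/bin; the destination is illustrative, and any directory on your PATH works:

$ chmod +x oc
$ sudo mv oc /usr/local/bin/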
Verification
After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version from the Version drop-down list.
- Click Download Now next to the OpenShift v4.15 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.

To check your PATH, open the command prompt and execute the following command:

C:\> path
Verification
After you install the OpenShift CLI, it is available using the oc command:

C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version from the Version drop-down list.
- Click Download Now next to the OpenShift v4.15 macOS Clients entry and save the file.
Note: For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry.
- Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.

To check your PATH, open a terminal and execute the following command:

$ echo $PATH
Verification
Verify your installation by using an oc command:

$ oc <command>
15.3.5. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on the host you are using for installation.
Prerequisites
- You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space.
Procedure
- Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider from the Run it yourself section of the page.
- Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer.
Place the downloaded file in the directory where you want to store the installation configuration files.
Important:
- The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster.
- Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz
- Download your installation pull secret from Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
Alternatively, you can retrieve the installation program from the Red Hat Customer Portal, where you can specify a version of the installation program to download. However, you must have an active subscription to access this page.
15.3.6. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys
list for the core
user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core
. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather
command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging is required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
- 1
- Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note: If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the
~/.ssh/id_ed25519.pub
public key:$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

Note: On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

If the ssh-agent process is not already running for your local user, start it as a background task:
Example output
Agent pid 31874
NoteIf your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the
ssh-agent
:$ ssh-add <path>/<file_name> 1
- 1
- Specify the path and file name for your SSH private key, such as
~/.ssh/id_ed25519
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
15.4. Preparing for the installation
Before you extend nodes to Wavelength Zones, you must prepare certain resources for the cluster installation environment.
15.4.1. Minimum resource requirements for cluster installation
Each cluster machine must meet the following minimum requirements:
Machine | Operating System | vCPU [1] | Virtual RAM | Storage | Input/Output Per Second (IOPS)[2] |
---|---|---|---|---|---|
Bootstrap | RHCOS | 4 | 16 GB | 100 GB | 300 |
Control plane | RHCOS | 4 | 16 GB | 100 GB | 300 |
Compute | RHCOS, RHEL 8.6 and later [3] | 2 | 8 GB | 100 GB | 300 |
- One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core × cores) × sockets = vCPUs.
- OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.
- As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later.
As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires:
- x86-64 architecture requires x86-64-v2 ISA
- ARM64 architecture requires ARMv8.0-A ISA
- IBM Power architecture requires Power 9 ISA
- s390x architecture requires z14 ISA
For more information, see RHEL Architectures.
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform.
15.4.2. Tested instance types for AWS
The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform for use with AWS Wavelength Zones.
Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in the section named "Minimum resource requirements for cluster installation".
Example 15.1. Machine types based on 64-bit x86 architecture for AWS Wavelength Zones
- r5.*
- t3.*
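Instance offerings differ between Wavelength Zones, so you can check which of these instance types a specific zone provides before you choose one. The following command is an optional check; replace the placeholders with your zone and Region:

$ aws ec2 describe-instance-type-offerings \
    --location-type availability-zone \
    --filters Name=location,Values=<wavelength_zone_name> \
    --region <region_name>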
Additional resources
- See AWS Wavelength features in the AWS documentation.
15.4.3. Creating the installation configuration file
Generate and customize the installation configuration file that the installation program needs to deploy your cluster.
Prerequisites
- You obtained the OpenShift Container Platform installation program and the pull secret for your cluster.
- You checked that you are deploying your cluster to an AWS Region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to an AWS Region that requires a custom AMI, such as an AWS GovCloud Region, you must create the install-config.yaml file manually.
Procedure
Create the install-config.yaml file.

Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir <installation_directory> 1
- 1
- For
<installation_directory>
, specify the directory name to store the files that the installation program creates.
Important: Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

- Select aws as the platform to target.
If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.
Note: The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file.

- Select the AWS Region to deploy the cluster to.
- Select the base domain for the Route 53 service that you configured for your cluster.
- Enter a descriptive name for your cluster.
- Paste the pull secret from Red Hat OpenShift Cluster Manager.
Optional: Back up the install-config.yaml file.

Important: The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
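For example, the following optional command creates a copy; the backup file name is arbitrary:

$ cp install-config.yaml install-config.yaml.backup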
15.4.4. Examples of installation configuration files with edge compute pools
The following examples show install-config.yaml
files that contain an edge machine pool configuration.
Configuration that uses an edge pool with a custom instance type
apiVersion: v1
baseDomain: devcluster.openshift.com
metadata:
  name: ipi-edgezone
compute:
- name: edge
  platform:
    aws:
      type: r5.2xlarge
platform:
  aws:
    region: us-west-2
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
Instance types differ between locations. To verify availability in the Wavelength Zones in which the cluster runs, see the AWS documentation.
Configuration that uses an edge pool with custom security groups
apiVersion: v1
baseDomain: devcluster.openshift.com
metadata:
name: ipi-edgezone
compute:
- name: edge
platform:
aws:
additionalSecurityGroupIDs:
- sg-1 1
- sg-2
platform:
aws:
region: us-west-2
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
- 1
- Specify the name of the security group as it is displayed on the Amazon EC2 console. Ensure that you include the sg prefix.
15.5. Cluster installation options for an AWS Wavelength Zones environment
Choose one of the following installation options to install an OpenShift Container Platform cluster on AWS with edge compute nodes defined in Wavelength Zones:
- Fully automated option: Installing a cluster to quickly extend compute nodes to edge compute pools, where the installation program automatically creates infrastructure resources for the OpenShift Container Platform cluster.
- Existing VPC option: Installing a cluster on AWS into an existing VPC, where you supply Wavelength Zones subnets to the install-config.yaml file.
Next steps
Choose one of the following options to install an OpenShift Container Platform cluster in an AWS Wavelength Zones environment:
15.6. Install a cluster quickly in AWS Wavelength Zones
For OpenShift Container Platform 4.15, you can quickly install a cluster on Amazon Web Services (AWS) to extend compute nodes to Wavelength Zones locations. By using this installation route, the installation program automatically creates network resources and Wavelength Zones subnets for each zone that you defined in your configuration file. To customize the installation, you must modify parameters in the install-config.yaml
file before you deploy the cluster.
15.6.1. Modifying an installation configuration file to use AWS Wavelength Zones
Modify an install-config.yaml file to include AWS Wavelength Zones.
Prerequisites
- You have configured an AWS account.
- You added your AWS keys and AWS Region to your local AWS profile by running aws configure.
- You are familiar with the configuration limitations that apply when you set the installation program to automatically create subnets for your OpenShift Container Platform cluster.
- You opted in to the Wavelength Zones group for each zone.
- You created an install-config.yaml file by using the procedure "Creating the installation configuration file".
Procedure
Modify the install-config.yaml file by specifying Wavelength Zones names in the platform.aws.zones property of the edge compute pool.

# ...
platform:
  aws:
    region: <region_name> 1
compute:
- name: edge
  platform:
    aws:
      zones: 2
      - <wavelength_zone_name>
#...
Example of a configuration to install a cluster in the us-west-2 AWS Region that extends edge nodes to Wavelength Zones in Los Angeles and Las Vegas locations

apiVersion: v1
baseDomain: example.com
metadata:
  name: cluster-name
platform:
  aws:
    region: us-west-2
compute:
- name: edge
  platform:
    aws:
      zones:
      - us-west-2-wl1-lax-wlz-1
      - us-west-2-wl1-las-wlz-1
pullSecret: '{"auths": ...}'
sshKey: 'ssh-ed25519 AAAA...'
#...
- Deploy your cluster.
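A minimal sketch of the deploy step, assuming the modified install-config.yaml file is stored in <installation_directory>; the full procedure is described in "Deploying the cluster" later in this chapter:

$ ./openshift-install create cluster --dir <installation_directory> --log-level=info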
Additional resources
Next steps
15.7. Installing a cluster in an existing VPC that has Wavelength Zone subnets
You can install a cluster into an existing Amazon Virtual Private Cloud (VPC) on Amazon Web Services (AWS). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, modify parameters in the install-config.yaml
file before you install the cluster.
Installing a cluster on AWS into an existing VPC requires extending compute nodes to the edge of the Cloud Infrastructure by using AWS Wavelength Zones.
You can use a provided CloudFormation template to create network resources. Additionally, you can modify a template to customize your infrastructure or use the information that they contain to create AWS resources according to your company’s policies.
The steps for performing an installer-provisioned infrastructure installation are provided for example purposes only. Installing a cluster in an existing VPC requires that you have knowledge of the cloud provider and the installation process of OpenShift Container Platform. You can use a CloudFormation template to assist you with completing these steps or to help model your own cluster installation. Instead of using the CloudFormation template to create resources, you can decide to use other methods for generating these resources.
15.7.1. Creating a VPC in AWS
You can create a Virtual Private Cloud (VPC), and subnets for all Wavelength Zones locations, in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to extend compute nodes to edge locations. You can further customize your VPC to meet your requirements, including a VPN and route tables. You can also add new Wavelength Zones subnets not included at initial deployment.
You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC.
If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.
Prerequisites
- You configured an AWS account.
-
You added your AWS keys and AWS Region to your local AWS profile by running
aws configure
. - You opted in to the AWS Wavelength Zones on your AWS account.
Procedure
Create a JSON file that contains the parameter values that the CloudFormation template requires:
[ { "ParameterKey": "VpcCidr", 1 "ParameterValue": "10.0.0.0/16" 2 }, { "ParameterKey": "AvailabilityZoneCount", 3 "ParameterValue": "3" 4 }, { "ParameterKey": "SubnetBits", 5 "ParameterValue": "12" 6 } ]
- Go to the section of the documentation named "CloudFormation template for the VPC", and then copy the syntax from the provided template. Save the copied template syntax as a YAML file on your local system. This template describes the VPC that your cluster requires.
Launch the CloudFormation template to create a stack of AWS resources that represent the VPC by running the following command:
Important: You must enter the command on a single line.

$ aws cloudformation create-stack --stack-name <name> \ 1
    --template-body file://<template>.yaml \ 2
    --parameters file://<parameters>.json 3
- 1
- <name> is the name for the CloudFormation stack, such as cluster-vpc. You need the name of this stack if you remove the cluster.
- 2
- <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
- 3
- <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
Example output
arn:aws:cloudformation:us-east-1:123456789012:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f
Confirm that the template components exist by running the following command:
$ aws cloudformation describe-stacks --stack-name <name>
After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster.

VpcId
The ID of your VPC.
PublicSubnetIds
The IDs of the new public subnets.
PrivateSubnetIds
The IDs of the new private subnets.
PublicRouteTableId
The ID of the new public route table.
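If you plan to pass these values to later commands as environment variables, you can extract them from the stack outputs. The following is an optional sketch that assumes the stack name you used in the previous step; the variable name is illustrative:

$ export VPC_ID=$(aws cloudformation describe-stacks --stack-name <name> \
    --query "Stacks[0].Outputs[?OutputKey=='VpcId'].OutputValue" --output text)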
15.7.2. CloudFormation template for the VPC
You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster.
Example 15.2. CloudFormation template for the VPC
AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$ ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)" MinValue: 1 MaxValue: 3 Default: 1 Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: "Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Network Configuration" Parameters: - VpcCidr - SubnetBits - Label: default: "Availability Zones" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: "Availability Zone Count" VpcCidr: default: "VPC CIDR" SubnetBits: default: "Bits Per Subnet" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: "AWS::EC2::VPC" Properties: EnableDnsSupport: "true" EnableDnsHostnames: "true" CidrBlock: !Ref VpcCidr PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" InternetGateway: Type: "AWS::EC2::InternetGateway" GatewayToInternet: Type: "AWS::EC2::VPCGatewayAttachment" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PublicRoute: Type: "AWS::EC2::Route" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Properties: 
AllocationId: "Fn::GetAtt": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: "AWS::EC2::EIP" Properties: Domain: vpc Route: Type: "AWS::EC2::Route" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable2: Type: "AWS::EC2::RouteTable" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: DoAz2 Properties: AllocationId: "Fn::GetAtt": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: "AWS::EC2::EIP" Condition: DoAz2 Properties: Domain: vpc Route2: Type: "AWS::EC2::Route" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable3: Type: "AWS::EC2::RouteTable" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: DoAz3 Properties: AllocationId: "Fn::GetAtt": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: "AWS::EC2::EIP" Condition: DoAz3 Properties: Domain: vpc Route3: Type: "AWS::EC2::Route" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ ",", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ ",", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableIds: Description: Private Route table IDs Value: !Join [ ",", [ !Join ["=", [ !Select [0, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable ]], !If [DoAz2, !Join ["=", [!Select [1, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable2]], !Ref "AWS::NoValue" ], !If [DoAz3, !Join ["=", [!Select [2, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable3]], !Ref "AWS::NoValue" ] ] ]
15.7.3. Creating a VPC carrier gateway
To use public subnets in your OpenShift Container Platform cluster that runs on Wavelength Zones, you must create the carrier gateway and associate the carrier gateway to the VPC. Subnets are useful for deploying load balancers or edge compute nodes.
To create edge nodes or internet-facing load balancers in Wavelength Zones locations for your OpenShift Container Platform cluster, you must create the following required network components:
- A carrier gateway that associates to the existing VPC.
- A carrier route table that lists route entries.
- A subnet that associates to the carrier route table.
Carrier gateways are available only for VPCs that contain subnets in a Wavelength Zone.
The following list explains the functions of a carrier gateway in the context of an AWS Wavelength Zones location:
- Provides connectivity between your Wavelength Zone and the carrier network, which includes any available devices from the carrier network.
- Performs Network Address Translation (NAT) functions, such as translating public IP addresses that are stored in a network border group from Wavelength Zones to carrier IP addresses. These translation functions apply to inbound and outbound traffic.
- Authorizes inbound traffic from a carrier network that is located in a specific location.
- Authorizes outbound traffic to a carrier network and the internet.
No inbound connection configuration exists from the internet to a Wavelength Zone through the carrier gateway.
You can use the provided CloudFormation template to create a stack of the following AWS resources:
- One carrier gateway that associates to the VPC ID in the template.
- One public route table for the Wavelength Zone, named <ClusterName>-public-carrier.
- Default IPv4 route entry in the new route table that targets the carrier gateway.
- VPC gateway endpoint for an AWS Simple Storage Service (S3).
If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.
Prerequisites
- You configured an AWS account.
- You added your AWS keys and region to your local AWS profile by running aws configure.
Procedure
- Go to the next section of the documentation named "CloudFormation template for the VPC Carrier Gateway", and then copy the syntax from the provided template. Save the copied template syntax as a YAML file on your local system. This template describes the carrier gateway resources that your cluster requires.
Run the following command to deploy the CloudFormation template, which creates a stack of AWS resources that represent the carrier gateway:
$ aws cloudformation create-stack --stack-name <stack_name> \ 1
    --region ${CLUSTER_REGION} \
    --template-body file://<template>.yaml \ 2
    --parameters \
        ParameterKey=VpcId,ParameterValue="${VpcId}" \ 3
        ParameterKey=ClusterName,ParameterValue="${ClusterName}" 4
- 1
- <stack_name> is the name for the CloudFormation stack, such as clusterName-vpc-carrier-gw. You need the name of this stack if you remove the cluster.
- 2
- <template> is the relative path and the name of the CloudFormation template YAML file that you saved.
- 3
- <VpcId> is the VPC ID extracted from the CloudFormation stack output created in the section named "Creating a VPC in AWS".
- 4
- <ClusterName> is a custom value that prefixes resources that the CloudFormation stack creates. You can use the same name that is defined in the metadata.name section of the install-config.yaml configuration file.
Example output
arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-2fd3-11eb-820e-12a48460849f
Verification
Confirm that the CloudFormation template components exist by running the following command:
$ aws cloudformation describe-stacks --stack-name <stack_name>
After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameter. Ensure that you provide the parameter value to the other CloudFormation templates that you run to create your cluster.

PublicRouteTableId
The ID of the Route Table in the Carrier infrastructure.
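If you want to keep this value in a shell variable for the subnet stack in the later sections, where it is referenced as ${ROUTE_TABLE_PUB}, one optional way to capture it is:

$ export ROUTE_TABLE_PUB=$(aws cloudformation describe-stacks --stack-name <stack_name> \
    --query "Stacks[0].Outputs[?OutputKey=='PublicRouteTableId'].OutputValue" --output text)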
Additional resources
- See Amazon S3 in the AWS documentation.
15.7.4. CloudFormation template for the VPC Carrier Gateway
You can use the following CloudFormation template to deploy the Carrier Gateway on AWS Wavelength infrastructure.
Example 15.3. CloudFormation template for VPC Carrier Gateway
AWSTemplateFormatVersion: 2010-09-09 Description: Template for Creating Wavelength Zone Gateway (Carrier Gateway). Parameters: VpcId: Description: VPC ID to associate the Carrier Gateway. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\b|(?:[0-9]{1,3}\.){3}[0-9]{1,3})$ ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster Name or Prefix name to prepend the tag Name for each subnet. Type: String AllowedPattern: ".+" ConstraintDescription: ClusterName parameter must be specified. Resources: CarrierGateway: Type: "AWS::EC2::CarrierGateway" Properties: VpcId: !Ref VpcId Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, "cagw"]] PublicRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VpcId Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, "public-carrier"]] PublicRoute: Type: "AWS::EC2::Route" DependsOn: CarrierGateway Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 CarrierGatewayId: !Ref CarrierGateway S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VpcId Outputs: PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable
15.7.5. Creating subnets in Wavelength Zones
Before you configure a machine set for edge compute nodes in your OpenShift Container Platform cluster, you must create the subnets in Wavelength Zones. Complete the following procedure for each Wavelength Zone that you want to deploy compute nodes to.
You can use the provided CloudFormation template and create a CloudFormation stack. You can then use this stack to custom provision a subnet.
If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.
Prerequisites
- You configured an AWS account.
- You added your AWS keys and region to your local AWS profile by running aws configure.
- You opted in to the Wavelength Zones group.
Procedure
- Go to the section of the documentation named "CloudFormation template for the VPC subnet", and copy the syntax from the template. Save the copied template syntax as a YAML file on your local system. This template describes the subnets that your cluster requires.
Run the following command to deploy the CloudFormation template, which creates a stack of AWS resources that represent the subnets:
$ aws cloudformation create-stack --stack-name <stack_name> \ 1
    --region ${CLUSTER_REGION} \
    --template-body file://<template>.yaml \ 2
    --parameters \
        ParameterKey=VpcId,ParameterValue="${VPC_ID}" \ 3
        ParameterKey=ClusterName,ParameterValue="${CLUSTER_NAME}" \ 4
        ParameterKey=ZoneName,ParameterValue="${ZONE_NAME}" \ 5
        ParameterKey=PublicRouteTableId,ParameterValue="${ROUTE_TABLE_PUB}" \ 6
        ParameterKey=PublicSubnetCidr,ParameterValue="${SUBNET_CIDR_PUB}" \ 7
        ParameterKey=PrivateRouteTableId,ParameterValue="${ROUTE_TABLE_PVT}" \ 8
        ParameterKey=PrivateSubnetCidr,ParameterValue="${SUBNET_CIDR_PVT}" 9
- 1
- <stack_name> is the name for the CloudFormation stack, such as cluster-wl-<wavelength_zone_shortname>. You need the name of this stack if you remove the cluster.
- 2
- <template> is the relative path and the name of the CloudFormation template YAML file that you saved.
- 3
- ${VPC_ID} is the VPC ID, which is the value VpcId in the output of the CloudFormation template for the VPC.
- 4
- ${CLUSTER_NAME} is the value of ClusterName to be used as a prefix of the new AWS resource names.
- 5
- ${ZONE_NAME} is the value of the Wavelength Zones name to create the subnets in.
- 6
- ${ROUTE_TABLE_PUB} is the PublicRouteTableId extracted from the output of the VPC's carrier gateway CloudFormation stack.
- 7
- ${SUBNET_CIDR_PUB} is a valid CIDR block that is used to create the public subnet. This block must be part of the VPC CIDR block VpcCidr.
- 8
- ${ROUTE_TABLE_PVT} is the PrivateRouteTableId extracted from the output of the VPC's CloudFormation stack.
- 9
- ${SUBNET_CIDR_PVT} is a valid CIDR block that is used to create the private subnet. This block must be part of the VPC CIDR block VpcCidr.
Example output
arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-820e-11eb-2fd3-12a48460849f
Verification
Confirm that the template components exist by running the following command:
$ aws cloudformation describe-stacks --stack-name <stack_name>
After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. Ensure that you provide these parameter values to the other CloudFormation templates that you run to create your cluster.

PublicSubnetId
The ID of the public subnet created by the CloudFormation stack.
PrivateSubnetId
The ID of the private subnet created by the CloudFormation stack.
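If you prefer to keep these IDs in shell variables for the installation configuration step, one optional way to capture them is shown below; the variable names are illustrative:

$ export SUBNET_ID_PUB=$(aws cloudformation describe-stacks --stack-name <stack_name> \
    --query "Stacks[0].Outputs[?OutputKey=='PublicSubnetId'].OutputValue" --output text)
$ export SUBNET_ID_PVT=$(aws cloudformation describe-stacks --stack-name <stack_name> \
    --query "Stacks[0].Outputs[?OutputKey=='PrivateSubnetId'].OutputValue" --output text)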
15.7.6. CloudFormation template for the VPC subnet
You can use the following CloudFormation template to deploy the private and public subnets in a zone on Wavelength Zones infrastructure.
Example 15.4. CloudFormation template for VPC subnets
AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice Subnets (Public and Private) Parameters: VpcId: Description: VPC ID that comprises all the target subnets. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\b|(?:[0-9]{1,3}\.){3}[0-9]{1,3})$ ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster name or prefix name to prepend the Name tag for each subnet. Type: String AllowedPattern: ".+" ConstraintDescription: ClusterName parameter must be specified. ZoneName: Description: Zone Name to create the subnets, such as us-west-2-lax-1a. Type: String AllowedPattern: ".+" ConstraintDescription: ZoneName parameter must be specified. PublicRouteTableId: Description: Public Route Table ID to associate the public subnet. Type: String AllowedPattern: ".+" ConstraintDescription: PublicRouteTableId parameter must be specified. PublicSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$ ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for public subnet. Type: String PrivateRouteTableId: Description: Private Route Table ID to associate the private subnet. Type: String AllowedPattern: ".+" ConstraintDescription: PrivateRouteTableId parameter must be specified. PrivateSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$ ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for private subnet. Type: String Resources: PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PublicSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, "public", !Ref ZoneName]] PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTableId PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PrivateSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, "private", !Ref ZoneName]] PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTableId Outputs: PublicSubnetId: Description: Subnet ID of the public subnets. Value: !Join ["", [!Ref PublicSubnet]] PrivateSubnetId: Description: Subnet ID of the private subnets. Value: !Join ["", [!Ref PrivateSubnet]]
15.7.7. Modifying an installation configuration file to use AWS Wavelength Zones subnets
Modify your install-config.yaml file to include Wavelength Zones subnets.
Prerequisites
- You created subnets by using the procedure "Creating subnets in Wavelength Zones".
- You created an install-config.yaml file by using the procedure "Creating the installation configuration file".
Procedure
Modify the install-config.yaml configuration file by specifying Wavelength Zones subnets in the platform.aws.subnets parameter.

Example installation configuration file with Wavelength Zones subnets

# ...
platform:
  aws:
    region: us-west-2
    subnets: 1
    - publicSubnetId-1
    - publicSubnetId-2
    - publicSubnetId-3
    - privateSubnetId-1
    - privateSubnetId-2
    - privateSubnetId-3
    - publicOrPrivateSubnetID-Wavelength-1
# ...
- 1
- List of subnet IDs created in the zones: Availability and Wavelength Zones.
Additional resources
- For more information about viewing the CloudFormation stacks that you created, see AWS CloudFormation console.
- For more information about AWS profile and credential configuration, see Configuration and credential file settings in the AWS documentation.
Next steps
15.8. Optional: Assign public IP addresses to edge compute nodes
If your workload requires deploying the edge compute nodes in public subnets on Wavelength Zones infrastructure, you can configure the machine set manifests when installing a cluster.
AWS Wavelength Zones infrastructure accesses the network traffic in a specified zone, so applications can take advantage of lower latency when serving end users that are closer to that zone.
The default setting that deploys compute nodes in private subnets might not meet your needs, so consider creating edge compute nodes in public subnets when you want to apply more customization to your infrastructure.

By default, OpenShift Container Platform deploys the compute nodes in private subnets. For best performance, consider placing compute nodes in subnets that have public IP addresses attached.

You must create additional security groups, but ensure that you open the group rules to the internet only when you really need to.
Procedure
Change to the directory that contains the installation program and generate the manifest files. Ensure that the installation manifests get created at the openshift and manifests directory level.

$ ./openshift-install create manifests --dir <installation_directory>
Edit the machine set manifest that the installation program generates for the Wavelength Zones, so that the manifest gets deployed in public subnets. Specify
true
for thespec.template.spec.providerSpec.value.publicIP
parameter.Example machine set manifest configuration for installing a cluster quickly in Wavelength Zones
spec:
  template:
    spec:
      providerSpec:
        value:
          publicIp: true
          subnet:
            filters:
              - name: tag:Name
                values:
                  - ${INFRA_ID}-public-${ZONE_NAME}
Example machine set manifest configuration for installing a cluster in an existing VPC that has Wavelength Zones subnets
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <infrastructure_id>-edge-<zone>
  namespace: openshift-machine-api
spec:
  template:
    spec:
      providerSpec:
        value:
          publicIp: true
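If you are unsure which files to edit, the compute machine set manifests are written to the openshift subdirectory of the installation directory. The 99_openshift-cluster-api_worker-machineset-* naming shown below is the usual installer convention, so verify it against your own directory:

$ ls <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml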
15.9. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
- You have configured an account with the cloud platform that hosts your cluster.
- You have the OpenShift Container Platform installation program and the pull secret for your cluster.
- You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ 1
    --log-level=info 2
Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.

Note: The elevated permissions provided by the AdministratorAccess policy are required only during installation.
Verification
When the cluster deployment completes successfully:
- The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user.
- Credential information also outputs to <installation_directory>/.openshift_install.log.
Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s
-
The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending
node-bootstrapper
certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. - It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
15.10. Verifying the status of the deployed cluster
Verify that your OpenShift Container Platform cluster deployed successfully on AWS Wavelength Zones.
15.10.1. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the oc CLI.
Procedure
Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
- 1
- For
<installation_directory>
, specify the path to the directory that you stored the installation files in.
Verify that you can run oc commands successfully by using the exported configuration:

$ oc whoami
Example output
system:admin
15.10.2. Logging in to the cluster by using the web console
The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.
Prerequisites
- You have access to the installation host.
- You completed a cluster installation and all cluster Operators are available.
Procedure
Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

$ cat <installation_directory>/auth/kubeadmin-password
Note: Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

List the OpenShift Container Platform web console route:
$ oc get routes -n openshift-console | grep 'console-openshift'
Note: Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

Example output
console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None
- Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.
Additional resources
- For more information about accessing and understanding the OpenShift Container Platform web console, see Accessing the web console.
15.10.3. Verifying nodes that were created with edge compute pool
After you install a cluster that uses AWS Wavelength Zones infrastructure, check the status of the machine that was created by the machine set manifests created during installation.
To check the machine sets created from the subnet you added to the install-config.yaml file, run the following command:

$ oc get machineset -n openshift-machine-api
Example output
NAME                                         DESIRED   CURRENT   READY   AVAILABLE   AGE
cluster-7xw5g-edge-us-east-1-wl1-nyc-wlz-1   1         1         1       1           3h4m
cluster-7xw5g-worker-us-east-1a              1         1         1       1           3h4m
cluster-7xw5g-worker-us-east-1b              1         1         1       1           3h4m
cluster-7xw5g-worker-us-east-1c              1         1         1       1           3h4m
To check the machines that were created from the machine sets, run the following command:
$ oc get machines -n openshift-machine-api
Example output
NAME                                               PHASE     TYPE          REGION      ZONE                      AGE
cluster-7xw5g-edge-us-east-1-wl1-nyc-wlz-1-wbclh   Running   c5d.2xlarge   us-east-1   us-east-1-wl1-nyc-wlz-1   3h
cluster-7xw5g-master-0                             Running   m6i.xlarge    us-east-1   us-east-1a                3h4m
cluster-7xw5g-master-1                             Running   m6i.xlarge    us-east-1   us-east-1b                3h4m
cluster-7xw5g-master-2                             Running   m6i.xlarge    us-east-1   us-east-1c                3h4m
cluster-7xw5g-worker-us-east-1a-rtp45              Running   m6i.xlarge    us-east-1   us-east-1a                3h
cluster-7xw5g-worker-us-east-1b-glm7c              Running   m6i.xlarge    us-east-1   us-east-1b                3h
cluster-7xw5g-worker-us-east-1c-qfvz4              Running   m6i.xlarge    us-east-1   us-east-1c                3h
To check nodes with edge roles, run the following command:
$ oc get nodes -l node-role.kubernetes.io/edge
Example output
NAME                           STATUS   ROLES         AGE    VERSION
ip-10-0-207-188.ec2.internal   Ready    edge,worker   172m   v1.25.2+d2e245f
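Optionally, you can also confirm that the edge nodes carry the NoSchedule taint described in "About edge compute pools":

$ oc describe node -l node-role.kubernetes.io/edge | grep Taints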
15.11. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
Additional resources
- For more information about the Telemetry service, see About remote health monitoring.
Next steps
- Validating an installation.
- If necessary, you can opt out of remote health.