Chapter 3. Creating a Red Hat OpenShift Service on AWS cluster with Terraform
3.1. Creating a default Red Hat OpenShift Service on AWS cluster with Terraform
Create a Red Hat OpenShift Service on AWS cluster with a Terraform cluster template that is configured with the default cluster options.
The following process for creating a cluster uses a Terraform configuration that prepares a Red Hat OpenShift Service on AWS cluster with these resources:
- An OpenID Connect (OIDC) provider with a managed `oidc-config` configuration
- Prerequisite IAM Operator roles with associated AWS Managed Red Hat OpenShift Service on AWS policies
- IAM account roles with associated AWS Managed Red Hat OpenShift Service on AWS policies
- All other AWS resources required to create a Red Hat OpenShift Service on AWS cluster
3.1.1. Overview of Terraform
Terraform is an infrastructure-as-code tool that provides a way to configure your resources once and replicate those resources as desired. Terraform accomplishes the creation tasks by using declarative language. You declare what you want the final state of the infrastructure resource to be, and Terraform creates these resources to your specifications.
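As a minimal sketch of the declarative model (illustrative only, not part of the cluster configuration), the following block declares the desired end state of a resource; Terraform compares that declaration against its state file and plans whatever create, update, or delete actions are needed. The `random_string` resource comes from the `hashicorp/random` provider, which the cluster template later uses to generate cluster names:

```terraform
# You declare what the resource should look like; Terraform works out how to
# get there. Running `terraform apply` twice in a row makes no changes the
# second time, because the real state already matches the declaration.
resource "random_string" "example" {
  length  = 6
  special = false
  upper   = false
}
```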
3.1.2. Prerequisites
To use the Red Hat Cloud Services provider inside your Terraform configuration, you must meet the following prerequisites:
- You have installed the ROSA CLI tool.
- You have your offline Red Hat OpenShift Cluster Manager token.
- You have installed Terraform version 1.4.6 or newer.
- You have created your AWS account-wide IAM roles. The specific account-wide IAM roles and policies provide the STS permissions required for Red Hat OpenShift Service on AWS support, installation, control plane, and compute functionality. This includes account-wide Operator policies. See the Additional resources for more information on the AWS account roles.
- You have an AWS account and associated credentials that allow you to create resources. The credentials are configured for the AWS provider. See the Authentication and Configuration section in AWS Terraform provider documentation.
- You have, at minimum, the following permissions in the AWS IAM role policy that operates Terraform. Check for these permissions in the AWS console.

  ```json
  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "VisualEditor0",
        "Effect": "Allow",
        "Action": [
          "iam:GetPolicyVersion",
          "iam:DeletePolicyVersion",
          "iam:CreatePolicyVersion",
          "iam:UpdateAssumeRolePolicy",
          "secretsmanager:DescribeSecret",
          "iam:ListRoleTags",
          "secretsmanager:PutSecretValue",
          "secretsmanager:CreateSecret",
          "iam:TagRole",
          "secretsmanager:DeleteSecret",
          "iam:UpdateOpenIDConnectProviderThumbprint",
          "iam:DeletePolicy",
          "iam:CreateRole",
          "iam:AttachRolePolicy",
          "iam:ListInstanceProfilesForRole",
          "secretsmanager:GetSecretValue",
          "iam:DetachRolePolicy",
          "iam:ListAttachedRolePolicies",
          "iam:ListPolicyTags",
          "iam:ListRolePolicies",
          "iam:DeleteOpenIDConnectProvider",
          "iam:DeleteInstanceProfile",
          "iam:GetRole",
          "iam:GetPolicy",
          "iam:ListEntitiesForPolicy",
          "iam:DeleteRole",
          "iam:TagPolicy",
          "iam:CreateOpenIDConnectProvider",
          "iam:CreatePolicy",
          "secretsmanager:GetResourcePolicy",
          "iam:ListPolicyVersions",
          "iam:UpdateRole",
          "iam:GetOpenIDConnectProvider",
          "iam:TagOpenIDConnectProvider",
          "secretsmanager:TagResource",
          "sts:AssumeRoleWithWebIdentity",
          "iam:ListRoles"
        ],
        "Resource": [
          "arn:aws:secretsmanager:*:<ACCOUNT_ID>:secret:*",
          "arn:aws:iam::<ACCOUNT_ID>:instance-profile/*",
          "arn:aws:iam::<ACCOUNT_ID>:role/*",
          "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/*",
          "arn:aws:iam::<ACCOUNT_ID>:policy/*"
        ]
      },
      {
        "Sid": "VisualEditor1",
        "Effect": "Allow",
        "Action": [
          "s3:*"
        ],
        "Resource": "*"
      }
    ]
  }
  ```
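As a quick sanity check (assuming the AWS CLI is installed and configured with the same credentials that Terraform's AWS provider will use), you can confirm which AWS identity is in effect before running any plans:

```shell
# Prints the account ID, user ID, and ARN of the caller. Verify that the ARN
# matches the IAM role or user that carries the policy shown above.
$ aws sts get-caller-identity
```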
3.1.3. Considerations when using Terraform
In general, manage cloud resources created with Terraform only through Terraform. Use caution when using tools outside of Terraform, such as the AWS console or the Red Hat console, to modify cloud resources created by Terraform. Using tools outside of Terraform to manage cloud resources that are already managed by Terraform introduces configuration drift from your declared Terraform configuration.
For example, if you upgrade your Terraform-created cluster by using the Red Hat Hybrid Cloud Console, you need to reconcile your Terraform state before applying any forthcoming configuration changes. For more information, see Manage resources in Terraform state in the HashiCorp Developer documentation.
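One way to reconcile drift (a sketch using standard Terraform CLI options, not a prescribed workflow) is a refresh-only cycle, which updates the state file to match the real infrastructure without changing any resources:

```shell
# Review what Terraform would record about out-of-band changes, then accept
# them into the state file. No infrastructure is modified by either command.
$ terraform plan -refresh-only
$ terraform apply -refresh-only
```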
3.1.4. Overview of the default cluster specifications
You can quickly create a Red Hat OpenShift Service on AWS cluster by using the default installation options.
The following summary describes the default cluster specifications.
| Component | Default specifications |
|---|---|
| Accounts and roles | |
| Cluster settings | |
| Compute node machine pool | |
| Networking configuration | |
| Classless Inter-Domain Routing (CIDR) ranges | |
| Cluster roles and policies | |
| Storage | |
| Cluster update strategy | |
3.1.5. Creating a default Red Hat OpenShift Service on AWS cluster using Terraform
The cluster creation process outlined below shows how to use Terraform to create your account-wide IAM roles and a Red Hat OpenShift Service on AWS cluster with a managed OIDC configuration.
3.1.5.1. Preparing your environment for Terraform
Before you can create your Red Hat OpenShift Service on AWS cluster by using Terraform, you need to export your offline Red Hat OpenShift Cluster Manager token.
Procedure
- Optional: Because the Terraform files are created in your current directory during this procedure, you can create a new directory to store these files and navigate into it by running the following command:

  ```shell
  $ mkdir terraform-cluster && cd terraform-cluster
  ```

- Grant permissions to your account by using an offline Red Hat OpenShift Cluster Manager token. Copy your offline token, and set it as an environment variable by running the following command:

  ```shell
  $ export RHCS_TOKEN=<your_offline_token>
  ```

  Note: This environment variable resets at the end of each session, for example when you restart your machine or close the terminal.

Verification

After you export your token, verify the value by running the following command:

```shell
$ echo $RHCS_TOKEN
```
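The export-and-verify steps above can be sketched as a pre-flight check. This is a hypothetical helper, not part of the documented procedure, and the fallback token value is a placeholder; it prints only the token length so the secret never reaches your terminal or logs:

```shell
# Placeholder default for illustration only; in practice the variable is
# already exported with your real offline token.
RHCS_TOKEN="${RHCS_TOKEN:-example-offline-token}"
if [ -z "$RHCS_TOKEN" ]; then
  # Empty token: Terraform's rhcs provider would fail to authenticate.
  echo "RHCS_TOKEN is not set; export your offline token first" >&2
else
  # Report only the length, never the token itself.
  echo "RHCS_TOKEN is set (${#RHCS_TOKEN} characters)"
fi
```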
3.1.5.2. Creating your Terraform files locally
After you set up your offline Red Hat OpenShift Cluster Manager token, you need to create the Terraform files locally to build your cluster. You can create these files by using the following code templates.
Procedure
- Create the `main.tf` file by running the following command:

  ```shell
  $ cat <<-EOF > main.tf
  #
  # Copyright (c) 2023 Red Hat, Inc.
  #
  # Licensed under the Apache License, Version 2.0 (the "License");
  # you may not use this file except in compliance with the License.
  # You may obtain a copy of the License at
  #
  #   http://www.apache.org/licenses/LICENSE-2.0
  #
  # Unless required by applicable law or agreed to in writing, software
  # distributed under the License is distributed on an "AS IS" BASIS,
  # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  # See the License for the specific language governing permissions and
  # limitations under the License.
  #
  terraform {
    required_providers {
      aws = {
        source  = "hashicorp/aws"
        version = ">= 4.21.0"
      }
      rhcs = {
        version = ">= 1.6.3"
        source  = "terraform-redhat/rhcs"
      }
    }
  }

  # Export token using the RHCS_TOKEN environment variable
  provider "rhcs" {}

  provider "aws" {
    region = var.aws_region
    ignore_tags {
      key_prefixes = ["kubernetes.io/"]
    }
    default_tags {
      tags = var.default_aws_tags
    }
  }

  data "aws_availability_zones" "available" {}

  locals {
    # Extract availability zone names for the specified region, limit it to 3 if multi az or 1 if single
    region_azs = var.multi_az ? slice([for zone in data.aws_availability_zones.available.names : format("%s", zone)], 0, 3) : slice([for zone in data.aws_availability_zones.available.names : format("%s", zone)], 0, 1)
  }

  resource "random_string" "random_name" {
    length  = 6
    special = false
    upper   = false
  }

  locals {
    worker_node_replicas = var.multi_az ? 3 : 2
    # If cluster_name is not null, use that, otherwise generate a random cluster name
    cluster_name = coalesce(var.cluster_name, "rosa-\${random_string.random_name.result}")
  }

  # The network validator requires an additional 60 seconds to validate Terraform clusters.
  resource "time_sleep" "wait_60_seconds" {
    count           = var.create_vpc ? 1 : 0
    depends_on      = [module.vpc]
    create_duration = "60s"
  }

  module "rosa-hcp" {
    source                 = "terraform-redhat/rosa-hcp/rhcs"
    version                = "1.6.3"
    cluster_name           = local.cluster_name
    openshift_version      = var.openshift_version
    account_role_prefix    = local.cluster_name
    operator_role_prefix   = local.cluster_name
    replicas               = local.worker_node_replicas
    aws_availability_zones = local.region_azs
    create_oidc            = true
    private                = var.private_cluster
    aws_subnet_ids         = var.create_vpc ? var.private_cluster ? module.vpc[0].private_subnets : concat(module.vpc[0].public_subnets, module.vpc[0].private_subnets) : var.aws_subnet_ids
    create_account_roles   = true
    create_operator_roles  = true

    # Optional: Configure a cluster administrator user
    #
    # Option 1: Default cluster-admin user
    # Create an administrator user (cluster-admin) and automatically
    # generate a password by uncommenting the following parameter:
    # create_admin_user = true
    # Generated administrator credentials are displayed in terminal output.
    #
    # Option 2: Specify administrator username and password
    # Create an administrator user and define your own password
    # by uncommenting and editing the values of the following parameters:
    # admin_credentials_username = <username>
    # admin_credentials_password = <password>

    depends_on = [time_sleep.wait_60_seconds]
  }
  EOF
  ```

  If you want to create an administrator user during cluster creation, uncomment the appropriate parameters in the `Optional: Configure a cluster administrator user` section and edit their values.

- Create the `variables.tf` file by running the following command:

  Note: Copy and edit this file before running the command to build your cluster.

  ```shell
  $ cat <<-EOF > variables.tf
  #
  # Copyright (c) 2023 Red Hat, Inc.
  #
  # Licensed under the Apache License, Version 2.0 (the "License");
  # you may not use this file except in compliance with the License.
  # You may obtain a copy of the License at
  #
  #   http://www.apache.org/licenses/LICENSE-2.0
  #
  # Unless required by applicable law or agreed to in writing, software
  # distributed under the License is distributed on an "AS IS" BASIS,
  # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  # See the License for the specific language governing permissions and
  # limitations under the License.
  #
  variable "openshift_version" {
    type        = string
    default     = "4.14.20"
    description = "Desired version of OpenShift for the cluster, for example '4.14.20'. If version is greater than the currently running version, an upgrade will be scheduled."
  }

  variable "create_vpc" {
    type        = bool
    description = "If you would like to create a new VPC, set this value to 'true'. If you do not want to create a new VPC, set this value to 'false'."
  }

  # ROSA Cluster info
  variable "cluster_name" {
    default     = null
    type        = string
    description = "The name of the ROSA cluster to create"
  }

  variable "additional_tags" {
    default = {
      Terraform   = "true"
      Environment = "dev"
    }
    description = "Additional AWS resource tags"
    type        = map(string)
  }

  variable "multi_az" {
    type        = bool
    description = "Multi AZ Cluster for High Availability"
    default     = true
  }

  variable "worker_node_replicas" {
    default     = 3
    description = "Number of worker nodes to provision. Single zone clusters need at least 2 nodes, multizone clusters need at least 3 nodes"
    type        = number
  }

  variable "aws_subnet_ids" {
    type        = list(any)
    description = "A list of either the public, or public and private, subnet IDs to use for the cluster"
    default     = ["subnet-01234567890abcdef", "subnet-01234567890abcdef", "subnet-01234567890abcdef"]
  }

  variable "private_cluster" {
    type        = bool
    description = "If you want to create a private cluster, set this value to 'true'. If you want a publicly available cluster, set this value to 'false'."
  }

  # VPC Info
  variable "vpc_name" {
    type        = string
    description = "VPC Name"
    default     = "tf-qs-vpc"
  }

  variable "vpc_cidr_block" {
    type        = string
    description = "value of the CIDR block to use for the VPC"
    default     = "10.0.0.0/16"
  }

  variable "private_subnet_cidrs" {
    type        = list(any)
    description = "The CIDR blocks to use for the private subnets"
    default     = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  }

  variable "public_subnet_cidrs" {
    type        = list(any)
    description = "The CIDR blocks to use for the public subnets"
    default     = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
  }

  variable "single_nat_gateway" {
    type        = bool
    description = "Single NAT or per NAT for subnet"
    default     = false
  }

  # AWS Info
  variable "aws_region" {
    type    = string
    default = "us-east-2"
  }

  variable "default_aws_tags" {
    type        = map(string)
    description = "Default tags for AWS"
    default     = {}
  }
  EOF
  ```

- Create the `vpc.tf` file by running the following command:

  ```shell
  $ cat <<-EOF > vpc.tf
  #
  # Copyright (c) 2023 Red Hat, Inc.
  #
  # Licensed under the Apache License, Version 2.0 (the "License");
  # you may not use this file except in compliance with the License.
  # You may obtain a copy of the License at
  #
  #   http://www.apache.org/licenses/LICENSE-2.0
  #
  # Unless required by applicable law or agreed to in writing, software
  # distributed under the License is distributed on an "AS IS" BASIS,
  # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  # See the License for the specific language governing permissions and
  # limitations under the License.
  #
  module "vpc" {
    source  = "terraform-aws-modules/vpc/aws"
    version = "5.1.2"
    count   = var.create_vpc ? 1 : 0

    name = var.vpc_name
    cidr = var.vpc_cidr_block

    azs             = local.region_azs
    private_subnets = var.multi_az ? var.private_subnet_cidrs : [var.private_subnet_cidrs[0]]
    public_subnets  = var.multi_az ? var.public_subnet_cidrs : [var.public_subnet_cidrs[0]]

    enable_nat_gateway   = true
    single_nat_gateway   = var.single_nat_gateway
    enable_dns_hostnames = true
    enable_dns_support   = true

    tags = var.additional_tags
  }
  EOF
  ```

You are ready to initialize Terraform.
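Optionally, you can preset the input variables that Terraform would otherwise prompt for interactively by adding a `terraform.tfvars` file, which Terraform loads automatically. This is a sketch with hypothetical example values; adjust them to your environment:

```terraform
# terraform.tfvars: presets the variables declared in variables.tf so that
# `terraform plan` and `terraform apply` run without interactive prompts.
create_vpc      = true
private_cluster = false
cluster_name    = "rosa-tf-demo"
multi_az        = true
aws_region      = "us-east-2"
```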
3.1.5.3. Using Terraform to create your Red Hat OpenShift Service on AWS cluster
After you create the Terraform files, you must initialize Terraform to install all of the required dependencies, and then apply the Terraform plan.

Do not modify Terraform state files. For more information, see "Considerations when using Terraform".
Procedure
- To set up Terraform to create your resources based on your Terraform files, run the following command:

  ```shell
  $ terraform init
  ```

- Optional: Verify that the Terraform files you copied are correct by running the following command:

  ```shell
  $ terraform validate
  ```

  Example output

  ```
  Success! The configuration is valid.
  ```

- Create your cluster with Terraform by running the following command:

  ```shell
  $ terraform apply
  ```

  The Terraform interface asks two questions to create your cluster, similar to the following:

  ```
  var.create_vpc
    If you would like to create a new VPC, set this value to 'true'. If you do not want to create a new VPC, set this value to 'false'.

    Enter a value:

  var.private_cluster
    If you want to create a private cluster, set this value to 'true'. If you want a publicly available cluster, set this value to 'false'.

    Enter a value:
  ```

- Enter `yes` to proceed or `no` to cancel when the Terraform interface lists the resources to be created or changed and prompts for confirmation:

  ```
  Plan: 63 to add, 0 to change, 0 to destroy.

  Do you want to perform these actions?
    Terraform will perform the actions described above.
    Only 'yes' will be accepted to approve.
  ```

  If you enter `yes`, your Terraform plan starts, creating your AWS account roles, Operator roles, and your Red Hat OpenShift Service on AWS cluster.
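For automation, the two prompted variables can also be supplied on the command line with Terraform's `-var` option (values shown are examples), which skips the interactive questions:

```shell
# Preset the prompted inputs; Terraform still asks for the final yes/no
# confirmation unless you also pass -auto-approve (use that only in
# automation you trust).
$ terraform apply -var create_vpc=true -var private_cluster=false
```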
Verification
- Verify that your cluster was created by running the following command:

  ```shell
  $ rosa list clusters
  ```

  This example shows a cluster in the `ready` state:

  ```
  ID                                NAME          STATE  TOPOLOGY
  27c3snjsupa9obua74ba8se5kcj11269  rosa-tf-demo  ready  Hosted CP
  ```

- Verify that your account roles were created by running the following command:

  ```shell
  $ rosa list account-roles
  ```

  This example shows the account roles that were created:

  ```
  I: Fetching account roles
  ROLE NAME                 ROLE TYPE  ROLE ARN                                         OPENSHIFT VERSION  AWS Managed
  ROSA-demo-Installer-Role  Installer  arn:aws:iam::<ID>:role/ROSA-demo-Installer-Role  4.14               No
  ROSA-demo-Support-Role    Support    arn:aws:iam::<ID>:role/ROSA-demo-Support-Role    4.14               No
  ROSA-demo-Worker-Role     Worker     arn:aws:iam::<ID>:role/ROSA-demo-Worker-Role     4.14               No
  ```

- Verify that your Operator roles were created by running the following command:

  ```shell
  $ rosa list operator-roles
  ```

  This example shows the Terraform-created Operator roles:

  ```
  I: Fetching operator roles
  ROLE PREFIX  AMOUNT IN BUNDLE
  rosa-demo    8
  ```
3.1.5.4. Configuring an htpasswd identity provider with Terraform
After creating your cluster with Terraform, you can permit users access to your cluster by using an htpasswd identity provider (IDP) with the Terraform tool.
Prerequisites
- You have installed and configured the latest version of the ROSA CLI.
- You have installed and configured the latest version of Terraform.
Procedure
- Create the `htpasswd_idp.tf` file by running one of the following commands:

  Option 1: To create a user with a generated, randomized password, run:

  ```shell
  $ cat <<-EOF > htpasswd_idp.tf
  module "htpasswd_idp" {
    source     = "terraform-redhat/rosa-hcp/rhcs//modules/idp"
    version    = "1.6.2"
    cluster_id = "<cluster_id>"
    name       = "htpasswd-idp-tf-1"
    idp_type   = "htpasswd"
    htpasswd_idp_users = [{
      username = "<user_name>",
      password = random_password.password.result
    }]
  }

  resource "aws_secretsmanager_secret" "idp_password" {
    name        = "idp-password-secret"
    description = "Any description here"
  }

  resource "random_password" "password" {
    length           = 16
    lower            = true
    special          = true
    override_special = "!#$%&*()-_=+[]{}<>:?"
  }

  # If you need to output the password, mark it as sensitive to hide it from CLI logs
  output "password_output" {
    value     = random_password.password.result
    sensitive = true
  }

  # This section sends your credentials to AWS Secrets Manager to enable you to log in to your cluster.
  resource "aws_secretsmanager_secret_version" "idp_password_val" {
    secret_id     = aws_secretsmanager_secret.idp_password.id
    secret_string = random_password.password.result
  }
  EOF
  ```

  You must replace the `<cluster_id>` placeholder with the 32-character ID of your cluster. To find that value, run `rosa list clusters | awk '{print $1}'`. You also must replace the `<user_name>` placeholder with the username you want to create. The randomized password is then stored in AWS Secrets Manager to be used when logging in to the cluster.

  Run the following command to view your password after setting it:

  ```shell
  $ terraform output password_output
  ```

  The CLI returns your generated password in plain text.

  Option 2: To specify your password when creating a user, run:

  ```shell
  $ cat <<-EOF > htpasswd_idp.tf
  module "htpasswd_idp" {
    source     = "terraform-redhat/rosa-hcp/rhcs//modules/idp"
    version    = "1.6.2"
    cluster_id = "<cluster_id>"
    name       = "htpasswd-idp"
    idp_type   = "htpasswd"
    htpasswd_idp_users = [{ username = "<user_name>", password = "<password>" }]
  }
  EOF
  ```

  You must replace the `<cluster_id>` placeholder with the 32-character ID of your cluster. To find that value, run `rosa list clusters | awk '{print $1}'`. You also must replace the `<user_name>` placeholder with the username you want to create, and the `<password>` placeholder with a password.
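Note that the first line of `rosa list clusters` output is a header row, so a plain `awk '{print $1}'` also prints the literal word `ID`. A sketch that extracts only the first cluster's ID by skipping the header (sample output is inlined here for illustration; in practice, pipe the real command output instead):

```shell
# Sample `rosa list clusters` output, stored in a variable for illustration.
sample_output='ID                                NAME          STATE  TOPOLOGY
27c3snjsupa9obua74ba8se5kcj11269  rosa-tf-demo  ready  Hosted CP'

# NR==2 selects the second line (the first data row); $1 is the ID column.
cluster_id=$(printf '%s\n' "$sample_output" | awk 'NR==2 {print $1}')
echo "$cluster_id"
```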
- Run the following command to set up Terraform to create your resources based on your Terraform files:

  ```shell
  $ terraform init
  ```

- Verify that the Terraform files you copied are correct by running the following command:

  ```shell
  $ terraform validate
  ```

  Example output

  ```
  Success! The configuration is valid.
  ```

- Apply your Terraform plan to create the identity provider by running the following command:

  ```shell
  $ terraform apply
  ```

- Enter `yes` to proceed or `no` to cancel when the Terraform interface lists the resources to be created or changed and prompts for confirmation:

  ```
  Do you want to perform these actions?
    Terraform will perform the actions described above.
    Only 'yes' will be accepted to approve.

  Enter a value: yes
  ```

  You see a confirmation that your IDP has been created:

  ```
  Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
  ```

  Note: If you used the randomized password template, the generated password is stored in AWS Secrets Manager.
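To retrieve the stored password later, you can read the secret back with the AWS CLI (assuming the CLI is configured for the same account; the secret name matches the `aws_secretsmanager_secret` resource in the template above):

```shell
# Prints the stored IDP password in plain text, so avoid running this in
# shared terminals or CI logs.
$ aws secretsmanager get-secret-value \
    --secret-id idp-password-secret \
    --query SecretString \
    --output text
```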
3.1.5.5. Deleting your Red Hat OpenShift Service on AWS cluster with Terraform
Use the `terraform destroy` command to remove all resources that you created with the `terraform apply` command.

Do not modify your Terraform `.tf` files before destroying your resources. Terraform matches the variables and resources declared in these files to the resources that it deletes.
Procedure
- In the directory where you ran the `terraform apply` command to create your cluster, run the following command to delete the cluster:

  ```shell
  $ terraform destroy
  ```

  The Terraform interface prompts you for two variables. Your answers should match the values you provided when creating the cluster:

  ```
  var.create_vpc
    If you would like to create a new VPC, set this value to 'true'. If you do not want to create a new VPC, set this value to 'false'.

    Enter a value:

  var.private_cluster
    If you want to create a private cluster, set this value to 'true'. If you want a publicly available cluster, set this value to 'false'.

    Enter a value:
  ```

- Enter `yes` to start the role and cluster deletion:

  Example output

  ```
  Plan: 0 to add, 0 to change, 63 to destroy.

  Do you really want to destroy all resources?
    Terraform will destroy all your managed infrastructure, as shown above.
    There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes
  ```
Verification
- Verify that your cluster was destroyed by running the following command:

  ```shell
  $ rosa list clusters
  ```

  Example output showing no clusters:

  ```
  I: No clusters available
  ```

- Verify that the account roles were destroyed by running the following command:

  ```shell
  $ rosa list account-roles
  ```

  Example output showing no Terraform-created account roles:

  ```
  I: Fetching account roles
  I: No account roles available
  ```

- Verify that the Operator roles were destroyed by running the following command:

  ```shell
  $ rosa list operator-roles
  ```

  Example output showing no Terraform-created Operator roles:

  ```
  I: Fetching operator roles
  I: No operator roles available
  ```