Chapter 4. Configuring Red Hat High Availability clusters on AWS
This chapter includes information and procedures for configuring a Red Hat High Availability (HA) cluster on Amazon Web Services (AWS) using EC2 instances as cluster nodes. You have a number of options for obtaining the Red Hat Enterprise Linux (RHEL) images you use for your cluster. For information on image options for AWS, see Red Hat Enterprise Linux Image Options on AWS.
This chapter includes prerequisite procedures for setting up your environment for AWS. Once you have set up your environment, you can create and configure EC2 instances.
This chapter also includes procedures specific to the creation of HA clusters, which transform individual nodes into a cluster of HA nodes on AWS. These include procedures for installing the High Availability packages and agents on each cluster node, configuring fencing, and installing AWS network resource agents.
This chapter refers to the Amazon documentation in a number of places. For many procedures, see the referenced Amazon documentation for more information.
Prerequisites
- You need to install the AWS command line interface (CLI). For more information on installing the AWS CLI, see Installing the AWS CLI. A quick verification sketch follows this list.
- Enable your subscriptions in the Red Hat Cloud Access program. The Red Hat Cloud Access program allows you to move your Red Hat subscriptions from physical or on-premises systems onto AWS with full support from Red Hat.
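After installing the CLI, confirm that it runs and that your credentials are configured. A minimal sketch; the region and output format values shown are illustrative, not requirements:
$ aws --version
$ aws configure
AWS Access Key ID [None]: <access-key-id>
AWS Secret Access Key [None]: <secret-access-key>
Default region name [None]: us-east-1
Default output format [None]: json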
4.1. Creating the AWS Access Key and AWS Secret Access Key
You need to create an AWS Access Key and AWS Secret Access Key before you install the AWS CLI. The fence agent and resource agents use the AWS Access Key and Secret Access Key to connect to the AWS API and manage each node in the cluster.
Complete the following steps to create these keys.
Prerequisites
Your IAM user account must have Programmatic access. See Setting up the AWS Environment for more information.
Procedure
- Launch the AWS Console.
- Click on your AWS Account ID to display the drop-down menu and select My Security Credentials.
- Click Users.
- Select the user to open the Summary screen.
- Click the Security credentials tab.
- Click Create access key.
- Download the .csv file (or save both keys). You need to enter these keys when creating the fencing device.
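Alternatively, if the IAM user's credentials are already configured for CLI use, you can create an access key pair from the command line. A sketch; the user name cluster-admin is a hypothetical placeholder:
# aws iam create-access-key --user-name cluster-admin
The secret access key appears only in the output of this command, so record it immediately.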
4.2. Installing the HA packages and agents
Complete the following steps on all nodes to install the HA packages and agents.
Procedure
Enter the following command to remove the AWS Red Hat Update Infrastructure (RHUI) client. Because you are going to use a Red Hat Cloud Access subscription, you should not use AWS RHUI in addition to your subscription.
$ sudo -i
# yum -y remove rh-amazon-rhui-client*
Register the VM with Red Hat.
# subscription-manager register --auto-attach
Disable all repositories.
# subscription-manager repos --disable=*
Enable the RHEL 7 Server and RHEL 7 Server HA repositories.
# subscription-manager repos --enable=rhel-7-server-rpms
# subscription-manager repos --enable=rhel-ha-for-rhel-7-server-rpms
Update all packages.
# yum update -y
Reboot if the kernel is updated.
# reboot
Install the pcs, pacemaker, fence agent, and resource agent packages.
# yum -y install pcs pacemaker fence-agents-aws resource-agents
The user hacluster was created during the pcs and pacemaker installation in the previous step. Create a password for hacluster on all cluster nodes. Use the same password for all nodes.
# passwd hacluster
Add the high availability service to the RHEL Firewall if firewalld.service is enabled.
# firewall-cmd --permanent --add-service=high-availability
# firewall-cmd --reload
Start the pcsd service and enable it to start on boot.
# systemctl enable pcsd.service --now
Verification step
Ensure the pcsd service is running.
# systemctl is-active pcsd.service
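If the service is running, the command prints active.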
4.3. Creating a cluster
Complete the following steps to create the cluster of nodes.
Procedure
On one of the nodes, enter the following command to authenticate the pcs user hacluster. Specify the name of each node in the cluster.
# pcs cluster auth _hostname1_ _hostname2_ _hostname3_
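Example (a sketch using the node names from the fencing example later in this chapter; substitute your own host names). The command prompts for the hacluster user name and password:
[root@ip-10-0-0-48 ~]# pcs cluster auth ip-10-0-0-48 ip-10-0-0-46 ip-10-0-0-58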
Create the cluster.
# pcs cluster setup --name _clustername_ _hostname1_ _hostname2_ _hostname3_
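Example (a sketch; the cluster name newcluster is a hypothetical placeholder, and the node names match the fencing example later in this chapter):
[root@ip-10-0-0-48 ~]# pcs cluster setup --name newcluster ip-10-0-0-48 ip-10-0-0-46 ip-10-0-0-58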
Verification steps
Enable the cluster.
# pcs cluster enable --all
Start the cluster.
# pcs cluster start --all
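After startup completes, running pcs status on any node should list all cluster nodes as online.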
4.4. Creating a fencing device
Complete the following steps to configure fencing.
Procedure
Enter the following AWS metadata query to get the Instance ID for each node. You need these IDs to configure the fence device. See Instance Metadata and User Data for additional information.
# echo $(curl -s http://169.254.169.254/latest/meta-data/instance-id)
Example:
[root@ip-10-0-0-48 ~]# echo $(curl -s http://169.254.169.254/latest/meta-data/instance-id)
i-07f1ac63af0ec0ac6
Create a fence device. Use the pcmk_host_map option to map the RHEL host name to the Instance ID. Use the AWS Access Key and AWS Secret Access Key you previously set up in Creating the AWS Access Key and AWS Secret Access Key.
# pcs stonith create cluster_fence fence_aws access_key=_access-key_ secret_key=_secret-access-key_ region=_region_ pcmk_host_map="rhel-hostname-1:Instance-ID-1;rhel-hostname-2:Instance-ID-2;rhel-hostname-3:Instance-ID-3"
Example:
[root@ip-10-0-0-48 ~]# pcs stonith create clusterfence fence_aws access_key=AKIAI*******6MRMJA secret_key=a75EYIG4RVL3h*******K7koQ8dzaDyn5yoIZ/ region=us-east-1 pcmk_host_map="ip-10-0-0-48:i-07f1ac63af0ec0ac6;ip-10-0-0-46:i-063fc5fe93b4167b2;ip-10-0-0-58:i-08bd39eb03a6fd2c7" power_timeout=240 pcmk_reboot_timeout=480 pcmk_reboot_retries=4
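You can confirm that the fence device was created by running pcs stonith show, which lists the configured fence devices.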
Verification steps
Test the fencing agent for one of the other nodes.
# pcs stonith fence _awsnodename_
Example:
[root@ip-10-0-0-48 ~]# pcs stonith fence ip-10-0-0-58
Node: ip-10-0-0-58 fenced
Check the status to verify that the node is fenced.
# watch pcs status
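While the fenced node reboots, the status output shows it as OFFLINE. Because the cluster was enabled to start on boot earlier in this chapter, the node rejoins the cluster automatically once the reboot completes.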
4.5. Installing the AWS CLI on cluster nodes
Previously, you installed the AWS CLI on your host system. You now need to install the AWS CLI on cluster nodes before you configure the network resource agents.
Complete the following procedure on each cluster node.
Prerequisites
You must have created an AWS Access Key and AWS Secret Access Key. For more information, see Creating the AWS Access Key and AWS Secret Access Key.
Procedure
- Perform the procedure Installing the AWS CLI.
Verify that the AWS CLI is configured properly by listing your instances; the instance IDs and instance names should display, as in the sketch below.
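A minimal sketch of such a query, using aws ec2 describe-instances with a JMESPath filter to print each instance ID alongside its Name tag (adjust the query and output format to your environment):
# aws ec2 describe-instances --output text --query 'Reservations[*].Instances[*].[InstanceId,Tags[?Key==`Name`].Value]'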
4.6. Installing network resource agents
For HA operations to work, the cluster uses AWS network resource agents to enable failover functionality. If a node does not respond to a heartbeat check within a set time, the node is fenced and operations fail over to another node in the cluster. Network resource agents must be configured for this to work.
Add the two resources to the same group to enforce order and colocation constraints.
Create a secondary private IP resource and virtual IP resource
Complete the following procedure to add a secondary private IP address and create a virtual IP. You can complete this procedure from any node in the cluster.
Procedure
Enter the following command to view the AWS Secondary Private IP Address resource agent (awsvip) description. This shows the options and default operations for this agent.
# pcs resource describe awsvip
Enter the following command to create the Secondary Private IP address using an unused private IP address in the VPC CIDR block.
# pcs resource create privip awsvip secondary_private_ip=_Unused-IP-Address_ --group _group-name_
Example:
[root@ip-10-0-0-48 ~]# pcs resource create privip awsvip secondary_private_ip=10.0.0.68 --group networking-group
Create a virtual IP resource. This is a VPC IP address that can be rapidly remapped from the fenced node to the failover node, masking the failure of the fenced node within the subnet.
# pcs resource create vip IPaddr2 ip=_secondary-private-IP_ --group _group-name_
Example:
[root@ip-10-0-0-48 ~]# pcs resource create vip IPaddr2 ip=10.0.0.68 --group networking-group
Verification step
Enter the pcs status command to verify that the resources are running.
# pcs status
Create an elastic IP address
An elastic IP address is a public IP address that can be rapidly remapped from the fenced node to the failover node, masking the failure of the fenced node.
Note that this is different from the virtual IP resource created earlier. The elastic IP address is used for public-facing Internet connections instead of subnet connections.
- Add the two resources to the same group that was previously created to enforce order and colocation constraints. Enter the following AWS CLI command to create an elastic IP address.
[root@ip-10-0-0-48 ~]# aws ec2 allocate-address --domain vpc --output text
eipalloc-4c4a2c45 vpc 35.169.153.122
Enter the following command to view the AWS Secondary Elastic IP Address resource agent (awseip) description. This shows the options and default operations for this agent.
# pcs resource describe awseip
Create the Secondary Elastic IP address resource using the allocated IP address created in Step 1.
# pcs resource create elastic awseip elastic_ip=_Elastic-IP-Address_ allocation_id=_Elastic-IP-Allocation-ID_ --group networking-group
Example:
# pcs resource create elastic awseip elastic_ip=35.169.153.122 allocation_id=eipalloc-4c4a2c45 --group networking-group
Verification step
Enter the pcs status command to verify that the resource is running.
# pcs status
Test the elastic IP address
Enter the following commands to verify the virtual IP (awsvip) and elastic IP (awseip) resources are working.
Procedure
Launch an SSH session from your local workstation to the elastic IP address previously created.
$ ssh -l ec2-user -i ~/.ssh/<KeyName>.pem elastic-IP
Example:
$ ssh -l ec2-user -i ~/.ssh/cluster-admin.pem 35.169.153.122
- Verify that the host you connected to via SSH is the host associated with the elastic resource created.
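One way to check this, sketched below with the hypothetical host names used throughout this chapter, is to compare the host name of the node you landed on with the node that pcs status reports as running the awseip resource:
[ec2-user@ip-10-0-0-58 ~]$ hostname
[ec2-user@ip-10-0-0-58 ~]$ sudo pcs status | grep elastic
The node listed after Started for the elastic resource should match the hostname output.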