Chapter 4. Configuring a Red Hat High Availability cluster on AWS
To create a cluster where RHEL nodes automatically redistribute their workloads if a node failure occurs, use the Red Hat High Availability Add-On. Such high availability (HA) clusters can also be hosted on public cloud platforms, including AWS. Creating RHEL HA clusters on AWS is similar to creating HA clusters in non-cloud environments.
To configure a Red Hat HA cluster on Amazon Web Services (AWS) using EC2 instances as cluster nodes, see the following sections. Note that you have several options for obtaining the Red Hat Enterprise Linux (RHEL) images you use for your cluster. For information on image options for AWS, see Red Hat Enterprise Linux Image Options on AWS. Before you begin, ensure that you have completed the following prerequisites:
- Sign up for a Red Hat Customer Portal account.
- Sign up for AWS and set up your AWS resources. See Setting Up with Amazon EC2 for more information.
4.1. The benefits of using high-availability clusters on public cloud platforms
A high-availability (HA) cluster is a set of computers (called nodes) that are linked together to run a specific workload. The purpose of HA clusters is to provide redundancy in case of a hardware or software failure. If a node in the HA cluster fails, the Pacemaker cluster resource manager distributes the workload to other nodes and no noticeable downtime occurs in the services that are running on the cluster.
You can also run HA clusters on public cloud platforms. In this case, you would use virtual machine (VM) instances in the cloud as the individual cluster nodes. Using HA clusters on a public cloud platform has the following benefits:
- Improved availability: In case of a VM failure, the workload is quickly redistributed to other nodes, so running services are not disrupted.
- Scalability: Additional nodes can be started when demand is high and stopped when demand is low.
- Cost-effectiveness: With the pay-as-you-go pricing, you pay only for nodes that are running.
- Simplified management: Some public cloud platforms offer management interfaces to make configuring HA clusters easier.
To enable HA on your Red Hat Enterprise Linux (RHEL) systems, Red Hat offers a High Availability Add-On. The High Availability Add-On provides all necessary components for creating HA clusters on RHEL systems. The components include high availability service management and cluster administration tools.
4.2. Creating the AWS Access Key and AWS Secret Access Key
You need to create an AWS Access Key and AWS Secret Access Key before you install the AWS CLI. The fencing and resource agent APIs use the AWS Access Key and Secret Access Key to connect to each node in the cluster.
Prerequisites
- Your IAM user account must have Programmatic access. See Setting up the AWS Environment for more information.
Procedure
- Launch the AWS Console.
- Click on your AWS Account ID to display the drop-down menu and select My Security Credentials.
- Click Users.
- Select the user and open the Summary screen.
- Click the Security credentials tab.
- Click Create access key.
- Download the .csv file (or save both keys). You need to enter these keys when creating the fencing device.
4.3. Installing the AWS CLI
Many of the procedures required to manage HA clusters in AWS include using the AWS CLI.
Prerequisites
- You have created an AWS Access Key ID and an AWS Secret Access Key, and have access to them. For instructions and details, see Quickly Configuring the AWS CLI.
Procedure
Install the AWS command line tools by using the yum command:

# yum install awscli

Use the aws --version command to verify that you installed the AWS CLI:

$ aws --version
aws-cli/1.19.77 Python/3.6.15 Linux/5.14.16-201.fc34.x86_64 botocore/1.20.77

Configure the AWS command line client according to your AWS access details:

$ aws configure
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]:
Default output format [None]:
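If you prefer to script the setup, the interactive aws configure prompts can be bypassed by writing the AWS CLI's standard configuration files directly. The following sketch assumes the documented ~/.aws/credentials and ~/.aws/config INI layout; the key and region values are placeholders, not real credentials.

```shell
# Sketch: configure the AWS CLI non-interactively by writing its
# standard INI files. The key values below are placeholders only.
AWS_DIR="${HOME:-.}/.aws"
mkdir -p "${AWS_DIR}"

cat > "${AWS_DIR}/credentials" <<'EOF'
[default]
aws_access_key_id = AKIAEXAMPLEKEY
aws_secret_access_key = EXAMPLESECRETKEY
EOF

cat > "${AWS_DIR}/config" <<'EOF'
[default]
region = us-east-1
output = text
EOF

# Keep the secret key private: owner read/write only.
chmod 600 "${AWS_DIR}/credentials"
```

Because the fencing and resource agents read the same credentials, keeping them in one place avoids re-entering the keys on every node.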
4.4. Creating an HA EC2 instance
Complete the following steps to create the instances that you use as your HA cluster nodes. Note that you have a number of options for obtaining the RHEL images you use for your cluster. See Red Hat Enterprise Linux Image options on AWS for information about image options for AWS.
You can create and upload a custom image that you use for your cluster nodes, or you can use a Gold Image or an on-demand image.
Prerequisites
- You have set up an AWS environment. For more information, see Setting Up with Amazon EC2.
Procedure
- From the AWS EC2 Dashboard, select Images and then AMIs.
- Right-click on your image and select Launch.
Choose an Instance Type that meets or exceeds the requirements of your workload. Depending on your HA application, each instance might need higher capacity.
See Amazon EC2 Instance Types for information about instance types.
Click Next: Configure Instance Details.
Enter the Number of instances you want to create for the cluster. This example procedure uses three cluster nodes.
Note: Do not launch into an Auto Scaling Group.
- For Network, select the VPC you created when you set up the AWS environment. Select the subnet for the instance, or create a new subnet.
Select Enable for Auto-assign Public IP.
Note: These are the minimum selections you need to make for Configure Instance Details. Depending on your specific HA application, you might need to make additional selections.
- Click Next: Add Storage and verify that the default storage is sufficient. You do not need to modify these settings unless your HA application requires other storage options.
Click Next: Add Tags.
Note: Tags can help you manage your AWS resources. See Tagging Your Amazon EC2 Resources for information about tagging.
- Click Next: Configure Security Group. Select the existing security group you created in Setting up the AWS environment.
- Click Review and Launch and verify your selections.
- Click Launch. You are prompted to select an existing key pair or create a new key pair. Select the key pair you created when Setting up the AWS environment.
- Click Launch Instances.
Click View Instances. You can name the instances.
Note: Alternatively, you can launch instances by using the AWS CLI. See Launching, Listing, and Terminating Amazon EC2 Instances in the Amazon documentation for more information.
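As a rough sketch of the CLI alternative mentioned in the note above, the console choices in this procedure map onto a single aws ec2 run-instances invocation. All resource IDs below (AMI, security group, subnet, key pair) are hypothetical placeholders; the command is assembled and printed for review rather than executed.

```shell
# Sketch only: an `aws ec2 run-instances` call roughly equivalent to the
# console steps above. Every ID here is a hypothetical placeholder.
AMI_ID="ami-0abcdef1234567890"        # your RHEL HA image
INSTANCE_TYPE="m5.xlarge"             # size to your workload
KEY_NAME="cluster-admin"              # key pair from your AWS setup
SG_ID="sg-0123456789abcdef0"          # existing security group
SUBNET_ID="subnet-0123456789abcdef0"  # subnet in your cluster VPC

launch_cmd="aws ec2 run-instances \
  --image-id ${AMI_ID} \
  --instance-type ${INSTANCE_TYPE} \
  --count 3 \
  --key-name ${KEY_NAME} \
  --security-group-ids ${SG_ID} \
  --subnet-id ${SUBNET_ID} \
  --associate-public-ip-address"

# Print the command for review instead of executing it here.
echo "${launch_cmd}"
```

The --count 3 option mirrors the three-node example in this procedure, and --associate-public-ip-address mirrors the Auto-assign Public IP selection.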
4.5. Configuring the private key
Complete the following configuration tasks before you can use the private SSH key file (.pem) in an SSH session.
Procedure
- Move the key file from the Downloads directory to your home directory or to your ~/.ssh directory.
- Change the permissions of the key file so that only the root user can read it:

# chmod 400 KeyName.pem
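To confirm the permission change took effect, you can check the file mode after running chmod. This sketch uses a placeholder key file name:

```shell
# Demonstrate the chmod 400 step on a placeholder key file.
KEY_FILE="KeyName.pem"     # placeholder; use your downloaded key file
touch "${KEY_FILE}"

chmod 400 "${KEY_FILE}"    # owner read-only

# ssh refuses private keys that are readable by group or others,
# so the reported octal mode must be 400.
stat -c '%a' "${KEY_FILE}"
```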
4.6. Connecting to an EC2 instance
You can connect to an EC2 instance by using the AWS Console. Complete the following steps for each node.
Procedure
- Launch the AWS Console and select the EC2 instance.
- Click Connect and select A standalone SSH client.
- From your SSH terminal session, connect to the instance by using the AWS example provided in the pop-up window. Add the correct path to your KeyName.pem file if the path is not shown in the example.
4.7. Installing the High Availability packages and agents
On each of the nodes, you need to install the High Availability packages and agents to be able to configure a Red Hat High Availability cluster on AWS.
Procedure
Remove the AWS Red Hat Update Infrastructure (RHUI) client:

$ sudo -i
# yum -y remove rh-amazon-rhui-client*

Register the VM with Red Hat:

# subscription-manager register

Disable all repositories:

# subscription-manager repos --disable=*

Enable the RHEL 8 Server HA repositories:

# subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms

Update the RHEL AWS instance:

# yum update -y

Install the Red Hat High Availability Add-On software packages, along with the AWS fencing agent, from the High Availability channel:

# yum install pcs pacemaker fence-agents-aws

The user hacluster was created during the pcs and pacemaker installation in the previous step. Create a password for hacluster on all cluster nodes. Use the same password for all nodes:

# passwd hacluster

Add the high-availability service to the RHEL firewall if firewalld.service is installed:

# firewall-cmd --permanent --add-service=high-availability
# firewall-cmd --reload

Start the pcsd service and enable it to start on boot:

# systemctl start pcsd.service
# systemctl enable pcsd.service

Edit /etc/hosts and add RHEL host names and internal IP addresses. For more information, see the Red Hat Knowledgebase solution How should the /etc/hosts file be set up on RHEL cluster nodes?.
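The /etc/hosts step above can be scripted. The sketch below appends one entry per node to a scratch copy of the hosts file; the host names and internal IP addresses are hypothetical examples patterned on the instances used elsewhere in this chapter:

```shell
# Sketch: add cluster node entries to an /etc/hosts-style file.
# A scratch copy is used here; on a real node you would edit /etc/hosts.
HOSTS_FILE="hosts.cluster"
cp /etc/hosts "${HOSTS_FILE}" 2>/dev/null || : > "${HOSTS_FILE}"

# Hypothetical internal IPs and short host names for a three-node cluster.
cat >> "${HOSTS_FILE}" <<'EOF'
10.0.0.48 node01
10.0.0.46 node02
10.0.0.58 node03
EOF

cat "${HOSTS_FILE}"
```

Every node needs the same entries so that pcs can resolve each cluster member by name without relying on DNS.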
Verification
Ensure the pcsd service is running:

# systemctl status pcsd.service
4.8. Creating a cluster
Complete the following steps to create the cluster of nodes.
Procedure
On one of the nodes, enter the following command to authenticate the pcs user hacluster. In the command, specify the name of each node in the cluster:

# pcs host auth <hostname1> <hostname2> <hostname3>

Create the cluster:

# pcs cluster setup <cluster_name> <hostname1> <hostname2> <hostname3>
Verification
Enable the cluster:

[root@node01 clouduser]# pcs cluster enable --all
node02: Cluster Enabled
node03: Cluster Enabled
node01: Cluster Enabled

Start the cluster:

[root@node01 clouduser]# pcs cluster start --all
node02: Starting Cluster...
node03: Starting Cluster...
node01: Starting Cluster...
4.9. Configuring fencing
Fencing configuration ensures that a malfunctioning node on your AWS cluster is automatically isolated, which prevents the node from consuming the cluster’s resources or compromising the cluster’s functionality.
To configure fencing on an AWS cluster, you can use multiple methods:
- A standard procedure for default configuration.
- An alternate configuration procedure for more advanced configuration, focused on automation.
Prerequisites
- You must be using the fence_aws fencing agent. To obtain fence_aws, install the fence-agents-aws package on your cluster.
Standard procedure
Enter the following AWS metadata query to get the Instance ID for each node. You need these IDs to configure the fence device. See Instance Metadata and User Data for additional information:

# echo $(curl -s http://169.254.169.254/latest/meta-data/instance-id)

Example:

[root@ip-10-0-0-48 ~]# echo $(curl -s http://169.254.169.254/latest/meta-data/instance-id)
i-07f1ac63af0ec0ac6

Enter the following command to configure the fence device. Use the pcmk_host_map option to map each RHEL host name to its Instance ID. Use the AWS Access Key and AWS Secret Access Key that you previously set up:

# pcs stonith \
    create <name> fence_aws access_key=<access-key> secret_key=<secret-access-key> \
    region=<region> pcmk_host_map="rhel-hostname-1:Instance-ID-1;rhel-hostname-2:Instance-ID-2;rhel-hostname-3:Instance-ID-3" \
    power_timeout=240 pcmk_reboot_timeout=480 pcmk_reboot_retries=4

Example:

[root@ip-10-0-0-48 ~]# pcs stonith \
    create clusterfence fence_aws access_key=AKIAI123456MRMJA secret_key=a75EYIG4RVL3hdsdAslK7koQ8dzaDyn5yoIZ/ \
    region=us-east-1 pcmk_host_map="ip-10-0-0-48:i-07f1ac63af0ec0ac6;ip-10-0-0-46:i-063fc5fe93b4167b2;ip-10-0-0-58:i-08bd39eb03a6fd2c7" \
    power_timeout=240 pcmk_reboot_timeout=480 pcmk_reboot_retries=4

- To ensure immediate and complete fencing, disable ACPI Soft-Off on all cluster nodes. For information about disabling ACPI Soft-Off, see Disabling ACPI for use with integrated fence device.
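The pcmk_host_map value is simply a semicolon-separated list of hostname:instance-id pairs. If you have already collected the instance IDs, you can assemble the string in the shell instead of typing it by hand; the host names and IDs below reuse the example values from this section:

```shell
# Build the pcmk_host_map string from hostname:instance-ID pairs.
# The pairs below are the example values used in this section.
HOST_MAP=""
for PAIR in \
    "ip-10-0-0-48:i-07f1ac63af0ec0ac6" \
    "ip-10-0-0-46:i-063fc5fe93b4167b2" \
    "ip-10-0-0-58:i-08bd39eb03a6fd2c7"; do
    HOST_MAP="${HOST_MAP}${PAIR};"
done
HOST_MAP="${HOST_MAP%;}"   # strip the trailing semicolon

# The resulting string is what pcs stonith expects for pcmk_host_map=...
echo "${HOST_MAP}"
```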
Alternate procedure
Obtain the VPC ID of the cluster:

# aws ec2 describe-vpcs --output text --filters "Name=tag:Name,Values=<clustername>-vpc" --query 'Vpcs[*].VpcId'
vpc-06bc10ac8f6006664

By using the VPC ID of the cluster, obtain the VPC instances:

$ aws ec2 describe-instances --output text --filters "Name=vpc-id,Values=vpc-06bc10ac8f6006664" --query 'Reservations[*].Instances[*].{Name:Tags[?Key==`Name`]|[0].Value,Instance:InstanceId}' | grep "\-node[a-c]"
i-0b02af8927a895137 <clustername>-nodea-vm
i-0cceb4ba8ab743b69 <clustername>-nodeb-vm
i-0502291ab38c762a5 <clustername>-nodec-vm

Use the obtained instance IDs to configure fencing on each node on the cluster. For example, to configure a fencing device on all nodes in a cluster:

[root@nodea ~]# CLUSTER=<clustername> && pcs stonith create fence${CLUSTER} fence_aws access_key=XXXXXXXXXXXXXXXXXXXX pcmk_host_map=$(for NODE \
    in node{a..c}; do ssh ${NODE} "echo -n \${HOSTNAME}:\$(curl -s http://169.254.169.254/latest/meta-data/instance-id)\;"; done) \
    pcmk_reboot_retries=4 pcmk_reboot_timeout=480 power_timeout=240 region=xx-xxxx-x secret_key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

For information about specific parameters for creating fencing devices, see the fence_aws man page or the Configuring and managing high availability clusters guide.

- To ensure immediate and complete fencing, disable ACPI Soft-Off on all cluster nodes. For information about disabling ACPI Soft-Off, see Disabling ACPI for use with integrated fence device.
Verification
Display the configured fencing devices and their parameters on your nodes:

# pcs stonith config

Test the fencing agent for one of the cluster nodes:

# pcs stonith fence <awsnodename>

Note: The command response may take several minutes to display. If you watch the active terminal session for the node being fenced, you see that the terminal connection is immediately terminated after you enter the fence command.

Example:

[root@ip-10-0-0-48 ~]# pcs stonith fence ip-10-0-0-58
Node: ip-10-0-0-58 fenced

Check the status to verify that the node is fenced:

# pcs status

Start the node that was fenced in the previous step:

# pcs cluster start <awshostname>

Check the status to verify the node started:

# pcs status
4.10. Installing the AWS CLI on cluster nodes
Previously, you installed the AWS CLI on your host system. You need to install the AWS CLI on cluster nodes before you configure the network resource agents.
Complete the following procedure on each cluster node.
Prerequisites
- You must have created an AWS Access Key and AWS Secret Access Key. See Creating the AWS Access Key and AWS Secret Access Key for more information.
Procedure
- Install the AWS CLI. For instructions, see Installing the AWS CLI.
Verify that the AWS CLI is configured properly. The instance IDs and instance names should display when you query your instances, for example with the aws ec2 describe-instances command.
4.11. Setting up IP address resources on AWS
Clients use IP addresses to access resources that the cluster manages over the network. To ensure that these clients can still reach the resources after a failover, the cluster must include IP address resources, which use specific network resource agents.
The RHEL HA Add-On provides a set of resource agents, which create IP address resources to manage various types of IP addresses on AWS. To decide which resource agent to configure, consider the type of AWS IP addresses that you want the HA cluster to manage:
- If you need to manage an IP address exposed to the internet, use the awseip network resource.
- If you need to manage a private IP address limited to a single AWS Availability Zone (AZ), use the awsvip and IPaddr2 network resources.
- If you need to manage an IP address that can move across multiple AWS AZs within the same AWS region, use the aws-vpc-move-ip network resource.
If the HA cluster does not manage any IP addresses, the resource agents for managing virtual IP addresses on AWS are not required. If you need further guidance for your specific deployment, consult with your AWS provider.
4.11.1. Creating an IP address resource to manage an IP address exposed to the internet
To ensure that high-availability (HA) clients can access a RHEL 8 node that uses public-facing internet connections, configure an AWS Secondary Elastic IP Address (awseip) resource to use an elastic IP address.
Prerequisites
- You have a previously configured cluster.
- Your cluster nodes must have access to the RHEL HA repositories. For more information, see Installing the High Availability packages and agents.
- You have set up the AWS CLI. For instructions, see Installing the AWS CLI.
Procedure
Install the resource-agents package:

# yum install resource-agents

Using the AWS command-line interface (CLI), create an elastic IP address:

[root@ip-10-0-0-48 ~]# aws ec2 allocate-address --domain vpc --output text
eipalloc-4c4a2c45 vpc 35.169.153.122

Optional: Display the description of awseip. This shows the options and default operations for this agent:

# pcs resource describe awseip

Create the Secondary Elastic IP address resource that uses the allocated IP address that you previously specified using the AWS CLI. In addition, create a resource group that the Secondary Elastic IP address will belong to:

# pcs resource create <resource-id> awseip elastic_ip=<Elastic-IP-Address> allocation_id=<Elastic-IP-Association-ID> --group networking-group

Example:

# pcs resource create elastic awseip elastic_ip=35.169.153.122 allocation_id=eipalloc-4c4a2c45 --group networking-group
Verification
Display the status of the cluster to verify that the required resources are running:

# pcs status

In the pcs status output, verify that the vip and elastic resources have started as part of the networking-group resource group.

Launch an SSH session from your local workstation to the elastic IP address that you previously created:

$ ssh -l <user-name> -i ~/.ssh/<KeyName>.pem <elastic-IP>

Example:

$ ssh -l ec2-user -i ~/.ssh/cluster-admin.pem 35.169.153.122

- Verify that the host to which you connected via SSH is the host associated with the elastic resource created.
4.11.2. Creating an IP address resource to manage a private IP address limited to a single AWS Availability Zone
To ensure that high-availability (HA) clients on AWS can access a RHEL 8 node that uses a private IP address that can move only within a single AWS Availability Zone (AZ), configure an AWS Secondary Private IP Address (awsvip) resource to use a virtual IP address.
You can complete the following procedure on any node in the cluster.
Prerequisites
- You have a previously configured cluster.
- Your cluster nodes have access to the RHEL HA repositories. For more information, see Installing the High Availability packages and agents.
- You have set up the AWS CLI. For instructions, see Installing the AWS CLI.
Procedure
Install the resource-agents package:

# yum install resource-agents

Optional: View the awsvip description. This shows the options and default operations for this agent:

# pcs resource describe awsvip

Create a Secondary Private IP address with an unused private IP address in the VPC CIDR block. In addition, create a resource group that the Secondary Private IP address will belong to:

# pcs resource create <resource-id> awsvip secondary_private_ip=<Unused-IP-Address> --group <group-name>

Example:

[root@ip-10-0-0-48 ~]# pcs resource create privip awsvip secondary_private_ip=10.0.0.68 --group networking-group

Create a virtual IP resource. This is a VPC IP address that can be rapidly remapped from the fenced node to the failover node, masking the failure of the fenced node within the subnet. Ensure that the virtual IP belongs to the same resource group as the Secondary Private IP address you created in the previous step:

# pcs resource create <resource-id> IPaddr2 ip=<secondary-private-IP> --group <group-name>

Example:

[root@ip-10-0-0-48 ~]# pcs resource create vip IPaddr2 ip=10.0.0.68 --group networking-group
Verification
Display the status of the cluster to verify that the required resources are running:

# pcs status

In the pcs status output, verify that the vip and privip resources have started as part of the networking-group resource group.
4.11.3. Creating an IP address resource to manage an IP address that can move across multiple AWS Availability Zones
To ensure that high-availability (HA) clients on AWS can access a RHEL 8 node that can be moved across multiple AWS Availability Zones within the same AWS region, configure an aws-vpc-move-ip resource to use an elastic IP address.
Prerequisites
- You have a previously configured cluster.
- Your cluster nodes have access to the RHEL HA repositories. For more information, see Installing the High Availability packages and agents.
- You have set up the AWS CLI. For instructions, see Installing the AWS CLI.
An Identity and Access Management (IAM) user is configured on your cluster and has the following permissions:
- Modify routing tables
- Create security groups
- Create IAM policies and roles
Procedure
Install the resource-agents package:

# yum install resource-agents

Optional: View the aws-vpc-move-ip description. This shows the options and default operations for this agent:

# pcs resource describe aws-vpc-move-ip

Set up an OverlayIPAgent IAM policy for the IAM user:

- In the AWS console, navigate to Services → IAM → Policies → Create Policy, and create the OverlayIPAgent policy.
- Input the following configuration, and change the <region>, <account-id>, and <ClusterRouteTableID> values to correspond with your cluster.

In the AWS console, disable the Source/Destination Check function on all nodes in the cluster. To do this, right-click each node, then select Networking → Change Source/Destination Checks. In the pop-up message that appears, click Yes, Disable.

Create a route table for the cluster. To do so, use the following command on one node in the cluster:
# aws ec2 create-route --route-table-id <ClusterRouteTableID> --destination-cidr-block <NewCIDRblockIP/NetMask> --instance-id <ClusterNodeID>

In the command, replace values as follows:
- ClusterRouteTableID: The route table ID for the existing cluster VPC route table.
- NewCIDRblockIP/NetMask: A new IP address and netmask outside of the VPC classless inter-domain routing (CIDR) block. For example, if the VPC CIDR block is 172.31.0.0/16, the new IP address/netmask can be 192.168.0.15/32.
- ClusterNodeID: The instance ID for another node in the cluster.
On one of the nodes in the cluster, create an aws-vpc-move-ip resource that uses a free IP address that is accessible to the client. The following example creates a resource named vpcip that uses IP 192.168.0.15:

# pcs resource create vpcip aws-vpc-move-ip ip=192.168.0.15 interface=eth0 routing_table=<ClusterRouteTableID>

On all nodes in the cluster, edit the /etc/hosts file, and add a line with the IP address of the newly created resource. For example:

192.168.0.15 vpcip
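The overlay IP address must lie outside the VPC CIDR block, as noted above. The following sketch is a small POSIX-shell check for whether a candidate address falls inside a given CIDR block, reusing the 172.31.0.0/16 and 192.168.0.15 example values from this procedure:

```shell
# Sketch: check whether an IPv4 address falls inside a CIDR block.
# The overlay IP for aws-vpc-move-ip must be OUTSIDE the VPC CIDR.

ip_to_int() {
  # Convert dotted-quad IPv4 to a 32-bit integer.
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

in_cidr() {
  # Usage: in_cidr IP CIDR ; prints "yes" if IP is inside CIDR, else "no".
  ip=$(ip_to_int "$1")
  net=$(ip_to_int "${2%/*}")
  bits=${2#*/}
  mask=$(( bits == 0 ? 0 : (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  if [ $(( ip & mask )) -eq $(( net & mask )) ]; then echo yes; else echo no; fi
}

in_cidr 172.31.5.9 172.31.0.0/16    # inside the VPC block: not usable
in_cidr 192.168.0.15 172.31.0.0/16  # outside: suitable as the overlay IP
```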
Verification
Test the failover ability of the new aws-vpc-move-ip resource:

# pcs resource move vpcip

If the failover succeeded, remove the automatically created constraint after the move of the vpcip resource:

# pcs resource clear vpcip