
Chapter 4. Configuring a Red Hat High Availability cluster on AWS


To create a cluster where RHEL nodes automatically redistribute their workloads if a node failure occurs, use the Red Hat High Availability Add-On. Such high availability (HA) clusters can also be hosted on public cloud platforms, including AWS. Creating RHEL HA clusters on AWS is similar to creating HA clusters in non-cloud environments.

To configure a Red Hat HA cluster on Amazon Web Services (AWS) using EC2 instances as cluster nodes, see the following sections. Note that you have a number of options for obtaining the Red Hat Enterprise Linux (RHEL) images you use for your cluster. For information on image options for AWS, see Red Hat Enterprise Linux Image Options on AWS.

Prerequisites

4.1. The benefits of using high-availability clusters on public cloud platforms

A high-availability (HA) cluster is a set of computers (called nodes) that are linked together to run a specific workload. The purpose of HA clusters is to provide redundancy in case of a hardware or software failure. If a node in the HA cluster fails, the Pacemaker cluster resource manager distributes the workload to other nodes and no noticeable downtime occurs in the services that are running on the cluster.

You can also run HA clusters on public cloud platforms. In this case, you would use virtual machine (VM) instances in the cloud as the individual cluster nodes. Using HA clusters on a public cloud platform has the following benefits:

  • Improved availability: In case of a VM failure, the workload is quickly redistributed to other nodes, so running services are not disrupted.
  • Scalability: Additional nodes can be started when demand is high and stopped when demand is low.
  • Cost-effectiveness: With the pay-as-you-go pricing, you pay only for nodes that are running.
  • Simplified management: Some public cloud platforms offer management interfaces to make configuring HA clusters easier.

To enable HA on your Red Hat Enterprise Linux (RHEL) systems, Red Hat offers a High Availability Add-On. The High Availability Add-On provides all necessary components for creating HA clusters on RHEL systems. The components include high availability service management and cluster administration tools.

Additional resources

4.2. Creating the AWS Access Key and AWS Secret Access Key

You need to create an AWS Access Key and AWS Secret Access Key before you install the AWS CLI. The fencing and resource agents use the AWS Access Key and Secret Access Key to connect to the AWS API and manage each node in the cluster.

Prerequisites

Procedure

  1. Launch the AWS Console.
  2. Click on your AWS Account ID to display the drop-down menu and select My Security Credentials.
  3. Click Users.
  4. Select the user and open the Summary screen.
  5. Click the Security credentials tab.
  6. Click Create access key.
  7. Download the .csv file (or save both keys). You need to enter these keys when creating the fencing device.
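
If you prefer the command line and already have the AWS CLI configured, you can instead create an access key for an IAM user with the iam create-access-key command. The following is a minimal sketch; <user-name> is a placeholder for your IAM user name:

    $ aws iam create-access-key --user-name <user-name>

Record both the access key ID and the secret access key from the command output, because the secret access key cannot be retrieved again later.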

4.3. Installing the AWS CLI

Many of the procedures required to manage HA clusters in AWS include using the AWS CLI.

Prerequisites

  • You have created an AWS Access Key ID and an AWS Secret Access Key, and have access to them. For instructions and details, see Quickly Configuring the AWS CLI.

Procedure

  1. Install the AWS command line tools by using the yum command.

    # yum install awscli
  2. Use the aws --version command to verify that you installed the AWS CLI.

    $ aws --version
    aws-cli/1.19.77 Python/3.6.15 Linux/5.14.16-201.fc34.x86_64 botocore/1.20.77
  3. Configure the AWS command line client according to your AWS access details.

    $ aws configure
    AWS Access Key ID [None]:
    AWS Secret Access Key [None]:
    Default region name [None]:
    Default output format [None]:

4.4. Creating an HA EC2 instance

Complete the following steps to create the instances that you use as your HA cluster nodes. Note that you have a number of options for obtaining the RHEL images you use for your cluster. See Red Hat Enterprise Linux Image options on AWS for information about image options for AWS.

You can create and upload a custom image that you use for your cluster nodes, or you can use a Gold Image or an on-demand image.

Prerequisites

Procedure

  1. From the AWS EC2 Dashboard, select Images and then AMIs.
  2. Right-click on your image and select Launch.
  3. Choose an Instance Type that meets or exceeds the requirements of your workload. Depending on your HA application, each instance may need to have higher capacity.

    See Amazon EC2 Instance Types for information about instance types.

  4. Click Next: Configure Instance Details.

    1. Enter the Number of instances you want to create for the cluster. This example procedure uses three cluster nodes.

      Note

      Do not launch into an Auto Scaling Group.

    2. For Network, select the VPC you created in Set up the AWS environment. Select the subnet for the instance, or create a new subnet.
    3. Select Enable for Auto-assign Public IP.

      Note

      These are the minimum configuration options necessary to create a basic instance. Review additional options based on your HA application requirements.

  5. Click Next: Add Storage and verify that the default storage is sufficient. You do not need to modify these settings unless your HA application requires other storage options.
  6. Click Next: Add Tags.

    Note

    Tags can help you manage your AWS resources. See Tagging Your Amazon EC2 Resources for information about tagging.

  7. Click Next: Configure Security Group. Select the existing security group you created in Setting up the AWS environment.
  8. Click Review and Launch and verify your selections.
  9. Click Launch. You are prompted to select an existing key pair or create a new key pair. Select the key pair you created when Setting up the AWS environment.
  10. Click Launch Instances.
  11. Click View Instances. You can name the instance(s).

    Note

    Alternatively, you can launch instances by using the AWS CLI. See Launching, Listing, and Terminating Amazon EC2 Instances in the Amazon documentation for more information.
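
    For example, a minimal run-instances invocation could look like the following sketch. The AMI ID, key pair, security group, and subnet are placeholders for the values from your environment, and m5.large is only an example instance type:

    $ aws ec2 run-instances --image-id <ami_id> --count 3 \
        --instance-type m5.large --key-name <key_pair_name> \
        --security-group-ids <security_group_id> --subnet-id <subnet_id> \
        --associate-public-ip-address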

4.5. Configuring the private key

Complete the following configuration tasks on the private SSH key file (.pem) before you can use it in an SSH session.

Procedure

  1. Move the key file from the Downloads directory to your Home directory or to your ~/.ssh directory.
  2. Change the permissions of the key file so that only the file owner can read it.

    # chmod 400 KeyName.pem

4.6. Connecting to an EC2 instance

You can connect to an EC2 instance by using the AWS Console. Repeat the following procedure for each node that you want to connect to.

Procedure

  1. Launch the AWS Console and select the EC2 instance.
  2. Click Connect and select A standalone SSH client.
  3. From your SSH terminal session, connect to the instance by using the AWS example provided in the pop-up window. Add the correct path to your KeyName.pem file if the path is not shown in the example.
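
    For example, a connection command typically has the following form. ec2-user is the default user name for RHEL AMIs; the key file and public DNS name are placeholders for your own values:

    $ ssh -i ~/.ssh/KeyName.pem ec2-user@<instance_public_dns_name>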

4.7. Installing the High Availability packages and agents

On each of the nodes, you need to install the High Availability packages and agents to be able to configure a Red Hat High Availability cluster on AWS.

Procedure

  1. Remove the AWS Red Hat Update Infrastructure (RHUI) client.

    $ sudo -i
    # yum -y remove rh-amazon-rhui-client*
  2. Register the VM with Red Hat.

    # subscription-manager register --auto-attach
  3. Disable all repositories.

    # subscription-manager repos --disable=*
  4. Enable the RHEL 8 Server HA repositories.

    # subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms
  5. Update the RHEL AWS instance.

    # yum update -y
  6. Install the Red Hat High Availability Add-On software packages, along with the AWS fencing agent from the High Availability channel.

    # yum install pcs pacemaker fence-agents-aws
  7. The user hacluster was created during the pcs and pacemaker installation in the previous step. Create a password for hacluster on all cluster nodes. Use the same password for all nodes.

    # passwd hacluster
  8. Add the high availability service to the RHEL Firewall if firewalld.service is installed.

    # firewall-cmd --permanent --add-service=high-availability
    # firewall-cmd --reload
  9. Start the pcs service and enable it to start on boot.

    # systemctl start pcsd.service
    # systemctl enable pcsd.service
  10. Edit /etc/hosts and add RHEL host names and internal IP addresses. See How should the /etc/hosts file be set up on RHEL cluster nodes? for details.
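
    For example, the entries might look like the following; these host names and private IP addresses are illustrative only:

    10.0.0.46 node01.example.com node01
    10.0.0.48 node02.example.com node02
    10.0.0.58 node03.example.com node03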

Verification

  • Ensure the pcs service is running.

    # systemctl status pcsd.service
    
    pcsd.service - PCS GUI and remote configuration interface
    Loaded: loaded (/usr/lib/systemd/system/pcsd.service; enabled; vendor preset: disabled)
    Active: active (running) since Thu 2018-03-01 14:53:28 UTC; 28min ago
    Docs: man:pcsd(8)
    man:pcs(8)
    Main PID: 5437 (pcsd)
    CGroup: /system.slice/pcsd.service
         └─5437 /usr/bin/ruby /usr/lib/pcsd/pcsd > /dev/null &
    Mar 01 14:53:27 ip-10-0-0-48.ec2.internal systemd[1]: Starting PCS GUI and remote configuration interface…
    Mar 01 14:53:28 ip-10-0-0-48.ec2.internal systemd[1]: Started PCS GUI and remote configuration interface.

4.8. Creating a cluster

Complete the following steps to create the cluster of nodes.

Procedure

  1. On one of the nodes, enter the following command to authenticate the pcs user hacluster. In the command, specify the name of each node in the cluster.

    # pcs host auth <hostname1> <hostname2> <hostname3>

    Example:

    [root@node01 clouduser]# pcs host auth node01 node02 node03
    Username: hacluster
    Password:
    node01: Authorized
    node02: Authorized
    node03: Authorized
  2. Create the cluster.

    # pcs cluster setup <cluster_name> <hostname1> <hostname2> <hostname3>

    Example:

    [root@node01 clouduser]# pcs cluster setup new_cluster node01 node02 node03
    
    [...]
    
    Synchronizing pcsd certificates on nodes node01, node02, node03...
    node02: Success
    node03: Success
    node01: Success
    Restarting pcsd on the nodes in order to reload the certificates...
    node02: Success
    node03: Success
    node01: Success

Verification

  1. Enable the cluster.

    [root@node01 clouduser]# pcs cluster enable --all
    node02: Cluster Enabled
    node03: Cluster Enabled
    node01: Cluster Enabled
  2. Start the cluster.

    [root@node01 clouduser]# pcs cluster start --all
    node02: Starting Cluster...
    node03: Starting Cluster...
    node01: Starting Cluster...
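  3. Optionally, confirm that all nodes joined the cluster and that the cluster services are running. A minimal check:

    # pcs cluster status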

4.9. Configuring fencing

Fencing configuration ensures that a malfunctioning node on your AWS cluster is automatically isolated, which prevents the node from consuming the cluster’s resources or compromising the cluster’s functionality.

To configure fencing on an AWS cluster, you can use multiple methods:

  • A standard procedure for default configuration.
  • An alternate configuration procedure for more advanced configuration, focused on automation.

Prerequisites

  • You must be using the fence_aws fencing agent. To obtain fence_aws, install the fence-agents-aws package on your cluster.

Standard procedure

  1. Enter the following AWS metadata query to get the Instance ID for each node. You need these IDs to configure the fence device. See Instance Metadata and User Data for additional information.

    # echo $(curl -s http://169.254.169.254/latest/meta-data/instance-id)

    Example:

    [root@ip-10-0-0-48 ~]# echo $(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    i-07f1ac63af0ec0ac6
  2. Enter the following command to configure the fence device. Use the pcmk_host_map parameter to map the RHEL host name to the Instance ID. Use the AWS Access Key and AWS Secret Access Key that you previously set up.

    # pcs stonith \
        create <name> fence_aws access_key=<access-key> secret_key=<secret-access-key> \
        region=<region> pcmk_host_map="rhel-hostname-1:Instance-ID-1;rhel-hostname-2:Instance-ID-2;rhel-hostname-3:Instance-ID-3" \
        power_timeout=240 pcmk_reboot_timeout=480 pcmk_reboot_retries=4

    Example:

    [root@ip-10-0-0-48 ~]# pcs stonith \
    create clusterfence fence_aws access_key=AKIAI123456MRMJA secret_key=a75EYIG4RVL3hdsdAslK7koQ8dzaDyn5yoIZ/ \
    region=us-east-1 pcmk_host_map="ip-10-0-0-48:i-07f1ac63af0ec0ac6;ip-10-0-0-46:i-063fc5fe93b4167b2;ip-10-0-0-58:i-08bd39eb03a6fd2c7" \
    power_timeout=240 pcmk_reboot_timeout=480 pcmk_reboot_retries=4

Alternate procedure

  1. Obtain the VPC ID of the cluster.

    # aws ec2 describe-vpcs --output text --filters "Name=tag:Name,Values=<clustername>-vpc" --query 'Vpcs[*].VpcId'
    vpc-06bc10ac8f6006664
  2. By using the VPC ID of the cluster, obtain the VPC instances.

    $ aws ec2 describe-instances --output text --filters "Name=vpc-id,Values=vpc-06bc10ac8f6006664" --query 'Reservations[*].Instances[*].{Name:Tags[?Key==Name]|[0].Value,Instance:InstanceId}' | grep "\-node[a-c]"
    i-0b02af8927a895137     <clustername>-nodea-vm
    i-0cceb4ba8ab743b69     <clustername>-nodeb-vm
    i-0502291ab38c762a5     <clustername>-nodec-vm
  3. Use the obtained instance IDs to configure fencing on each node on the cluster. For example, to configure a fencing device on all nodes in a cluster:

    [root@nodea ~]# CLUSTER=<clustername> && pcs stonith create fence${CLUSTER} fence_aws access_key=XXXXXXXXXXXXXXXXXXXX pcmk_host_map=$(for NODE \
    in node{a..c}; do ssh ${NODE} "echo -n \${HOSTNAME}:\$(curl -s http://169.254.169.254/latest/meta-data/instance-id)\;"; done) \
    pcmk_reboot_retries=4 pcmk_reboot_timeout=480 power_timeout=240 region=xx-xxxx-x secret_key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

    For information about specific parameters for creating fencing devices, see the fence_aws man page or the Configuring and managing high availability clusters guide.

Verification

  1. Display the configured fencing devices and their parameters on your nodes:

    [root@nodea ~]# pcs stonith config fence${CLUSTER}
    
    Resource: <clustername> (class=stonith type=fence_aws)
    Attributes: access_key=XXXXXXXXXXXXXXXXXXXX pcmk_host_map=nodea:i-0b02af8927a895137;nodeb:i-0cceb4ba8ab743b69;nodec:i-0502291ab38c762a5;
    pcmk_reboot_retries=4 pcmk_reboot_timeout=480 power_timeout=240 region=xx-xxxx-x secret_key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    Operations: monitor interval=60s (<clustername>-monitor-interval-60s)
  2. Test the fencing agent for one of the cluster nodes.

    # pcs stonith fence <awsnodename>
    Note

    The command response may take several minutes to display. If you watch the active terminal session for the node being fenced, you see that the terminal connection is immediately terminated after you enter the fence command.

    Example:

    [root@ip-10-0-0-48 ~]# pcs stonith fence ip-10-0-0-58
    
    Node: ip-10-0-0-58 fenced
  3. Check the status to verify that the node is fenced.

    # pcs status

    Example:

    [root@ip-10-0-0-48 ~]# pcs status
    
    Cluster name: newcluster
    Stack: corosync
    Current DC: ip-10-0-0-46 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum
    Last updated: Fri Mar  2 19:55:41 2018
    Last change: Fri Mar  2 19:24:59 2018 by root via cibadmin on ip-10-0-0-46
    
    3 nodes configured
    1 resource configured
    
    Online: [ ip-10-0-0-46 ip-10-0-0-48 ]
    OFFLINE: [ ip-10-0-0-58 ]
    
    Full list of resources:
    clusterfence  (stonith:fence_aws):    Started ip-10-0-0-46
    
    Daemon Status:
    corosync: active/disabled
    pacemaker: active/disabled
    pcsd: active/enabled
  4. Start the node that was fenced in the previous step.

    # pcs cluster start <awshostname>
  5. Check the status to verify the node started.

    # pcs status

    Example:

    [root@ip-10-0-0-48 ~]# pcs status
    
    Cluster name: newcluster
    Stack: corosync
    Current DC: ip-10-0-0-46 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum
    Last updated: Fri Mar  2 20:01:31 2018
    Last change: Fri Mar  2 19:24:59 2018 by root via cibadmin on ip-10-0-0-48
    
    3 nodes configured
    1 resource configured
    
    Online: [ ip-10-0-0-46 ip-10-0-0-48 ip-10-0-0-58 ]
    
    Full list of resources:
    
      clusterfence  (stonith:fence_aws):    Started ip-10-0-0-46
    
    Daemon Status:
      corosync: active/disabled
      pacemaker: active/disabled
      pcsd: active/enabled

4.10. Installing the AWS CLI on cluster nodes

Previously, you installed the AWS CLI on your host system. You need to install the AWS CLI on cluster nodes before you configure the network resource agents.

Complete the following procedure on each cluster node.

Prerequisites

Procedure

  1. Install the AWS CLI. For instructions, see Installing the AWS CLI.
  2. Verify that the AWS CLI is configured properly. The instance IDs and instance names should display.

    Example:

    [root@ip-10-0-0-48 ~]# aws ec2 describe-instances --output text --query 'Reservations[*].Instances[*].[InstanceId,Tags[?Key==Name].Value]'
    i-07f1ac63af0ec0ac6
    ip-10-0-0-48
    i-063fc5fe93b4167b2
    ip-10-0-0-46
    i-08bd39eb03a6fd2c7
    ip-10-0-0-58

4.11. Setting up IP address resources on AWS

Clients that use IP addresses to access resources managed by the cluster over the network must still be able to reach those resources if a failover occurs. To make this possible, the cluster must include IP address resources, which use specific network resource agents.

The RHEL HA Add-On provides a set of resource agents, which create IP address resources to manage various types of IP addresses on AWS. To decide which resource agent to configure, consider the type of AWS IP addresses that you want the HA cluster to manage:

  • An IP address exposed to the internet: an AWS Secondary Elastic IP Address (awseip) resource.
  • A private IP address limited to a single AWS Availability Zone: an AWS Secondary Private IP Address (awsvip) resource.
  • An IP address that can move across multiple AWS Availability Zones within the same region: an aws-vpc-move-ip resource.

Note

If the HA cluster does not manage any IP addresses, the resource agents for managing virtual IP addresses on AWS are not required. If you need further guidance for your specific deployment, consult with your AWS provider.

4.11.1. Creating an IP address resource to manage an IP address exposed to the internet

To ensure that high-availability (HA) clients can access a RHEL 8 node that uses public-facing internet connections, configure an AWS Secondary Elastic IP Address (awseip) resource to use an elastic IP address.

Prerequisites

Procedure

  1. Install the resource-agents package.

    # yum install resource-agents
  2. Using the AWS command-line interface (CLI), create an elastic IP address.

    [root@ip-10-0-0-48 ~]# aws ec2 allocate-address --domain vpc --output text
    
    eipalloc-4c4a2c45   vpc 35.169.153.122
  3. Optional: Display the description of awseip. This shows the options and default operations for this agent.

    # pcs resource describe awseip
  4. Create the Secondary Elastic IP address resource that uses the elastic IP address that you previously allocated by using the AWS CLI. In addition, create a resource group that the Secondary Elastic IP address will belong to.

    # pcs resource create <resource-id> awseip elastic_ip=<Elastic-IP-Address> allocation_id=<Elastic-IP-Allocation-ID> --group networking-group

    Example:

    # pcs resource create elastic awseip elastic_ip=35.169.153.122 allocation_id=eipalloc-4c4a2c45 --group networking-group

Verification

  1. Display the status of the cluster to verify that the required resources are running.

    # pcs status

    The following output shows an example running cluster where the vip and elastic resources have been started as a part of the networking-group resource group:

    [root@ip-10-0-0-58 ~]# pcs status
    
    Cluster name: newcluster
    Stack: corosync
    Current DC: ip-10-0-0-58 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum
    Last updated: Mon Mar  5 16:27:55 2018
    Last change: Mon Mar  5 15:57:51 2018 by root via cibadmin on ip-10-0-0-46
    
    3 nodes configured
    4 resources configured
    
    Online: [ ip-10-0-0-46 ip-10-0-0-48 ip-10-0-0-58 ]
    
    Full list of resources:
    
     clusterfence   (stonith:fence_aws):    Started ip-10-0-0-46
     Resource Group: networking-group
         vip (ocf::heartbeat:IPaddr2): Started ip-10-0-0-48
         elastic (ocf::heartbeat:awseip): Started ip-10-0-0-48
    
    Daemon Status:
      corosync: active/disabled
      pacemaker: active/disabled
      pcsd: active/enabled
  2. Launch an SSH session from your local workstation to the elastic IP address that you previously created.

    $ ssh -l <user-name> -i ~/.ssh/<KeyName>.pem <elastic-IP>

    Example:

    $ ssh -l ec2-user -i ~/.ssh/cluster-admin.pem 35.169.153.122
  3. Verify that the host to which you connected via SSH is the host associated with the elastic resource created.
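
    For example, you can compare the host name reported in the SSH session with the node that pcs status lists as running the elastic resource:

    $ hostname

    The output should correspond to the node shown as Started for the elastic resource.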

4.11.2. Creating an IP address resource to manage a private IP address limited to a single AWS Availability Zone

To ensure that high-availability (HA) clients on AWS can access a RHEL 8 node that uses a private IP address that can only move within a single AWS Availability Zone (AZ), configure an AWS Secondary Private IP Address (awsvip) resource to use a virtual IP address.

You can complete the following procedure on any node in the cluster.

Prerequisites

Procedure

  1. Install the resource-agents package.

    # yum install resource-agents
  2. Optional: View the awsvip description. This shows the options and default operations for this agent.

    # pcs resource describe awsvip
  3. Create a Secondary Private IP address with an unused private IP address in the VPC CIDR block. In addition, create a resource group that the Secondary Private IP address will belong to.

    # pcs resource create <resource-id> awsvip secondary_private_ip=<Unused-IP-Address> --group <group-name>

    Example:

    [root@ip-10-0-0-48 ~]# pcs resource create privip awsvip secondary_private_ip=10.0.0.68 --group networking-group
  4. Create a virtual IP resource. This is a VPC IP address that can be rapidly remapped from the fenced node to the failover node, masking the failure of the fenced node within the subnet. Ensure that the virtual IP belongs to the same resource group as the Secondary Private IP address you created in the previous step.

    # pcs resource create <resource-id> IPaddr2 ip=<secondary-private-IP> --group <group-name>

    Example:

    [root@ip-10-0-0-48 ~]# pcs resource create vip IPaddr2 ip=10.0.0.68 --group networking-group

Verification

  • Display the status of the cluster to verify that the required resources are running.

    # pcs status

    The following output shows an example running cluster where the vip and privip resources have been started as a part of the networking-group resource group:

    [root@ip-10-0-0-48 ~]# pcs status
    Cluster name: newcluster
    Stack: corosync
    Current DC: ip-10-0-0-46 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum
    Last updated: Fri Mar  2 22:34:24 2018
    Last change: Fri Mar  2 22:14:58 2018 by root via cibadmin on ip-10-0-0-46
    
    3 nodes configured
    3 resources configured
    
    Online: [ ip-10-0-0-46 ip-10-0-0-48 ip-10-0-0-58 ]
    
    Full list of resources:
    
    clusterfence    (stonith:fence_aws):    Started ip-10-0-0-46
     Resource Group: networking-group
         privip (ocf::heartbeat:awsvip): Started ip-10-0-0-48
         vip (ocf::heartbeat:IPaddr2): Started ip-10-0-0-58
    
    Daemon Status:
      corosync: active/disabled
      pacemaker: active/disabled
      pcsd: active/enabled

4.11.3. Creating an IP address resource to manage an IP address that can move across multiple AWS Availability Zones

To ensure that high-availability (HA) clients on AWS can access a RHEL 8 node through an IP address that can move across multiple AWS Availability Zones within the same AWS region, configure an aws-vpc-move-ip resource to use an overlay IP address.

Prerequisites

  • You have a previously configured cluster.
  • Your cluster nodes have access to the RHEL HA repositories. For more information, see Installing the High Availability packages and agents.
  • You have set up the AWS CLI. For instructions, see Installing the AWS CLI.
  • An Identity and Access Management (IAM) user is configured on your cluster and has the following permissions:

    • Modify routing tables
    • Create security groups
    • Create IAM policies and roles

Procedure

  1. Install the resource-agents package.

    # yum install resource-agents
  2. Optional: View the aws-vpc-move-ip description. This shows the options and default operations for this agent.

    # pcs resource describe aws-vpc-move-ip
  3. Set up an OverlayIPAgent IAM policy for the IAM user.

    1. In the AWS console, navigate to Services → IAM → Policies → Create Policy, and create the OverlayIPAgent policy.
    2. Input the following configuration, and change the <region>, <account-id>, and <ClusterRouteTableID> values to correspond with your cluster.

      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Sid": "Stmt1424870324000",
                  "Effect": "Allow",
                  "Action":  "ec2:DescribeRouteTables",
                  "Resource": "*"
              },
              {
                  "Sid": "Stmt1424860166260",
                  "Action": [
                      "ec2:CreateRoute",
                      "ec2:ReplaceRoute"
                  ],
                  "Effect": "Allow",
                  "Resource": "arn:aws:ec2:<region>:<account-id>:route-table/<ClusterRouteTableID>"
              }
          ]
      }
  4. In the AWS console, disable the Source/Destination Check function on all nodes in the cluster. A command-line alternative is sketched after this procedure.

    To do this, right-click each node → Networking → Change Source/Destination Checks. In the pop-up message that appears, click Yes, Disable.

  5. Create a route for the cluster in the existing VPC route table. To do so, use the following command on one node in the cluster:

    # aws ec2 create-route --route-table-id <ClusterRouteTableID> --destination-cidr-block <NewCIDRblockIP/NetMask> --instance-id <ClusterNodeID>

    In the command, replace values as follows:

    • ClusterRouteTableID: The route table ID for the existing cluster VPC route table.
    • NewCIDRblockIP/NetMask: A new IP address and netmask outside of the VPC classless inter-domain routing (CIDR) block. For example, if the VPC CIDR block is 172.31.0.0/16, the new IP address/netmask can be 192.168.0.15/32.
    • ClusterNodeID: The instance ID for another node in the cluster.
  6. On one of the nodes in the cluster, create an aws-vpc-move-ip resource that uses a free IP address that is accessible to the client. The following example creates a resource named vpcip that uses IP 192.168.0.15.

    # pcs resource create vpcip aws-vpc-move-ip ip=192.168.0.15 interface=eth0 routing_table=<ClusterRouteTableID>
  7. On all nodes in the cluster, edit the /etc/hosts file, and add a line with the IP address of the newly created resource. For example:

    192.168.0.15 vpcip
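
As mentioned in step 4, you can alternatively disable the Source/Destination Check from the command line instead of in the console. The following is a minimal sketch; run it once for each cluster node and substitute each node's instance ID:

    $ aws ec2 modify-instance-attribute --instance-id <instance_id> --no-source-dest-check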

Verification

  1. Test the failover ability of the new aws-vpc-move-ip resource:

    # pcs resource move vpcip
  2. If the failover succeeded, remove the automatically created constraint after the move of the vpcip resource:

    # pcs resource clear vpcip

Additional resources

4.11.4. Additional resources

4.12. Configuring shared block storage

To create extra storage resources, you can configure shared block storage for a Red Hat High Availability cluster by using Amazon Elastic Block Storage (EBS) Multi-Attach volumes. Note that this procedure is optional, and the steps below assume three instances (a three-node cluster) with a 1 TB shared disk.

Prerequisites

Procedure

  1. Create a shared block volume by using the AWS command create-volume.

    $ aws ec2 create-volume --availability-zone <availability_zone> --no-encrypted --size 1024 --volume-type io1 --iops 51200 --multi-attach-enabled

    For example, the following command creates a volume in the us-east-1a availability zone.

    $ aws ec2 create-volume --availability-zone us-east-1a --no-encrypted --size 1024 --volume-type io1 --iops 51200 --multi-attach-enabled
    
    {
        "AvailabilityZone": "us-east-1a",
        "CreateTime": "2020-08-27T19:16:42.000Z",
        "Encrypted": false,
        "Size": 1024,
        "SnapshotId": "",
        "State": "creating",
        "VolumeId": "vol-042a5652867304f09",
        "Iops": 51200,
        "Tags": [ ],
        "VolumeType": "io1"
    }
    Note

    You need the VolumeId in the next step.

  2. For each instance in your cluster, attach a shared block volume by using the AWS command attach-volume. Use your <instance_id> and <volume_id>.

    $ aws ec2 attach-volume --device /dev/xvdd --instance-id <instance_id> --volume-id <volume_id>

    For example, the following command attaches a shared block volume vol-042a5652867304f09 to instance i-0eb803361c2c887f2.

    $ aws ec2 attach-volume --device /dev/xvdd --instance-id i-0eb803361c2c887f2 --volume-id vol-042a5652867304f09
    
    {
        "AttachTime": "2020-08-27T19:26:16.086Z",
        "Device": "/dev/xvdd",
        "InstanceId": "i-0eb803361c2c887f2",
        "State": "attaching",
        "VolumeId": "vol-042a5652867304f09"
    }

Verification

  1. For each instance in your cluster, verify that the block device is available by using the ssh command with your instance <ip_address>.

    # ssh <ip_address> "hostname ; lsblk -d | grep ' 1T '"

    For example, the following command lists details including the host name and block device for the instance IP 198.51.100.3.

    # ssh 198.51.100.3 "hostname ; lsblk -d | grep ' 1T '"
    
    nodea
    nvme2n1 259:1    0   1T  0 disk
  2. Use the ssh command to verify that each instance in your cluster uses the same shared disk.

    # ssh <ip_address> "hostname ; lsblk -d | grep ' 1T ' | awk '{print \$1}' | xargs -i udevadm info --query=all --name=/dev/{} | grep '^E: ID_SERIAL='"

    For example, the following command lists details including the host name and shared disk volume ID for the instance IP address 198.51.100.3.

    # ssh 198.51.100.3 "hostname ; lsblk -d | grep ' 1T ' | awk '{print \$1}' | xargs -i udevadm info --query=all --name=/dev/{} | grep '^E: ID_SERIAL='"
    
    nodea
    E: ID_SERIAL=Amazon Elastic Block Store_vol0fa5342e7aedf09f7

4.13. Additional resources
