
Chapter 4. Configuring a Red Hat High Availability cluster on AWS


To redistribute workloads automatically if a node fails, you can create and host Red Hat High Availability (HA) clusters on Amazon Web Services (AWS).

Creating RHEL HA clusters on AWS is similar to creating HA clusters in non-cloud environments. For details on image options for AWS, see Red Hat Enterprise Linux Image Options on AWS.

A high-availability (HA) cluster is a set of computers, also known as nodes, linked together to run a specific workload. The purpose of HA clusters is to offer redundancy in case of a hardware or software failure. If a node in the HA cluster fails, the Pacemaker cluster resource manager distributes the workload to other nodes. No noticeable downtime occurs in the services that are running on the cluster.

You can also run HA clusters on public cloud platforms. In this case, you would use virtual machine (VM) instances in the cloud as the individual cluster nodes. Using HA clusters on a public cloud platform has the following benefits:

  • Improved availability: In case of a VM failure, the workload is quickly redistributed to other nodes, so running services are not disrupted.
  • Scalability: You can start additional nodes when demand is high and stop them when demand is low.
  • Cost-effectiveness: With the pay-as-you-go pricing, you pay only for nodes that are running.
  • Simplified management: Some public cloud platforms offer management interfaces to make configuring HA clusters easier.

To enable HA on your Red Hat Enterprise Linux (RHEL) systems, Red Hat offers a High Availability Add-On. The High Availability Add-On provides all necessary components for creating HA clusters on RHEL systems. The components include high availability service management and cluster administration tools.

4.2. Creating the AWS Access Key and AWS Secret Access Key

Before installing the AWS CLI, you must create an AWS Access Key and AWS Secret Access Key. The fencing and resource agent APIs use the AWS Access Key and Secret Access Key to connect to each node in the cluster.

Prerequisites

Procedure

  1. Launch the AWS Console.
  2. Click on your AWS Account ID to display the drop-down menu and select My Security Credentials.
  3. Click Users.
  4. Select the user and open the Summary screen.
  5. Click the Security credentials tab.
  6. Click Create access key.
  7. Download the .csv file (or save both keys). You need to enter these keys when creating the fencing device.
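If you prefer the command line, you can also generate an access key pair with the AWS CLI instead of using the console. The following is a minimal sketch; <iam_user_name> is a placeholder for your own IAM user:

    $ aws iam create-access-key --user-name <iam_user_name>

The output contains the AccessKeyId and SecretAccessKey values. Record them immediately, because the secret access key cannot be retrieved again later.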

4.3. Creating an HA EC2 instance

To ensure High Availability (HA) for your Red Hat Enterprise Linux (RHEL) cluster nodes and applications in Amazon Web Services (AWS), you can create HA EC2 instances configured as cluster nodes.

For details about obtaining RHEL images, see Image options on AWS.

Prerequisites

Procedure

  1. From the AWS EC2 Dashboard, select Images and then AMIs.
  2. Right-click the image you want to use and select Launch.
  3. Choose an Instance Type that meets or exceeds the requirements of your workload. Depending on your HA application, each instance requires different capacity.

    See Amazon EC2 Instance Types for information about instance types.

  4. Click Next: Configure Instance Details.

    1. Enter the Number of instances you want to create for the cluster. This example procedure uses three cluster nodes.

      Note

      Do not launch into an Auto Scaling Group.

    2. For Network, select the virtual private cloud (VPC) that you created in Setting up the AWS environment. Select an existing subnet for the instance or create a new subnet.
    3. Select Enable for Auto-assign Public IP.

      Note

      These are the minimum configuration options necessary to create a basic instance. Review additional options based on your HA application requirements.

  5. Click Next: Add Storage and verify that you have the required storage for your HA application. You do not need to change these settings unless your HA application requires other storage options.
  6. Click Next: Configure Security Group. Select the existing security group you created in Setting up the AWS environment.
  7. Click Review and Launch and verify your selections.
  8. Click Launch. Select an existing key pair or create a new key pair. For selecting a key pair, see Setting up the AWS environment.
  9. Click Launch Instances.
  10. Click View Instances. You can name the instance(s).

    Note

    Also, you can launch instances by using the AWS CLI. See Launching, Listing, and Terminating Amazon EC2 Instances in the Amazon documentation for more information.
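    For example, a minimal run-instances sketch might look like the following. All values shown are placeholders or examples; replace the AMI ID, key pair, security group, and subnet with the IDs from your own environment, and adjust the instance type and count to your workload:

    $ aws ec2 run-instances --image-id <ami_id> --count 3 \
        --instance-type m5.large --key-name <key_pair_name> \
        --security-group-ids <security_group_id> --subnet-id <subnet_id> \
        --associate-public-ip-address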

4.4. Configuring the private key

Before using the private SSH key file (.pem) for SSH communication, you must configure the permissions of the key file.

Prerequisites

Procedure

  1. Move the key file from the Downloads directory to your Home directory or to your ~/.ssh directory.
  2. Change the permissions of the key file so that only its owner can read it:

    # chmod 400 KeyName.pem

4.5. Connecting to an EC2 instance

You can use the AWS Console to connect to an EC2 instance. Repeat this procedure for each node in the cluster.

Prerequisites

Procedure

  1. Launch the AWS Console and select the EC2 instance.
  2. Click Connect and select A standalone SSH client.
  3. From your SSH terminal session, connect to the instance by using the AWS example provided in the pop-up window. Add the correct path to your KeyName.pem file if the path is not shown in the example.
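    For reference, the command that AWS displays typically has the following form. This is only a sketch: ec2-user is the default user for RHEL AMIs, and the public DNS name is a placeholder for the value shown for your instance:

    $ ssh -i ~/.ssh/KeyName.pem ec2-user@<instance_public_dns_name>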

4.6. Installing the High Availability packages and agents

Before configuring a Red Hat High Availability cluster on AWS, you must install the High Availability packages and agents on each of the nodes.

Prerequisites

Procedure

  1. Remove the AWS Red Hat Update Infrastructure (RHUI) client.

    $ sudo -i
    # yum -y remove rh-amazon-rhui-client*
  2. Register the VM with Red Hat.

    # subscription-manager register
  3. Disable all repositories.

    # subscription-manager repos --disable=*
  4. Enable the RHEL 8 Server HA repositories.

    # subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms
  5. Update the RHEL AWS instance.

    # yum update -y
  6. Install the Red Hat High Availability Add-On software packages, along with the AWS fencing agent from the High Availability channel.

    # yum install pcs pacemaker fence-agents-aws
  7. The user hacluster was created during the pcs and pacemaker installation in the previous step. Create a password for hacluster on all cluster nodes. Use the same password for all nodes.

    # passwd hacluster
  8. Add the high availability service to the RHEL Firewall if firewalld.service is installed.

    # firewall-cmd --permanent --add-service=high-availability
    # firewall-cmd --reload
  9. Start the pcs service and enable it to start on boot.

    # systemctl start pcsd.service
    # systemctl enable pcsd.service
  10. Edit /etc/hosts and add RHEL host names and internal IP addresses. For more information, see the Red Hat Knowledgebase solution How should the /etc/hosts file be set up on RHEL cluster nodes?.
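    For example, the /etc/hosts entries might look like the following. The host names and private IP addresses are illustrative; use the values from your own cluster:

    10.0.0.48 node01.example.com node01
    10.0.0.46 node02.example.com node02
    10.0.0.58 node03.example.com node03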

Verification

  • Ensure the pcs service is running.

    # systemctl status pcsd.service
    
    pcsd.service - PCS GUI and remote configuration interface
    Loaded: loaded (/usr/lib/systemd/system/pcsd.service; enabled; vendor preset: disabled)
    Active: active (running) since Thu 2018-03-01 14:53:28 UTC; 28min ago
    Docs: man:pcsd(8)
    man:pcs(8)
    Main PID: 5437 (pcsd)
    CGroup: /system.slice/pcsd.service
         └─5437 /usr/bin/ruby /usr/lib/pcsd/pcsd > /dev/null &
    Mar 01 14:53:27 ip-10-0-0-48.ec2.internal systemd[1]: Starting PCS GUI and remote configuration interface…
    Mar 01 14:53:28 ip-10-0-0-48.ec2.internal systemd[1]: Started PCS GUI and remote configuration interface.

4.7. Creating a cluster

Create a Red Hat High Availability cluster on a public cloud platform by configuring and initializing the cluster nodes.

Procedure

  1. On one of the nodes, enter the following command to authenticate the pcs user hacluster. In the command, specify the name of each node in the cluster.

    # pcs host auth <hostname1> <hostname2> <hostname3>

    Example:

    [root@node01 clouduser]# pcs host auth node01 node02 node03
    Username: hacluster
    Password:
    node01: Authorized
    node02: Authorized
    node03: Authorized
  2. Create the cluster.

    # pcs cluster setup <cluster_name> <hostname1> <hostname2> <hostname3>

    Example:

    [root@node01 clouduser]# pcs cluster setup new_cluster node01 node02 node03
    
    [...]
    
    Synchronizing pcsd certificates on nodes node01, node02, node03...
    node02: Success
    node03: Success
    node01: Success
    Restarting pcsd on the nodes in order to reload the certificates...
    node02: Success
    node03: Success
    node01: Success

Verification

  1. Enable the cluster.

    [root@node01 clouduser]# pcs cluster enable --all
    node02: Cluster Enabled
    node03: Cluster Enabled
    node01: Cluster Enabled
  2. Start the cluster.

    [root@node01 clouduser]# pcs cluster start --all
    node02: Starting Cluster...
    node03: Starting Cluster...
    node01: Starting Cluster...
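  3. Optional: Check the overall state of the cluster after it starts on all nodes. This is only a quick sanity check; fencing and resources are configured in the following sections:

    # pcs status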

4.8. Configuring fencing on a RHEL AWS cluster

Fencing configuration automatically isolates a malfunctioning node on your Red Hat Enterprise Linux (RHEL) Amazon Web Services (AWS) cluster to prevent the node from compromising functionality and consuming the resources of the cluster.

To configure fencing on an AWS cluster, use one of the following methods:

  • A standard procedure for default configuration.
  • An alternate configuration procedure for more advanced configuration, focused on automation.

4.8.1. Configuring fencing with default settings

Fencing isolates malfunctioning or unresponsive nodes to protect data integrity and cluster availability by using Amazon Web Services (AWS) resources and cluster management tools for automated node management. The following is the standard procedure for configuring fencing with default settings in a Red Hat Enterprise Linux (RHEL) high availability cluster on AWS.

Prerequisites

Procedure

  1. Enter the following AWS metadata query to get the Instance ID for each node. You need these IDs to configure the fence device. See Instance Metadata and User Data for additional information.

    # echo $(curl -s http://169.254.169.254/latest/meta-data/instance-id)

    Example:

    [root@ip-10-0-0-48 ~]# echo $(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    i-07f1ac63af0ec0ac6
  2. Enter the following command to configure the fence device. Use the pcmk_host_map parameter to map each RHEL host name to its instance ID. Use the AWS Access Key and AWS Secret Access Key that you set up earlier.

    # pcs stonith \
        create <name> fence_aws access_key=<access-key> secret_key=<secret-access-key> \
        region=<region> pcmk_host_map="rhel-hostname-1:Instance-ID-1;rhel-hostname-2:Instance-ID-2;rhel-hostname-3:Instance-ID-3" \
        power_timeout=240 pcmk_reboot_timeout=480 pcmk_reboot_retries=4

    Example:

    [root@ip-10-0-0-48 ~]# pcs stonith \
    create clusterfence fence_aws access_key=AKIAI123456MRMJA secret_key=a75EYIG4RVL3hdsdAslK7koQ8dzaDyn5yoIZ/ \
    region=us-east-1 pcmk_host_map="ip-10-0-0-48:i-07f1ac63af0ec0ac6;ip-10-0-0-46:i-063fc5fe93b4167b2;ip-10-0-0-58:i-08bd39eb03a6fd2c7" \
    power_timeout=240 pcmk_reboot_timeout=480 pcmk_reboot_retries=4
  3. To ensure immediate and complete fencing, disable ACPI Soft-Off on all cluster nodes. For information about disabling ACPI Soft-Off, see Testing a fence device.
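    One common way to disable ACPI Soft-Off, if you do not disable it in the BIOS, is to make systemd-logind ignore the power key. The following is only a sketch of that approach: set the option in /etc/systemd/logind.conf on each node and restart the service:

    HandlePowerKey=ignore

    # systemctl restart systemd-logind.service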

4.8.2. Configuring fencing for a VPC cluster

The following is an alternate approach for configuring fencing for a virtual private cloud (VPC) cluster in a Red Hat Enterprise Linux (RHEL) high availability cluster on Amazon Web Services (AWS). Fencing isolates malfunctioning or unresponsive nodes to preserve data integrity and cluster availability by using AWS resources and cluster management tools for automated node management.

Prerequisites

Procedure

  1. Obtain the VPC ID of the cluster.

    $ aws ec2 describe-vpcs --output text --filters "Name=tag:Name,Values=<clustername>-vpc" --query 'Vpcs[*].VpcId'
    vpc-06bc10ac8f6006664
  2. By using the VPC ID of the cluster, obtain the VPC instances.

    $ aws ec2 describe-instances --output text --filters "Name=vpc-id,Values=vpc-06bc10ac8f6006664" --query 'Reservations[*].Instances[*].{Name:Tags[?Key==`Name`]|[0].Value,Instance:InstanceId}' | grep "\-node[a-c]"
    
    i-0b02af8927a895137     <clustername>-nodea-vm
    i-0cceb4ba8ab743b69     <clustername>-nodeb-vm
    i-0502291ab38c762a5     <clustername>-nodec-vm
  3. Use the obtained instance IDs to configure fencing on each node in the cluster. For example, to configure a fencing device on all nodes in a cluster:

    [root@nodea ~]# CLUSTER=<clustername> && pcs stonith create fence${CLUSTER} fence_aws access_key=XXXXXXXXXXXXXXXXXXXX pcmk_host_map=$(for NODE \
    in node{a..c}; do ssh ${NODE} "echo -n \${HOSTNAME}:\$(curl -s http://169.254.169.254/latest/meta-data/instance-id)\;"; done) \
    pcmk_reboot_retries=4 pcmk_reboot_timeout=480 power_timeout=240 region=xx-xxxx-x secret_key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

    For information about specific parameters for creating fencing devices, see the fence_aws man page or the Configuring and managing high availability clusters guide.

  4. To ensure immediate and complete fencing, disable ACPI Soft-Off on all cluster nodes. For information about disabling ACPI Soft-Off, see Disabling ACPI for use with integrated fence device.

Verification

  1. Display the configured fencing devices and their parameters on your nodes:

    [root@nodea ~]# pcs stonith config fence${CLUSTER}
    
    Resource: <clustername> (class=stonith type=fence_aws)
    Attributes: access_key=XXXXXXXXXXXXXXXXXXXX pcmk_host_map=nodea:i-0b02af8927a895137;nodeb:i-0cceb4ba8ab743b69;nodec:i-0502291ab38c762a5;
    pcmk_reboot_retries=4 pcmk_reboot_timeout=480 power_timeout=240 region=xx-xxxx-x secret_key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    Operations: monitor interval=60s (<clustername>-monitor-interval-60s)
  2. Test the fencing agent for one of the cluster nodes.

    # pcs stonith fence <awsnodename>
    Note

    The command response might take several minutes to display. If you check the active terminal session for the fencing node, you might see the connection to the terminal drop immediately after you enter the fence command.

    Example:

    [root@ip-10-0-0-48 ~]# pcs stonith fence ip-10-0-0-58
    
    Node: ip-10-0-0-58 fenced
  3. Check the status of the fenced node:

    # pcs status

    Example:

    [root@ip-10-0-0-48 ~]# pcs status
    
    Cluster name: newcluster
    Stack: corosync
    Current DC: ip-10-0-0-46 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum
    Last updated: Fri Mar  2 19:55:41 2018
    Last change: Fri Mar  2 19:24:59 2018 by root via cibadmin on ip-10-0-0-46
    
    3 nodes configured
    1 resource configured
    
    Online: [ ip-10-0-0-46 ip-10-0-0-48 ]
    OFFLINE: [ ip-10-0-0-58 ]
    
    Full list of resources:
    clusterfence  (stonith:fence_aws):    Started ip-10-0-0-46
    
    Daemon Status:
    corosync: active/disabled
    pacemaker: active/disabled
    pcsd: active/enabled
  4. Start the fenced node from the earlier step:

    # pcs cluster start <awshostname>
  5. Check the status to verify the node started.

    # pcs status

    Example:

    [root@ip-10-0-0-48 ~]# pcs status
    
    Cluster name: newcluster
    Stack: corosync
    Current DC: ip-10-0-0-46 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum
    Last updated: Fri Mar  2 20:01:31 2018
    Last change: Fri Mar  2 19:24:59 2018 by root via cibadmin on ip-10-0-0-48
    
    3 nodes configured
    1 resource configured
    
    Online: [ ip-10-0-0-46 ip-10-0-0-48 ip-10-0-0-58 ]
    
    Full list of resources:
    
      clusterfence  (stonith:fence_aws):    Started ip-10-0-0-46
    
    Daemon Status:
      corosync: active/disabled
      pacemaker: active/disabled
      pcsd: active/enabled

4.9. Installing the AWS CLI on cluster nodes

Earlier, you installed the AWS CLI on your host system. You need to install the AWS CLI on cluster nodes to configure the network resource agents. The following steps are applicable to each node in the cluster.

Prerequisites

Procedure

  • Verify that the AWS CLI is configured correctly by listing the instances. The instance IDs and instance names must be displayed:

    Example:

    [root@ip-10-0-0-48 ~]# aws ec2 describe-instances --output text --query 'Reservations[*].Instances[*].[InstanceId,Tags[?Key==`Name`].Value]'
    
    i-07f1ac63af0ec0ac6  ip-10-0-0-48
    i-063fc5fe93b4167b2  ip-10-0-0-46
    i-08bd39eb03a6fd2c7  ip-10-0-0-58
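    If the AWS CLI is not yet configured on a node, you can point it at the access key pair that you created earlier by using the interactive aws configure command. The following is a minimal sketch; the region and output format shown are simply the values used in the examples in this chapter:

    $ aws configure
    AWS Access Key ID [None]: <your_access_key_id>
    AWS Secret Access Key [None]: <your_secret_access_key>
    Default region name [None]: us-east-1
    Default output format [None]: text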

4.10. Setting up IP address resources on AWS

Clients use IP addresses to access the resources that the cluster manages over the network. To ensure that clients can still access these resources after a failover, the cluster must include IP address resources, which use specific network resource agents.

The RHEL HA Add-On provides a set of resource agents, which create IP address resources to manage various types of IP addresses on AWS. To decide which resource agent to configure, consider the type of AWS IP addresses that you want the HA cluster to manage:

  • To manage an IP address exposed to the internet, use the awseip network resource.
  • To manage a private IP address limited to a single AWS Availability Zone (AZ), use the awsvip and IPaddr2 network resources.
  • To manage an IP address that can move across multiple AWS AZs within the same AWS region, use the aws-vpc-move-ip network resource.
Note

If the HA cluster does not manage any IP addresses, the resource agents for managing virtual IP addresses on AWS are not required. If you need further guidance for your specific deployment, consult with your AWS provider.

Configure an Amazon Web Services (AWS) Secondary Elastic IP Address (awseip) resource to use an elastic IP address for public-facing internet connections on Red Hat Enterprise Linux (RHEL) High Availability (HA) cluster nodes.

Prerequisites

Procedure

  1. Install the resource-agents package.

    # yum install resource-agents
  2. Using the AWS command-line interface (CLI), create an elastic IP address.

    [root@ip-10-0-0-48 ~]# aws ec2 allocate-address --domain vpc --output text
    
    eipalloc-4c4a2c45   vpc 35.169.153.122
  3. Optional: Display the description of awseip. This shows the options and default operations for this agent.

    # pcs resource describe awseip
  4. Create the Secondary Elastic IP address resource, which uses the elastic IP address and allocation ID that you created earlier with the AWS CLI, and add it to a resource group:

    # pcs resource create <resource_id> awseip elastic_ip=<elastic_ip_address> allocation_id=<elastic_ip_association_id> --group <resource_group_name>
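    Example (using the address and allocation ID from step 2 and the networking-group resource group shown in the verification output below; the resource name elastic is only illustrative):

    [root@ip-10-0-0-48 ~]# pcs resource create elastic awseip elastic_ip=35.169.153.122 allocation_id=eipalloc-4c4a2c45 --group networking-group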

Verification

  1. Display the status of the cluster to verify that the required resources are running.

    # pcs status

    The following output shows an example running cluster where the vip and elastic resources are part of the networking-group resource group:

    [root@ip-10-0-0-58 ~]# pcs status
    
    Cluster name: newcluster
    Stack: corosync
    Current DC: ip-10-0-0-58 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum
    Last updated: Mon Mar  5 16:27:55 2018
    Last change: Mon Mar  5 15:57:51 2018 by root via cibadmin on ip-10-0-0-46
    
    3 nodes configured
    4 resources configured
    
    Online: [ ip-10-0-0-46 ip-10-0-0-48 ip-10-0-0-58 ]
    
    Full list of resources:
    
     clusterfence   (stonith:fence_aws):    Started ip-10-0-0-46
     Resource Group: networking-group
         vip (ocf::heartbeat:IPaddr2): Started ip-10-0-0-48
         elastic (ocf::heartbeat:awseip): Started ip-10-0-0-48
    
    Daemon Status:
      corosync: active/disabled
      pacemaker: active/disabled
      pcsd: active/enabled
  2. Launch an SSH session from your local workstation to the elastic IP address that you created earlier:

    $ ssh -l <user_name> -i ~/.ssh/<keyname>.pem <elastic_ip_address>

    Example:

    $ ssh -l ec2-user -i ~/.ssh/cluster-admin.pem 35.169.153.122

  3. Verify that the host you connected to over SSH is the same host that is running the elastic resource.
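    As a quick sketch of this check, you can compare the host name reported over the elastic IP address with the node that pcs status reports as running the elastic resource. The user name, key file, and address are the example values used above:

    $ ssh -l ec2-user -i ~/.ssh/cluster-admin.pem 35.169.153.122 hostname
    # pcs status | grep elastic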

Configure an Amazon Web Services (AWS) secondary private IP address (awsvip) resource on a node of a Red Hat High Availability (HA) cluster. Use awsvip to manage a private IP address that is limited to a single availability zone.

HA clients can then connect to the Red Hat Enterprise Linux (RHEL) node that holds the private IP address.

Prerequisites

Procedure

  1. Install the resource-agents package.

    # yum install resource-agents
  2. Optional: View the awsvip description. This shows the options and default operations for this agent.

    # pcs resource describe awsvip
  3. Create a secondary private IP address resource that uses an unused private IP address in the virtual private cloud (VPC) classless inter-domain routing (CIDR) block. In addition, create a resource group for the secondary private IP address:

    # pcs resource create <example_resource_id> awsvip secondary_private_ip=<example_unused_private_IP_address> --group <example_group_name>

    Example:

    [root@ip-10-0-0-48 ~]# pcs resource create privip awsvip secondary_private_ip=10.0.0.68 --group networking-group
  4. Create a virtual IP resource. This is a VPC IP address that can be rapidly remapped from the fenced node to the failover node, masking the failure of the fenced node within the subnet. Ensure that the virtual IP belongs to the same resource group as the Secondary Private IP address you created in the earlier step:

    # pcs resource create <example_resource_id> IPaddr2 ip=<example_secondary_private_IP> --group <example_group_name>

    Example:

    [root@ip-10-0-0-48 ~]# pcs resource create vip IPaddr2 ip=10.0.0.68 --group networking-group

    Verification

    • Display the status of the cluster to verify that the required resources are running.

      # pcs status

      The following output shows an example running cluster where the vip and privip resources are active in the networking-group resource group:

      [root@ip-10-0-0-48 ~]# pcs status
      
      Cluster name: newcluster
      Stack: corosync
      Current DC: ip-10-0-0-46 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum
      Last updated: Fri Mar  2 22:34:24 2018
      Last change: Fri Mar  2 22:14:58 2018 by root via cibadmin on ip-10-0-0-46
      
      3 nodes configured
      3 resources configured
      
      Online: [ ip-10-0-0-46 ip-10-0-0-48 ip-10-0-0-58 ]
      
      Full list of resources:
      
      clusterfence    (stonith:fence_aws):    Started ip-10-0-0-46
       Resource Group: networking-group
           privip (ocf::heartbeat:awsvip): Started ip-10-0-0-48
           vip (ocf::heartbeat:IPaddr2): Started ip-10-0-0-58
      
      Daemon Status:
        corosync: active/disabled
        pacemaker: active/disabled
        pcsd: active/enabled
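    • Optional: On the node that is running the vip resource, confirm that the address is configured on the network interface. This is only a quick sketch that assumes the example address 10.0.0.68 and the default interface name eth0:

      # ip addr show eth0 | grep 10.0.0.68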

Configure an aws-vpc-move-ip resource to manage an IP address that can move across multiple AWS Availability Zones within the same AWS region. You can use this resource to ensure that high-availability (HA) clients on Amazon Web Services (AWS) can access a Red Hat Enterprise Linux (RHEL) node regardless of the Availability Zone in which the node runs.

Prerequisites

Procedure

  1. Install the resource-agents package.

    # yum install resource-agents
  2. Optional: View the aws-vpc-move-ip description. This shows the options and default operations for this agent.

    # pcs resource describe aws-vpc-move-ip
  3. Set up an OverlayIPAgent IAM policy for the IAM user.

    1. In the AWS console, navigate to Services → IAM → Policies → Create Policy to create the OverlayIPAgent policy.
    2. Input the following configuration, and change the <region>, <account-id>, and <ClusterRouteTableID> values to correspond with your cluster.

      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Sid": "Stmt1424870324000",
                  "Effect": "Allow",
                  "Action":  "ec2:DescribeRouteTables",
                  "Resource": "*"
              },
              {
                  "Sid": "Stmt1424860166260",
                  "Action": [
                      "ec2:CreateRoute",
                      "ec2:ReplaceRoute"
                  ],
                  "Effect": "Allow",
                  "Resource": "arn:aws:ec2:<region>:<account-id>:route-table/<ClusterRouteTableID>"
              }
          ]
      }
  4. In the AWS console, disable the Source/Destination Check function on all nodes in the cluster.

    To do this, right-click each node and select Networking → Change Source/Destination Checks. In the pop-up message that appears, click Yes, Disable.

  5. Add a route for the cluster to the existing VPC route table. To do so, use the following command on one node in the cluster:

    # aws ec2 create-route --route-table-id <ClusterRouteTableID> --destination-cidr-block <NewCIDRblockIP/NetMask> --instance-id <ClusterNodeID>

    In the command, replace values as follows:

    • ClusterRouteTableID: The route table ID for the existing cluster VPC route table.
    • NewCIDRblockIP/NetMask: A new IP address and netmask outside of the VPC classless inter-domain routing (CIDR) block. For example, if the VPC CIDR block is 172.31.0.0/16, the new IP address/netmask can be 192.168.0.15/32.
    • ClusterNodeID: The instance ID for another node in the cluster.
  6. On one of the nodes in the cluster, create an aws-vpc-move-ip resource that uses a free IP address that is accessible to the client. The following example creates a resource named vpcip that uses IP 192.168.0.15.

    # pcs resource create vpcip aws-vpc-move-ip ip=192.168.0.15 interface=eth0 routing_table=<ClusterRouteTableID>
  7. On all nodes in the cluster, edit the /etc/hosts file, and add a line with the IP address of the newly created resource. For example:

    192.168.0.15 vpcip

Verification

  1. Test the failover ability of the new aws-vpc-move-ip resource:

    # pcs resource move vpcip
  2. If the failover succeeded, remove the automatically created constraint after the move of the vpcip resource:

    # pcs resource clear vpcip
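  3. Optional: To confirm where the moved IP address currently points, you can inspect the routes in the cluster route table. This is only a sketch; <ClusterRouteTableID> is the same placeholder used in the procedure above:

    # aws ec2 describe-route-tables --route-table-ids <ClusterRouteTableID> --query 'RouteTables[*].Routes'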

4.11. Configuring shared block storage

You can configure shared block storage for a Red Hat High Availability cluster by using Amazon Elastic Block Store (EBS) Multi-Attach volumes to create shared storage resources.

Prerequisites

Procedure

  1. Create a shared block volume by using the AWS command create-volume.

    $ aws ec2 create-volume --availability-zone <availability_zone> --no-encrypted --size 1024 --volume-type io1 --iops 51200 --multi-attach-enabled

    For example, the following command creates a volume in the us-east-1a availability zone.

    $ aws ec2 create-volume --availability-zone us-east-1a --no-encrypted --size 1024 --volume-type io1 --iops 51200 --multi-attach-enabled
    
    {
        "AvailabilityZone": "us-east-1a",
        "CreateTime": "2020-08-27T19:16:42.000Z",
        "Encrypted": false,
        "Size": 1024,
        "SnapshotId": "",
        "State": "creating",
        "VolumeId": "vol-042a5652867304f09",
        "Iops": 51200,
        "Tags": [ ],
        "VolumeType": "io1"
    }
    Note

    You need the VolumeId in the next step.

  2. For each instance in your cluster, attach a shared block volume by using the AWS command attach-volume. Use your <instance_id> and <volume_id>.

    $ aws ec2 attach-volume --device /dev/xvdd --instance-id <instance_id> --volume-id <volume_id>

    For example, the following command attaches a shared block volume vol-042a5652867304f09 to instance i-0eb803361c2c887f2.

    $ aws ec2 attach-volume --device /dev/xvdd --instance-id i-0eb803361c2c887f2 --volume-id vol-042a5652867304f09
    
    {
        "AttachTime": "2020-08-27T19:26:16.086Z",
        "Device": "/dev/xvdd",
        "InstanceId": "i-0eb803361c2c887f2",
        "State": "attaching",
        "VolumeId": "vol-042a5652867304f09"
    }

Verification

  1. For each instance in your cluster, verify that the block device is available by using the ssh command with your instance <ip_address>.

    # ssh <ip_address> "hostname ; lsblk -d | grep ' 1T '"

    For example, the following command lists details including the hostname and block device for the instance IP 198.51.100.3.

    # ssh 198.51.100.3 "hostname ; lsblk -d | grep ' 1T '"
    
    nodea
    nvme2n1 259:1    0   1T  0 disk
  2. Use the ssh command to verify that each instance in your cluster uses the same shared disk.

    # ssh <ip_address> "hostname ; lsblk -d | grep ' 1T ' | awk '{print \$1}' | xargs -i udevadm info --query=all --name=/dev/{} | grep '^E: ID_SERIAL='"

    For example, the following command lists details including the hostname and shared disk volume ID for the instance IP address 198.51.100.3.

    # ssh 198.51.100.3 "hostname ; lsblk -d | grep ' 1T ' | awk '{print \$1}' | xargs -i udevadm info --query=all --name=/dev/{} | grep '^E: ID_SERIAL='"
    
    nodea
    E: ID_SERIAL=Amazon Elastic Block Store_vol0fa5342e7aedf09f7