4.5. Setting up IP address resources on AWS


Clients use IP addresses to reach cluster resources across the network. To keep these addresses available during a failover, include IP address resources in the cluster that use the appropriate network resource agents. The RHEL HA Add-On provides a set of resource agents that create IP address resources to manage different types of IP addresses on AWS. Which resource agent to configure depends on the type of IP address that the HA cluster manages. You can create a cluster resource for managing an IP address that is:

  • Exposed to the internet: Use the awseip network resource.
  • Limited to a single AWS Availability Zone (AZ): Use the awsvip and IPaddr2 network resources.
  • Movable across multiple AWS AZs within the same AWS region: Use the aws-vpc-move-ip network resource.

    Note

    If the HA cluster does not manage any IP addresses, the resource agents for managing virtual IP addresses on AWS are not required. If you need further guidance for your specific deployment, consult with AWS.
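The mapping above can be sketched as a small shell helper. This is purely illustrative; the function name and scope labels are hypothetical and are not part of the RHEL HA Add-On or any AWS tooling:

```shell
# Illustrative helper: map the scope of an IP address to the resource
# agent described above. Function name and labels are hypothetical.
select_agent() {
  case "$1" in
    internet)  echo "awseip" ;;               # exposed to the internet
    single-az) echo "awsvip + IPaddr2" ;;     # limited to a single AZ
    multi-az)  echo "aws-vpc-move-ip" ;;      # movable across AZs in a region
    *) echo "unknown scope: $1" >&2; return 1 ;;
  esac
}

select_agent multi-az   # prints: aws-vpc-move-ip
```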

To ensure that high-availability (HA) clients can access a RHEL node that uses public-facing internet connections, configure an AWS Secondary Elastic IP Address (awseip) resource to use an elastic IP address.

Prerequisites

Procedure

  1. Add the two resources to the resource group that you have already created, to enforce order and colocation constraints.
  2. Install the resource-agents package:

    # dnf install resource-agents
  3. Create an elastic IP address:

    [root@ip-10-0-0-48 ~]# aws ec2 allocate-address --domain vpc --output text
    eipalloc-4c4a2c45   vpc 35.169.153.122
  4. Optional: Display the description of awseip. This shows the options and default operations for this agent.

    # pcs resource describe awseip
  5. Create the Secondary Elastic IP address resource, using the elastic IP address and allocation ID that you obtained in the earlier step:

    # pcs resource create <resource_id> awseip elastic_ip=<elastic_ip_address> allocation_id=<elastic_ip_allocation_id> --group networking-group

    Example:

    # pcs resource create elastic awseip elastic_ip=35.169.153.122 allocation_id=eipalloc-4c4a2c45 --group networking-group
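If you script these steps, the text output of aws ec2 allocate-address from step 3 can be parsed directly so that the values do not need to be retyped. A minimal sketch, using the sample values shown above; the command is echoed here rather than executed:

```shell
# Parse "allocation-id  domain  public-ip" text output into variables.
# The sample line is copied from the allocate-address example above.
alloc_output='eipalloc-4c4a2c45   vpc 35.169.153.122'
read -r alloc_id _domain elastic_ip <<<"$alloc_output"

# Assemble the pcs command from step 5 (echoed, not executed).
cmd="pcs resource create elastic awseip elastic_ip=$elastic_ip allocation_id=$alloc_id --group networking-group"
echo "$cmd"
```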

Verification

  1. Verify the cluster status to ensure resources are available:

    [root@ip-10-0-0-58 ~]# pcs status
    
    Cluster name: newcluster
    Stack: corosync
    Current DC: ip-10-0-0-58 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum
    Last updated: Mon Mar  5 16:27:55 2018
    Last change: Mon Mar  5 15:57:51 2018 by root via cibadmin on ip-10-0-0-46
    
    3 nodes configured
    4 resources configured
    
    Online: [ ip-10-0-0-46 ip-10-0-0-48 ip-10-0-0-58 ]
    
    Full list of resources:
    
     clusterfence   (stonith:fence_aws):    Started ip-10-0-0-46
     Resource Group: networking-group
         vip (ocf::heartbeat:IPaddr2): Started ip-10-0-0-48
         elastic (ocf::heartbeat:awseip): Started ip-10-0-0-48
    
    Daemon Status:
      corosync: active/disabled
      pacemaker: active/disabled
      pcsd: active/enabled

    In this example, newcluster is an active cluster where resources such as vip and elastic are part of the networking-group resource group.

  2. Launch an SSH session from your local workstation to the elastic IP address that you have already created:

    $ ssh -l ec2-user -i ~/.ssh/cluster-admin.pem 35.169.153.122
  3. Verify that the host you connected to over SSH is the same host that runs the elastic resource.
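To compare the two hosts, you can extract the node that runs the elastic resource from the pcs status output. A minimal parsing sketch, using the sample resource line from the output above:

```shell
# The node name is the last field of the resource line in "pcs status".
status_line='elastic (ocf::heartbeat:awseip): Started ip-10-0-0-48'  # sample
node=${status_line##* }
echo "$node"   # prints: ip-10-0-0-48
# On the SSH'd host, the output of "hostname" should match this value.
```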

You can configure an AWS Secondary Private IP Address (awsvip) resource to use a virtual IP address. With awsvip, high-availability (HA) clients on AWS can access a RHEL node through a private IP address that is reachable only within a single availability zone (AZ). You can complete the following procedure on any node in the cluster.

Prerequisites

Procedure

  1. Install the resource-agents package.

    # dnf install resource-agents
  2. Optional: View the options and default operations for awsvip:

    # pcs resource describe awsvip
  3. Create a Secondary Private IP address with an unused private IP address in the VPC CIDR block:

    [root@ip-10-0-0-48 ~]# pcs resource create privip awsvip secondary_private_ip=10.0.0.68 --group networking-group

    The secondary private IP address is added to the networking-group resource group.

  4. Create a virtual IP resource with the vip resource ID and the networking-group group name:

    [root@ip-10-0-0-48 ~]# pcs resource create vip IPaddr2 ip=10.0.0.68 --group networking-group

    This is a Virtual Private Cloud (VPC) IP address that maps from the failing node to the failover node, masking the failure within the subnet. Ensure that the virtual IP belongs to the same resource group as the Secondary Private IP address that you created in the previous step.
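Step 3 requires an unused private IP address inside the VPC CIDR block. The following is a hedged pure-shell sketch (the helper names are hypothetical) for checking that a candidate address falls within a given block before creating the resource:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=. a b c d
  read -r a b c d <<<"$1"
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

# Check whether an address falls inside a CIDR block (IPv4 only).
in_cidr() {  # usage: in_cidr 10.0.0.68 10.0.0.0/16
  local ip=$1 cidr=$2 net bits mask
  net=${cidr%/*}; bits=${cidr#*/}
  mask=$(( bits == 0 ? 0 : (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$ip") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]
}

in_cidr 10.0.0.68 10.0.0.0/16 && echo inside || echo outside   # prints: inside
```

Note that this checks only CIDR membership; whether the address is actually unused must still be confirmed against the VPC.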

Verification

  • Verify the cluster status to ensure resources are available:

    [root@ip-10-0-0-48 ~]# pcs status
    Cluster name: newcluster
    Stack: corosync
    Current DC: ip-10-0-0-46 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum
    Last updated: Fri Mar  2 22:34:24 2018
    Last change: Fri Mar  2 22:14:58 2018 by root via cibadmin on ip-10-0-0-46
    
    3 nodes configured
    3 resources configured
    
    Online: [ ip-10-0-0-46 ip-10-0-0-48 ip-10-0-0-58 ]
    
    Full list of resources:
    
    clusterfence    (stonith:fence_aws):    Started ip-10-0-0-46
     Resource Group: networking-group
         privip (ocf::heartbeat:awsvip): Started ip-10-0-0-48
         vip (ocf::heartbeat:IPaddr2): Started ip-10-0-0-58
    
    Daemon Status:
      corosync: active/disabled
      pacemaker: active/disabled
      pcsd: active/enabled

    In this example, newcluster is an active cluster where resources such as privip and vip are part of the networking-group resource group.
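As a quick scripted check, you can confirm from the pcs status output that every member of the resource group is in the Started state. A minimal sketch, using the sample resource lines above:

```shell
# Sample resource lines copied from the "pcs status" output above.
pcs_lines='privip (ocf::heartbeat:awsvip): Started ip-10-0-0-48
vip (ocf::heartbeat:IPaddr2): Started ip-10-0-0-58'

# Field 3 of each resource line is the state; fail if any is not "Started".
echo "$pcs_lines" | awk '$3 != "Started" {bad = 1} END {exit bad}' \
  && echo "all group members started"
```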

You can configure the Red Hat Enterprise Linux (RHEL) Overlay IP (aws-vpc-move-ip) resource agent to manage an overlay IP address. With aws-vpc-move-ip, high-availability (HA) clients can reach a RHEL node through an IP address that can move across multiple availability zones (AZs) within a single AWS region.

Prerequisites

  • You have an already configured cluster.
  • Your cluster nodes have access to the RHEL HA repositories. For more information, see Installing the High Availability packages and agents.
  • You have set up the AWS CLI. For instructions, see Installing AWSCLI2.
  • You have configured an Identity and Access Management (IAM) user on your cluster with the following permissions:

    • Modify routing tables
    • Create security groups
    • Create IAM policies and roles

Procedure

  1. Install the resource-agents package:

    # dnf install resource-agents
  2. Optional: View the options and default operations for aws-vpc-move-ip:

    # pcs resource describe aws-vpc-move-ip
  3. Set up an OverlayIPAgent IAM policy for the IAM user.

    1. In the AWS console, navigate to Services → IAM → Policies → Create OverlayIPAgent Policy.
    2. Input the following configuration, and change the <region>, <account_id>, and <cluster_route_table_id> values to correspond with your cluster.

      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Sid": "Stmt1424870324000",
                  "Effect": "Allow",
                  "Action":  "ec2:DescribeRouteTables",
                  "Resource": "*"
              },
              {
                  "Sid": "Stmt1424860166260",
                  "Action": [
                      "ec2:CreateRoute",
                      "ec2:ReplaceRoute"
                  ],
                  "Effect": "Allow",
                  "Resource": "arn:aws:ec2:_<region>_:_<account_id>_:route-table/_<cluster_route_table_id>_"
              }
          ]
      }
  4. In the AWS console, disable the Source/Destination Check function on all nodes in the cluster.

    To do this, right-click each node and select Networking → Change Source/Destination Checks. In the pop-up message that appears, click Yes, Disable.

  5. Add a route for the overlay IP address to the cluster route table. To do so, run the following command on one node in the cluster:

    # aws ec2 create-route --route-table-id <cluster_route_table_id> --destination-cidr-block <new_cidr_block_ip/net_mask> --instance-id <cluster_node_id>

    In the command, replace values as follows:

    • <cluster_route_table_id>: The route table ID of the existing cluster Virtual Private Cloud (VPC) route table.
    • <new_cidr_block_ip/net_mask>: A new IP address and netmask outside of the VPC classless inter-domain routing (CIDR) block. For example, if the VPC CIDR block is 172.31.0.0/16, the new IP address and netmask can be 192.168.0.15/32.
    • <cluster_node_id>: The instance ID of a node in the cluster.
  6. On one of the nodes in the cluster, create an aws-vpc-move-ip resource that uses a free IP address accessible to the client. The following example creates a resource named vpcip that uses the IP address 192.168.0.15.

    # pcs resource create vpcip aws-vpc-move-ip ip=192.168.0.15 interface=eth0 routing_table=<cluster_route_table_id>
  7. On all nodes in the cluster, edit the /etc/hosts file, and add a line with the IP address of the newly created resource. For example:

    192.168.0.15 vpcip
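The /etc/hosts edit in step 7 can be made idempotent so that rerunning a setup script on a node does not duplicate the line. A sketch under assumptions; the HOSTS_FILE variable and the helper name are hypothetical:

```shell
# HOSTS_FILE defaults to /etc/hosts; overridable (hypothetical knob).
HOSTS_FILE=${HOSTS_FILE:-/etc/hosts}

# Append "ip name" only if that exact line is not already present.
add_host_entry() {  # usage: add_host_entry 192.168.0.15 vpcip
  grep -qxF "$1 $2" "$HOSTS_FILE" 2>/dev/null ||
    printf '%s %s\n' "$1" "$2" >> "$HOSTS_FILE"
}
```

Run `add_host_entry 192.168.0.15 vpcip` on each node; repeated runs leave a single entry.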

Verification

  1. Test the failover ability of the new aws-vpc-move-ip resource:

    # pcs resource move vpcip
  2. If the failover succeeded, remove the automatically created constraint after the move of the vpcip resource:

    # pcs resource clear vpcip