Chapter 2. Configuring Red Hat High Availability clusters on Microsoft Azure
Red Hat supports High Availability (HA) on Red Hat Enterprise Linux (RHEL) 7.4 and later versions. This chapter includes information and procedures for configuring a Red Hat HA cluster on Microsoft Azure using virtual machine (VM) instances as cluster nodes. The procedures in this chapter assume you are creating a custom image for Azure. You have a number of options for obtaining the RHEL 7 images to use for your cluster. For more information on image options for Azure, see Red Hat Enterprise Linux Image Options on Azure.
This chapter includes prerequisite procedures for setting up your environment for Azure. Once you have set up your environment, you can create and configure Azure VM instances.
This chapter also includes procedures specific to the creation of HA clusters, which transform individual VM nodes into a cluster of HA nodes on Azure. These include procedures for installing the High Availability packages and agents on each cluster node, configuring fencing, and installing Azure network resource agents.
This chapter refers to the Microsoft Azure documentation in a number of places. For many procedures, see the referenced Azure documentation for more information.
Prerequisites
- You need to install the Azure command line interface (CLI). For more information, see Installing the Azure CLI. A quick check of the installation is shown after this list.
- Enable your subscriptions in the Red Hat Cloud Access program. The Red Hat Cloud Access program allows you to move your Red Hat subscriptions from physical or on-premises systems onto Azure with full support from Red Hat.
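Before continuing, you can optionally confirm that the CLI is installed and that you are logged in to the correct subscription. These two commands are a minimal check:

$ az --version
$ az account show --output table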
2.1. Creating resources in Azure
Complete the following procedure to create an availability set. You need this resource to complete subsequent tasks in this chapter.
Procedure
Create an availability set. All cluster nodes must be in the same availability set.
$ az vm availability-set create --name _MyAvailabilitySet_ --resource-group _MyResourceGroup_
Example:
[clouduser@localhost]$ az vm availability-set create --name rhelha-avset1 --resource-group azrhelclirsgrp
{
  "additionalProperties": {},
  "id": "/subscriptions/.../resourceGroups/azrhelclirsgrp/providers/Microsoft.Compute/availabilitySets/rhelha-avset1",
  "location": "southcentralus",
  "name": "rhelha-avset1",
  "platformFaultDomainCount": 2,
  "platformUpdateDomainCount": 5,
...omitted
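To confirm that the availability set was created, you can optionally query it. This check is a suggestion rather than part of the official procedure; the names match the example above:

$ az vm availability-set show --name rhelha-avset1 --resource-group azrhelclirsgrp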
2.2. Creating an Azure Active Directory Application
Complete the following procedures to create an Azure Active Directory (AD) Application. The Azure AD Application authorizes and automates access for HA operations for all nodes in the cluster.
Prerequisites
You need to install the Azure Command Line Interface (CLI).
Procedure
- Ensure you are an Administrator or Owner for the Microsoft Azure subscription. You need this authorization to create an Azure AD application.
Log in to your Azure account.
$ az login
Enter the following command to create the Azure AD Application. To use your own password, add the --password option to the command. Ensure that you create a strong password.

$ az ad sp create-for-rbac --name _FencingApplicationName_ --role owner --scopes "/subscriptions/_SubscriptionID_/resourceGroups/_MyResourceGroup_"
Example:
[clouduser@localhost ~] $ az ad sp create-for-rbac --name FencingApp --role owner --scopes "/subscriptions/2586c64b-xxxxxx-xxxxxxx-xxxxxxx/resourceGroups/azrhelclirsgrp"
Retrying role assignment creation: 1/36
Retrying role assignment creation: 2/36
Retrying role assignment creation: 3/36
{
  "appId": "1a3dfe06-df55-42ad-937b-326d1c211739",
  "displayName": "FencingApp",
  "name": "http://FencingApp",
  "password": "43a603f0-64bb-482e-800d-402efe5f3d47",
  "tenant": "77ecefb6-xxxxxxxxxx-xxxxxxx-757a69cb9485"
}
Save the following information before proceeding. You need this information to set up the fencing agent. One way to keep these values at hand is shown after this list.
- Azure AD Application ID
- Azure AD Application Password
- Tenant ID
- Microsoft Azure Subscription ID
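As a convenience, you can capture these values as shell variables so they are easy to paste into later commands. This is an optional sketch; the variable names are arbitrary, and the values shown are the ones from the example output above:

$ export APP_ID=1a3dfe06-df55-42ad-937b-326d1c211739
$ export APP_PASSWORD=43a603f0-64bb-482e-800d-402efe5f3d47
$ export TENANT_ID=77ecefb6-xxxxxxxxxx-xxxxxxx-757a69cb9485
$ export SUBSCRIPTION_ID=2586c64b-xxxxxx-xxxxxxx-xxxxxxx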
2.3. Installing the Red Hat HA packages and agents
Complete the following steps on all nodes.
Procedure
Register the VM with Red Hat.
$ sudo -i
# subscription-manager register --auto-attach
Disable all repositories.
# subscription-manager repos --disable=*
Enable the RHEL 7 Server and RHEL 7 Server HA repositories.
# subscription-manager repos --enable=rhel-7-server-rpms
# subscription-manager repos --enable=rhel-ha-for-rhel-7-server-rpms
Update all packages.
# yum update -y
Reboot if the kernel is updated.
# reboot
Install the pcs, pacemaker, fence-agents-azure-arm, resource-agents, and nmap-ncat packages.

# yum install -y pcs pacemaker fence-agents-azure-arm resource-agents nmap-ncat
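To confirm that all five packages installed successfully, a simple optional check is:

# rpm -q pcs pacemaker fence-agents-azure-arm resource-agents nmap-ncat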
2.4. Configuring HA services
Complete the following steps on all nodes.
Procedure
The user hacluster was created during the pcs and pacemaker installation in the previous section. Create a password for hacluster on all cluster nodes. Use the same password for all nodes.

# passwd hacluster
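If you are scripting the node setup, one possible non-interactive alternative on RHEL is passwd --stdin; replace the placeholder with your own strong password:

# echo _password_ | passwd --stdin hacluster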
Add the high-availability service to the RHEL firewall if firewalld.service is enabled.

# firewall-cmd --permanent --add-service=high-availability
# firewall-cmd --reload
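To confirm that the service was added, you can optionally list the enabled firewall services:

# firewall-cmd --list-services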
Start the pcsd service and enable it to start on boot.

# systemctl enable pcsd.service --now
Verification step
Ensure the pcsd service is running.

# systemctl is-active pcsd.service
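The command prints active when the service is running. Example:

# systemctl is-active pcsd.service
active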
2.5. Creating a cluster
Complete the following steps to create the cluster of nodes.
Procedure
On one of the nodes, enter the following command to authenticate the pcs user hacluster. Specify the name of each node in the cluster.

# pcs cluster auth _hostname1_ _hostname2_ _hostname3_
Example:
[root@node01 clouduser]# pcs cluster auth node01 node02 node03
Username: hacluster
Password:
node01: Authorized
node02: Authorized
node03: Authorized
Create the cluster.
# pcs cluster setup --name _cluster-name_ _hostname1_ _hostname2_ _hostname3_
Example:
[root@node01 clouduser]# pcs cluster setup --name newcluster node01 node02 node03
...omitted
Synchronizing pcsd certificates on nodes node01, node02, node03...
node02: Success
node03: Success
node01: Success
Restarting pcsd on the nodes in order to reload the certificates...
node02: Success
node03: Success
node01: Success
Verification steps
Enable the cluster.
# pcs cluster enable --all
Start the cluster.
# pcs cluster start --all
Example:
[root@node01 clouduser]# pcs cluster enable --all
node02: Cluster Enabled
node03: Cluster Enabled
node01: Cluster Enabled

[root@node01 clouduser]# pcs cluster start --all
node02: Starting Cluster...
node03: Starting Cluster...
node01: Starting Cluster...
2.6. Creating a fence device
Complete the following steps to configure fencing from any node in the cluster.
Procedure
Identify the available instances that can be fenced.
# fence_azure_arm -l [appid] -p [authkey] --resourceGroup=[name] --subscriptionId=[name] --tenantId=[name] -o list
Example:
[root@node1 ~]# fence_azure_arm -l XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX -p XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX --resourceGroup=hacluster-rg --subscriptionId=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX --tenantId=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX -o list
node01-vm,
node02-vm,
node03-vm,
Create a fence device. Use the pcmk_host_map option to map the RHEL host name to the Azure instance name.

# pcs stonith create _clusterfence_ fence_azure_arm login=_AD-Application-ID_ passwd=_AD-passwd_ pcmk_host_map="_pcmk-host-map_" resourcegroup=_myresourcegroup_ tenantid=_tenantid_ subscriptionid=_subscriptionid_
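For illustration, a filled-in version of this command might look like the following. The resource name clusterfence and the host-to-VM-name mappings are examples; substitute the values you saved when creating the Azure AD Application:

# pcs stonith create clusterfence fence_azure_arm login=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX passwd=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX pcmk_host_map="node01:node01-vm;node02:node02-vm;node03:node03-vm" resourcegroup=hacluster-rg tenantid=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX subscriptionid=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX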
Verification steps
Test the fencing agent for one of the other nodes.
# pcs stonith fence _azurenodename_
Example:
[root@node01 ~]# pcs stonith fence fenceazure
Resource: fenceazure (class=stonith type=fence_azure_arm)
 Attributes: login=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX passwd=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX pcmk_host_map=nodea:nodea-vm;nodeb:nodeb-vm;nodec:nodec-vm pcmk_reboot_retries=4 pcmk_reboot_timeout=480 power_timeout=240 resourceGroup=rg subscriptionId=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX tenantId=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
 Operations: monitor interval=60s (fenceazure-monitor-interval-60s)
[root@node01 ~]# pcs stonith
fenceazure (stonith:fence_azure_arm): Started nodea
Check the status to verify the node started.
# watch pcs status
Example:
[root@node01 ~]# watch pcs status
fenceazure (stonith:fence_azure_arm): Started nodea
2.7. Creating an Azure internal load balancer
The Azure internal load balancer removes cluster nodes that do not answer health probe requests from the load-balancing rotation.
Perform the following procedure to create an Azure internal load balancer. Each step references a specific Microsoft procedure and includes the settings for customizing the load balancer for HA.
Prerequisites
Access to the Azure control panel
Procedure
- Create a basic load balancer. Select Internal load balancer, the Basic SKU, and Dynamic for the type of IP address assignment.
- Create a backend address pool. Associate the backend pool to the availability set created while creating Azure resources in HA. Do not set any target network IP configurations.
- Create a health probe. For the health probe, select TCP and enter port 61000. You can use a TCP port number that does not interfere with another service. For certain HA applications, for example SAP HANA and SQL Server, you might need to work with Microsoft to identify the correct port to use.
- Create a load balancer rule. To create the load balancing rule, use the default values that are prepopulated. Ensure that Floating IP (direct server return) is set to Enabled. A CLI sketch of these steps follows this list.
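The steps above use the Azure portal. If you prefer the Azure CLI, the following sketch shows roughly equivalent commands. The load balancer, pool, probe, and rule names, the virtual network and subnet, and the front-end and back-end ports are hypothetical placeholders; adjust them to your environment:

$ az network lb create --resource-group _MyResourceGroup_ --name hacluster-lb --sku Basic --vnet-name _MyVnet_ --subnet _MySubnet_ --frontend-ip-name hacluster-fe --backend-pool-name hacluster-bepool
$ az network lb probe create --resource-group _MyResourceGroup_ --lb-name hacluster-lb --name hacluster-hp --protocol tcp --port 61000
$ az network lb rule create --resource-group _MyResourceGroup_ --lb-name hacluster-lb --name hacluster-rule --protocol tcp --frontend-port 80 --backend-port 80 --frontend-ip-name hacluster-fe --backend-pool-name hacluster-bepool --probe-name hacluster-hp --floating-ip true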
2.8. Configuring the Azure load balancer resource agent
After you have created the health probe, you must configure the load balancer resource agent. This resource agent runs a service that answers health probe requests from the Azure load balancer and removes cluster nodes that do not answer requests.
Procedure
Enter the following command to view the azure-lb resource agent description. This shows the options and default operations for this agent.

# pcs resource describe azure-lb
Create an IPaddr2 resource for managing the floating IP on the node.

# pcs resource create _resource-id_ IPaddr2 ip=_virtual/floating-ip_ cidr_netmask=_virtual/floating-mask_ --group _group-id_ nic=_network-interface_ op monitor interval=30s
Example:
[root@node01 ~]# pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=172.16.66.99 cidr_netmask=24 --group CloudIP nic=eth0 op monitor interval=30s
Configure the load balancer resource agent.

# pcs resource create _resource-loadbalancer-name_ azure-lb port=_port-number_ --group _cluster-resources-group_
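For illustration, using the health probe port 61000 from the previous section and the CloudIP group from the IPaddr2 example, the command might look like this; the resource name AzureLB is an example:

# pcs resource create AzureLB azure-lb port=61000 --group CloudIP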
Verification step
Run the pcs status command to see the results.

[root@node01 clouduser]# pcs status
Example:
[root@node01 ~]# pcs status
Cluster name: hacluster

WARNINGS:
No stonith devices and stonith-enabled is not false

Stack: corosync
Current DC: nodeb (version 1.1.22-1.el7-63d2d79005) - partition with quorum
Last updated: Wed Sep  9 16:47:07 2020
Last change: Wed Sep  9 16:44:32 2020 by hacluster via crmd on nodeb

3 nodes configured
0 resource instances configured

Online: [ node01 node02 node03 ]

No resources

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled