Chapter 4. Configuring a Red Hat High Availability cluster on Microsoft Azure
To create a cluster where RHEL nodes automatically redistribute their workloads if a node failure occurs, use the Red Hat High Availability Add-On. Such high availability (HA) clusters can also be hosted on public cloud platforms, including Microsoft Azure. Creating RHEL HA clusters on Azure is similar to creating HA clusters in non-cloud environments, with certain specifics.
To configure a Red Hat HA cluster on Azure using Azure virtual machine (VM) instances as cluster nodes, see the following sections. The procedures in these sections assume that you are creating a custom image for Azure. You have a number of options for obtaining the RHEL 8 images you use for your cluster. See Red Hat Enterprise Linux Image Options on Azure for information on image options for Azure.
The following sections provide:
- Prerequisite procedures for setting up your environment for Azure. After you set up your environment, you can create and configure Azure VM instances.
- Procedures specific to the creation of HA clusters, which transform individual nodes into a cluster of HA nodes on Azure. These include procedures for installing the High Availability packages and agents on each cluster node, configuring fencing, and installing Azure network resource agents.
Prerequisites
- Sign up for a Red Hat Customer Portal account.
- Sign up for a Microsoft Azure account with administrator privileges.
- Install the Azure command-line interface (CLI). For more information, see Installing the Azure CLI.
4.1. The benefits of using high-availability clusters on public cloud platforms
A high-availability (HA) cluster is a set of computers (called nodes) that are linked together to run a specific workload. The purpose of HA clusters is to provide redundancy in case of a hardware or software failure. If a node in the HA cluster fails, the Pacemaker cluster resource manager distributes the workload to other nodes and no noticeable downtime occurs in the services that are running on the cluster.
You can also run HA clusters on public cloud platforms. In this case, you would use virtual machine (VM) instances in the cloud as the individual cluster nodes. Using HA clusters on a public cloud platform has the following benefits:
- Improved availability: In case of a VM failure, the workload is quickly redistributed to other nodes, so running services are not disrupted.
- Scalability: Additional nodes can be started when demand is high and stopped when demand is low.
- Cost-effectiveness: With the pay-as-you-go pricing, you pay only for nodes that are running.
- Simplified management: Some public cloud platforms offer management interfaces to make configuring HA clusters easier.
To enable HA on your Red Hat Enterprise Linux (RHEL) systems, Red Hat offers a High Availability Add-On. The High Availability Add-On provides all necessary components for creating HA clusters on RHEL systems. The components include high availability service management and cluster administration tools.
4.2. Creating resources in Azure
Complete the following procedure to create a region, resource group, storage account, virtual network, and availability set. You need these resources to set up a cluster on Microsoft Azure.
Procedure
1. Authenticate your system with Azure and log in.

   ```
   $ az login
   ```

   Note: If a browser is available in your environment, the CLI opens your browser to the Azure sign-in page.

2. Create a resource group in an Azure region.

   ```
   $ az group create --name resource-group --location azure-region
   ```

3. Create a storage account.

   ```
   $ az storage account create -l azure-region -n storage-account-name -g resource-group --sku sku_type --kind StorageV2
   ```

4. Get the storage account connection string.

   ```
   $ az storage account show-connection-string -n storage-account-name -g resource-group
   ```

   Example:

   ```
   $ az storage account show-connection-string -n azrhelclistact -g azrhelclirsgrp
   {
     "connectionString": "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=azrhelclistact;AccountKey=NreGk...=="
   }
   ```

5. Export the connection string by copying the connection string and pasting it into the following command. This string connects your system to the storage account.

   ```
   $ export AZURE_STORAGE_CONNECTION_STRING="storage-connection-string"
   ```

   Example:

   ```
   $ export AZURE_STORAGE_CONNECTION_STRING="DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=azrhelclistact;AccountKey=NreGk...=="
   ```

6. Create the storage container.

   ```
   $ az storage container create -n container-name
   ```

   Example:

   ```
   $ az storage container create -n azrhelclistcont
   {
     "created": true
   }
   ```

7. Create a virtual network. All cluster nodes must be in the same virtual network.

   ```
   $ az network vnet create -g resource-group --name vnet-name --subnet-name subnet-name
   ```

8. Create an availability set. All cluster nodes must be in the same availability set.

   ```
   $ az vm availability-set create --name MyAvailabilitySet --resource-group MyResourceGroup
   ```
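For repeated setups, the commands above can be wrapped in a small helper. The sketch below is a hypothetical convenience wrapper, not part of the official procedure; the `Standard_LRS` SKU and the derived `<vnet>-subnet` name are illustrative assumptions.

```shell
#!/bin/bash
# Hypothetical wrapper for the resource-creation steps above. All
# arguments are example values; Standard_LRS and the "<vnet>-subnet"
# naming are illustrative assumptions, not requirements.
create_azure_cluster_resources() {
    local rg="$1" region="$2" storage="$3" container="$4" vnet="$5" avset="$6"
    az group create --name "$rg" --location "$region"
    az storage account create -l "$region" -n "$storage" -g "$rg" \
        --sku Standard_LRS --kind StorageV2
    az storage container create -n "$container"
    az network vnet create -g "$rg" --name "$vnet" --subnet-name "${vnet}-subnet"
    az vm availability-set create --name "$avset" --resource-group "$rg"
}
```

Export `AZURE_STORAGE_CONNECTION_STRING` before calling the function so that the container-creation step can authenticate against the storage account.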
4.3. Required system packages for High Availability
The procedure assumes you are creating a VM image for Azure HA that uses Red Hat Enterprise Linux. To successfully complete the procedure, the following packages must be installed.
| Package | Repository | Description |
|---|---|---|
| libvirt | rhel-8-for-x86_64-appstream-rpms | Open source API, daemon, and management tool for managing platform virtualization |
| virt-install | rhel-8-for-x86_64-appstream-rpms | A command-line utility for building VMs |
| libguestfs | rhel-8-for-x86_64-appstream-rpms | A library for accessing and modifying VM file systems |
| libguestfs-tools | rhel-8-for-x86_64-appstream-rpms | System administration tools for VMs; includes the guestfish utility |
4.4. Azure VM configuration settings
Azure VMs must have the following configuration settings. Some of these settings are enabled during the initial VM creation. Other settings are set when provisioning the VM image for Azure. Keep these settings in mind as you move through the procedures. Refer to them as necessary.
| Setting | Recommendation |
|---|---|
| ssh | ssh must be enabled to provide remote access to your Azure VMs. |
| dhcp | The primary virtual adapter should be configured for dhcp (IPv4 only). |
| Swap Space | Do not create a dedicated swap file or swap partition. You can configure swap space with the Windows Azure Linux Agent (WALinuxAgent). |
| NIC | Choose virtio for the primary virtual network adapter. |
| encryption | For custom images, use Network Bound Disk Encryption (NBDE) for full disk encryption on Azure. |
4.5. Installing Hyper-V device drivers
Microsoft provides network and storage device drivers as part of their Linux Integration Services (LIS) for Hyper-V package. You may need to install Hyper-V device drivers on the VM image prior to provisioning it as an Azure virtual machine (VM). Use the lsinitrd | grep hv command to verify that the drivers are installed.
Procedure
1. Run the following command to determine whether the required Hyper-V device drivers are installed.

   ```
   # lsinitrd | grep hv
   ```

   In the example below, all required drivers are installed.

   ```
   # lsinitrd | grep hv
   drwxr-xr-x   2 root  root      0 Aug 12 14:21 usr/lib/modules/3.10.0-932.el8.x86_64/kernel/drivers/hv
   -rw-r--r--   1 root  root  31272 Aug 11 08:45 usr/lib/modules/3.10.0-932.el8.x86_64/kernel/drivers/hv/hv_vmbus.ko.xz
   -rw-r--r--   1 root  root  25132 Aug 11 08:46 usr/lib/modules/3.10.0-932.el8.x86_64/kernel/drivers/net/hyperv/hv_netvsc.ko.xz
   -rw-r--r--   1 root  root   9796 Aug 11 08:45 usr/lib/modules/3.10.0-932.el8.x86_64/kernel/drivers/scsi/hv_storvsc.ko.xz
   ```

   If all the drivers are not installed, complete the remaining steps.

   Note: An `hv_vmbus` driver may already exist in the environment. Even if this driver is present, complete the following steps.

2. Create a file named `hv.conf` in `/etc/dracut.conf.d`.

3. Add the following driver parameters to the `hv.conf` file.

   ```
   add_drivers+=" hv_vmbus "
   add_drivers+=" hv_netvsc "
   add_drivers+=" hv_storvsc "
   add_drivers+=" nvme "
   ```

   Note: Note the spaces before and after the quotes, for example, `add_drivers+=" hv_vmbus "`. This ensures that unique drivers are loaded in the event that other Hyper-V drivers already exist in the environment.

4. Regenerate the `initramfs` image.

   ```
   # dracut -f -v --regenerate-all
   ```
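Steps 2-4 above can be combined into a single scripted pass. The helper below is a sketch; the target directory is parameterized only so the snippet can be exercised safely outside `/etc` (on a real image, pass `/etc/dracut.conf.d`).

```shell
#!/bin/bash
# Hypothetical one-shot version of steps 2-4 above. The dracut.conf.d
# directory is a parameter so the snippet can be tried outside /etc.
write_hv_conf() {
    local confdir="$1"
    mkdir -p "$confdir"
    # Keep the spaces inside the quotes, as the Note above explains.
    cat > "$confdir/hv.conf" <<'EOF'
add_drivers+=" hv_vmbus "
add_drivers+=" hv_netvsc "
add_drivers+=" hv_storvsc "
add_drivers+=" nvme "
EOF
}

# Usage on a real image:
#   write_hv_conf /etc/dracut.conf.d
#   dracut -f -v --regenerate-all
```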
Verification
- Reboot the machine.
- Run the `lsinitrd | grep hv` command to verify that the drivers are installed.
4.6. Making configuration changes required for a Microsoft Azure deployment
Before you deploy your custom base image to Azure, you must perform additional configuration changes to ensure that the virtual machine (VM) can properly operate in Azure.
Procedure
1. Log in to the VM.

2. Register the VM, and enable the Red Hat Enterprise Linux 8 repository.

   ```
   # subscription-manager register
   Installed Product Current Status:
   Product Name: Red Hat Enterprise Linux for x86_64
   Status: Subscribed
   ```

3. Ensure that the `cloud-init` and `hyperv-daemons` packages are installed.

   ```
   # yum install cloud-init hyperv-daemons -y
   ```

4. Create `cloud-init` configuration files that are needed for integration with Azure services:

   1. To enable logging to the Hyper-V Data Exchange Service (KVP), create the `/etc/cloud/cloud.cfg.d/10-azure-kvp.cfg` configuration file and add the following lines to that file.

      ```
      reporting:
          logging:
              type: log
          telemetry:
              type: hyperv
      ```

   2. To add Azure as a datasource, create the `/etc/cloud/cloud.cfg.d/91-azure_datasource.cfg` configuration file, and add the following lines to that file.

      ```
      datasource_list: [ Azure ]
      datasource:
          Azure:
              apply_network_config: False
      ```

5. To ensure that specific kernel modules are blocked from loading automatically, edit or create the `/etc/modprobe.d/blocklist.conf` file and add a `blacklist <module-name>` line for each module that you want to block.

6. Modify `udev` network device rules:

   1. Remove the following persistent network device rules, if present.

      ```
      # rm -f /etc/udev/rules.d/70-persistent-net.rules
      # rm -f /etc/udev/rules.d/75-persistent-net-generator.rules
      # rm -f /etc/udev/rules.d/80-net-name-slot-rules
      ```

   2. To ensure that Accelerated Networking on Azure works as intended, create a new network device rule `/etc/udev/rules.d/68-azure-sriov-nm-unmanaged.rules` and add the following line to it.

      ```
      SUBSYSTEM=="net", DRIVERS=="hv_pci", ACTION=="add", ENV{NM_UNMANAGED}="1"
      ```

7. Set the `sshd` service to start automatically.

   ```
   # systemctl enable sshd
   # systemctl is-enabled sshd
   ```

8. Modify kernel boot parameters:

   1. Open the `/etc/default/grub` file, and ensure that the `GRUB_TIMEOUT` line has the following value.

      ```
      GRUB_TIMEOUT=10
      ```

   2. Remove the following options from the end of the `GRUB_CMDLINE_LINUX` line, if present.

      ```
      rhgb quiet
      ```

   3. Ensure that the `/etc/default/grub` file contains the following lines with all the specified options.

      ```
      GRUB_CMDLINE_LINUX="loglevel=3 crashkernel=auto console=tty1 console=ttyS0 earlyprintk=ttyS0 rootdelay=300"
      GRUB_TIMEOUT_STYLE=countdown
      GRUB_TERMINAL="serial console"
      GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"
      ```

      Note: If you do not plan to run your workloads on HDDs, add `elevator=none` to the end of the `GRUB_CMDLINE_LINUX` line. This sets the I/O scheduler to `none`, which improves I/O performance when running workloads on SSDs.

   4. Regenerate the `grub.cfg` file.

      On a BIOS-based machine:

      ```
      # grub2-mkconfig -o /boot/grub2/grub.cfg
      ```

      On a UEFI-based machine:

      ```
      # grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
      ```

      If your system uses a non-default location for `grub.cfg`, adjust the command accordingly.

9. Configure the Windows Azure Linux Agent (`WALinuxAgent`):

   1. Install and enable the `WALinuxAgent` package.

      ```
      # yum install WALinuxAgent -y
      # systemctl enable waagent
      ```

   2. To ensure that a swap partition is not used in provisioned VMs, edit the following lines in the `/etc/waagent.conf` file.

      ```
      Provisioning.DeleteRootPassword=y
      ResourceDisk.Format=n
      ResourceDisk.EnableSwap=n
      ```

10. Prepare the VM for Azure provisioning:

    1. Unregister the VM from Red Hat Subscription Manager.

       ```
       # subscription-manager unregister
       ```

    2. Clean up the existing provisioning details.

       ```
       # waagent -force -deprovision
       ```

       Note: This command generates warnings, which are expected because Azure handles the provisioning of VMs automatically.

    3. Clean the shell history and shut down the VM.

       ```
       # export HISTSIZE=0
       # poweroff
       ```
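The GRUB edits in step 8 can be automated with `sed`. The helper below is a hypothetical sketch that operates on a file path you pass in (normally `/etc/default/grub`); it applies only the timeout and `rhgb quiet` changes described above.

```shell
#!/bin/bash
# Hypothetical helper for the GRUB edits in step 8. Pass the path to the
# GRUB defaults file (normally /etc/default/grub).
update_grub_defaults() {
    local f="$1"
    # Ensure GRUB_TIMEOUT=10, replacing an existing line or appending one.
    if grep -q '^GRUB_TIMEOUT=' "$f"; then
        sed -i 's/^GRUB_TIMEOUT=.*/GRUB_TIMEOUT=10/' "$f"
    else
        echo 'GRUB_TIMEOUT=10' >> "$f"
    fi
    # Remove "rhgb" and "quiet" from the kernel command line.
    sed -i '/^GRUB_CMDLINE_LINUX=/{s/ *\brhgb\b//g; s/ *\bquiet\b//g}' "$f"
}
```

After running the helper, review the file manually to confirm it contains the full set of options listed in step 8.3 before regenerating `grub.cfg`.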
4.7. Creating an Azure Active Directory application
Complete the following procedure to create an Azure Active Directory (AD) application. The Azure AD application authorizes and automates access for HA operations for all nodes in the cluster.
Prerequisites
- The Azure Command Line Interface (CLI) is installed on your system.
- You are an Administrator or Owner for the Microsoft Azure subscription. You need this authorization to create an Azure AD application.
Procedure
1. On any node in the HA cluster, log in to your Azure account.

   ```
   $ az login
   ```

2. Create a `json` configuration file that defines a custom role for the Azure fence agent, and replace `<subscription-id>` with your subscription ID.

3. Define the custom role for the Azure fence agent. Use the `json` file created in the previous step to do this.

4. In the Azure web console interface, select Virtual Machine and click Identity in the left-side menu.

5. Select On, click Save, and then click Yes to confirm.

6. Click Azure role assignments and then Add role assignment.

7. Select the Scope required for the role, for example, Resource Group.

8. Select the required Resource Group.

9. Optional: Change the Subscription if necessary.

10. Select the Linux Fence Agent Role.

11. Click Save.
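The exact JSON used in steps 2-3 is not reproduced above; the template below is a hedged reconstruction. The role name matches the Linux Fence Agent Role selected later in this procedure, but the action list and file name are assumptions that you should verify against your security requirements.

```shell
#!/bin/bash
# Hypothetical template for the custom-role JSON from step 2. The action
# list is an assumption: it grants only read plus power-off/start, which
# is what the fence agent needs in order to fence a VM.
write_fence_role() {
    local file="$1" subscription_id="$2"
    cat > "$file" <<EOF
{
  "Name": "Linux Fence Agent Role",
  "Description": "Allows to power-off and start virtual machines",
  "Actions": [
    "Microsoft.Compute/*/read",
    "Microsoft.Compute/virtualMachines/powerOff/action",
    "Microsoft.Compute/virtualMachines/start/action"
  ],
  "AssignableScopes": [ "/subscriptions/$subscription_id" ]
}
EOF
}

# Step 3 then registers the role, for example:
#   write_fence_role azure-fence-role.json <subscription-id>
#   az role definition create --role-definition azure-fence-role.json
```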
Verification
Display the nodes visible to Azure AD.

```
# fence_azure_arm --msi -o list
node1,
node2,
[...]
```

If this command outputs all nodes on your cluster, the AD application has been configured successfully.
4.8. Converting the image to a fixed VHD format
All Microsoft Azure VM images must be in a fixed VHD format. The image must be aligned on a 1 MB boundary before it is converted to VHD. To convert the image from qcow2 to a fixed VHD format and align the image, see the following procedure. Once you have converted the image, you can upload it to Azure.
Procedure
1. Convert the image from `qcow2` to `raw` format.

   ```
   $ qemu-img convert -f qcow2 -O raw <image-name>.qcow2 <image-name>.raw
   ```

2. Create a shell script that checks whether the image size is aligned on a 1 MB boundary and, if it is not, displays the size rounded up to the next 1 MB boundary.

3. Run the script. This example uses the name `align.sh`.

   ```
   $ sh align.sh <image-xxx>.raw
   ```

   - If the message "Your image is already aligned. You do not need to resize." displays, proceed to the following step.
   - If a value displays, your image is not aligned.

4. If your image is aligned, use the following command to convert the file to a fixed `VHD` format. The sample uses qemu-img version 2.12.0.

   ```
   $ qemu-img convert -f raw -o subformat=fixed,force_size -O vpc <image-xxx>.raw <image-xxx>.vhd
   ```

   Once converted, the `VHD` file is ready to upload to Azure.

5. If the `raw` image is not aligned, complete the following steps to align it.

   1. Resize the `raw` file by using the rounded value displayed when you ran the verification script.

      ```
      $ qemu-img resize -f raw <image-xxx>.raw <rounded-value>
      ```

   2. Convert the `raw` image file to a `VHD` format. The sample uses qemu-img version 2.12.0.

      ```
      $ qemu-img convert -f raw -o subformat=fixed,force_size -O vpc <image-xxx>.raw <image-xxx>.vhd
      ```

      Once converted, the `VHD` file is ready to upload to Azure.
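The alignment script that step 2 asks for is not reproduced above. A minimal sketch follows; it assumes a raw image, whose virtual size equals its file size, so plain `stat` suffices (the original script may query `qemu-img info` instead).

```shell
#!/bin/bash
# Minimal sketch of the alignment check from step 2. For a raw image the
# virtual size equals the file size, so "stat" is used here instead of
# "qemu-img info"; treat this as an illustrative reconstruction.
check_alignment() {
    local MB=$((1024 * 1024))
    local size
    size=$(stat --format '%s' "$1")
    if [ $((size % MB)) -eq 0 ]; then
        echo "Your image is already aligned. You do not need to resize."
        return 0
    fi
    # Print the size rounded up to the next 1 MB boundary, for qemu-img resize.
    echo "rounded size = $(((size / MB + 1) * MB))"
}
```

Running `check_alignment <image-xxx>.raw` prints either the "already aligned" message or the rounded value to pass to `qemu-img resize`.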
4.9. Uploading and creating an Azure image
Complete the following steps to upload the VHD file to your container and create an Azure custom image.
The exported storage connection string does not persist after a system reboot. If any of the commands in the following steps fail, export the connection string again.
Procedure
1. Upload the `VHD` file to the storage container. It may take several minutes. To get a list of storage containers, enter the `az storage container list` command.

   ```
   $ az storage blob upload \
       --account-name <storage-account-name> --container-name <container-name> \
       --type page --file <path-to-vhd> --name <image-name>.vhd
   ```

   Example:

   ```
   $ az storage blob upload \
       --account-name azrhelclistact --container-name azrhelclistcont \
       --type page --file rhel-image-8.vhd --name rhel-image-8.vhd
   Percent complete: %100.0
   ```

2. Get the URL for the uploaded `VHD` file to use in the following step.

   ```
   $ az storage blob url -c <container-name> -n <image-name>.vhd
   ```

   Example:

   ```
   $ az storage blob url -c azrhelclistcont -n rhel-image-8.vhd
   "https://azrhelclistact.blob.core.windows.net/azrhelclistcont/rhel-image-8.vhd"
   ```

3. Create the Azure custom image.

   ```
   $ az image create -n <image-name> -g <resource-group> -l <azure-region> --source <URL> --os-type linux
   ```

   Note: The default hypervisor generation of the VM is V1. You can optionally specify a V2 hypervisor generation by including the option `--hyper-v-generation V2`. Generation 2 VMs use a UEFI-based boot architecture. See Support for generation 2 VMs on Azure for information about generation 2 VMs.

   The command may return the error "Only blobs formatted as VHDs can be imported." This error may mean that the image was not aligned to the nearest 1 MB boundary before it was converted to `VHD`.

   Example:

   ```
   $ az image create -n rhel8 -g azrhelclirsgrp2 -l southcentralus --source https://azrhelclistact.blob.core.windows.net/azrhelclistcont/rhel-image-8.vhd --os-type linux
   ```
4.10. Installing Red Hat HA packages and agents
Complete the following steps on all nodes.
Procedure
1. Launch an SSH terminal session and connect to the VM by using the administrator name and public IP address.

   ```
   $ ssh administrator@PublicIP
   ```

   To get the public IP address for an Azure VM, open the VM properties in the Azure Portal or enter the following Azure CLI command.

   ```
   $ az vm list -g <resource_group> -d --output table
   ```

   Example:

   ```
   [clouduser@localhost ~] $ az vm list -g azrhelclirsgrp -d --output table
   Name    ResourceGroup   PowerState  PublicIps       Location
   ------  --------------  ----------  --------------  --------------
   node01  azrhelclirsgrp  VM running  192.98.152.251  southcentralus
   ```

2. Register the VM with Red Hat.

   ```
   $ sudo -i
   # subscription-manager register
   ```

3. Disable all repositories.

   ```
   # subscription-manager repos --disable=*
   ```

4. Enable the RHEL 8 Server HA repositories.

   ```
   # subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms
   ```

5. Update all packages.

   ```
   # yum update -y
   ```

6. Install the Red Hat High Availability Add-On software packages, along with the Azure fencing agent, from the High Availability channel.

   ```
   # yum install pcs pacemaker fence-agents-azure-arm
   ```

7. The user `hacluster` was created during the `pcs` and `pacemaker` installation in the previous step. Create a password for `hacluster` on all cluster nodes. Use the same password for all nodes.

   ```
   # passwd hacluster
   ```

8. Add the `high-availability` service to the RHEL firewall if `firewalld.service` is installed.

   ```
   # firewall-cmd --permanent --add-service=high-availability
   # firewall-cmd --reload
   ```

9. Start the `pcsd` service and enable it to start on boot.

   ```
   # systemctl start pcsd.service
   # systemctl enable pcsd.service
   Created symlink from /etc/systemd/system/multi-user.target.wants/pcsd.service to /usr/lib/systemd/system/pcsd.service.
   ```

Verification

- Ensure the `pcsd` service is running.

  ```
  # systemctl status pcsd.service
  ```
4.11. Creating a cluster
Complete the following steps to create the cluster of nodes.
Procedure
1. On one of the nodes, enter the following command to authenticate the `pcs` user `hacluster`. In the command, specify the name of each node in the cluster.

   ```
   # pcs host auth <hostname1> <hostname2> <hostname3>
   ```

2. Create the cluster.

   ```
   # pcs cluster setup <cluster_name> <hostname1> <hostname2> <hostname3>
   ```

Verification

1. Enable the cluster.

   ```
   [root@node01 clouduser]# pcs cluster enable --all
   node02: Cluster Enabled
   node03: Cluster Enabled
   node01: Cluster Enabled
   ```

2. Start the cluster.

   ```
   [root@node01 clouduser]# pcs cluster start --all
   node02: Starting Cluster...
   node03: Starting Cluster...
   node01: Starting Cluster...
   ```
4.12. Fencing overview
If communication with a single node in the cluster fails, then other nodes in the cluster must be able to restrict or release access to resources that the failed cluster node may have access to. This cannot be accomplished by contacting the cluster node itself, as the node may not be responsive. Instead, you must provide an external method, called fencing, which uses a fence agent.
A node that is unresponsive may still be accessing data. The only way to be certain that your data is safe is to fence the node by using STONITH. STONITH is an acronym for "Shoot The Other Node In The Head," and it protects your data from being corrupted by rogue nodes or concurrent access. Using STONITH, you can be certain that a node is truly offline before allowing the data to be accessed from another node.
4.13. Creating a fencing device
Complete the following steps to configure fencing. Run these commands from any node in the cluster.
Prerequisites
- The cluster property `stonith-enabled` is set to `true`.
Procedure
1. Identify the Azure node name for each RHEL VM. You use the Azure node names to configure the fence device.

   ```
   # fence_azure_arm \
       -l <AD-Application-ID> -p <AD-Password> \
       --resourceGroup <MyResourceGroup> --tenantId <Tenant-ID> \
       --subscriptionId <Subscription-ID> -o list
   ```

2. View the options for the Azure ARM STONITH agent.

   ```
   # pcs stonith describe fence_azure_arm
   ```

   Example:

   ```
   # pcs stonith describe fence_apc
   Stonith options:
     password: Authentication key
     password_script: Script to run to retrieve password
   ```

   Warning: For fence agents that provide a method option, do not specify a value of `cycle` as it is not supported and can cause data corruption.

   Some fence devices can fence only a single node, while other devices can fence multiple nodes. The parameters you specify when you create a fencing device depend on what your fencing device supports and requires.

   You can use the `pcmk_host_list` parameter when creating a fencing device to specify all of the machines that are controlled by that fencing device.

   You can use the `pcmk_host_map` parameter when creating a fencing device to map host names to the specifications that the fence device understands.

3. Create the fence device.

   ```
   # pcs stonith create clusterfence fence_azure_arm
   ```

4. To ensure immediate and complete fencing, disable ACPI Soft-Off on all cluster nodes. For information about disabling ACPI Soft-Off, see Disabling ACPI for use with integrated fence devices.
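The bare `pcs stonith create` in step 3 relies on the agent's defaults. When the Azure AD application credentials are used instead of a managed identity, a fuller invocation might look like the hypothetical sketch below; every placeholder value and the `pcmk_host_map` pairs are illustrative, not taken from the original procedure.

```shell
#!/bin/bash
# Hypothetical fuller version of step 3, wrapped in a function. The
# credential placeholders and the pcmk_host_map pairs are illustrative;
# pcmk_host_map links cluster host names to Azure node names.
create_azure_fence_device() {
    pcs stonith create clusterfence fence_azure_arm \
        username="<AD-Application-ID>" password="<AD-Password>" \
        resourceGroup="<MyResourceGroup>" tenantId="<Tenant-ID>" \
        subscriptionId="<Subscription-ID>" \
        pcmk_host_map="node01:azure-node01;node02:azure-node02"
}
```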
Verification
1. Test the fencing agent for one of the other nodes.

   ```
   # pcs stonith fence azurenodename
   ```

2. Start the node that was fenced in the previous step.

   ```
   # pcs cluster start <hostname>
   ```

3. Check the status to verify that the node started.

   ```
   # pcs status
   ```
4.14. Creating an Azure internal load balancer
The Azure internal load balancer removes cluster nodes that do not answer health probe requests.
Perform the following procedure to create an Azure internal load balancer. Each step references a specific Microsoft procedure and includes the settings for customizing the load balancer for HA.
Prerequisites
Procedure
- Create a Basic load balancer. Select Internal load balancer, the Basic SKU, and Dynamic for the type of IP address assignment.
- Create a back-end address pool. Associate the back-end pool with the availability set created while creating Azure resources in HA. Do not set any target network IP configurations.
- Create a health probe. For the health probe, select TCP and enter port 61000. You can use any TCP port number that does not interfere with another service. For certain HA product applications (for example, SAP HANA and SQL Server), you might need to work with Microsoft to identify the correct port to use.
- Create a load balancer rule. The default values are prepopulated. Ensure that Floating IP (direct server return) is set to Enabled.
4.15. Configuring the load balancer resource agent
After you have created the health probe, you must configure the load balancer resource agent. This resource agent runs a service that answers health probe requests from the Azure load balancer and removes cluster nodes that do not answer requests.
Procedure
1. Install the `nmap-ncat` resource agents on all nodes.

   ```
   # yum install nmap-ncat resource-agents
   ```

   Perform the following steps on a single node.

2. Create the `pcs` resources and group. Use your load balancer FrontendIP for the `IPaddr2` address.

   ```
   # pcs resource create resource-name IPaddr2 ip="10.0.0.7" --group cluster-resources-group
   ```

3. Configure the `load balancer` resource agent.

   ```
   # pcs resource create resource-loadbalancer-name azure-lb port=port-number --group cluster-resources-group
   ```

Verification

- Run `pcs status` to see the results.

  ```
  [root@node01 clouduser]# pcs status
  ```
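The two resource-creation steps above can be combined into one helper. The sketch below is hypothetical; the resource names are placeholders, and both resources are kept in one group so that the virtual IP and the probe responder always fail over together.

```shell
#!/bin/bash
# Hypothetical wrapper around the two "pcs resource create" steps above.
# Resource and group names are placeholders; keeping both resources in
# the same group makes them move between nodes as a unit.
create_lb_resources() {
    local frontend_ip="$1" probe_port="$2"
    pcs resource create cluster-vip IPaddr2 ip="$frontend_ip" \
        --group cluster-resources-group
    pcs resource create cluster-lb azure-lb port="$probe_port" \
        --group cluster-resources-group
}
```

For example, `create_lb_resources 10.0.0.7 61000` uses the FrontendIP from the earlier example and the health-probe port chosen in the previous section.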