Chapter 6. Configuring Red Hat High Availability Clusters on Google Cloud Platform
This chapter includes information and procedures for configuring a Red Hat High Availability (HA) cluster on Google Cloud Platform (GCP) using Google Compute Engine (GCE) virtual machine (VM) instances as cluster nodes.
The chapter includes prerequisite procedures for setting up your environment for GCP. Once you have set up your environment, you can create and configure GCP VM instances.
The chapter also includes procedures specific to the creation of HA clusters, which transform individual nodes into a cluster of HA nodes on GCP. These include procedures for installing the High Availability packages and agents on each cluster node, configuring fencing, and installing GCP network resource agents.
The chapter refers to GCP documentation in a number of places. For more information, see the referenced GCP documentation.
Prerequisites
- You need to install the GCP software development kit (SDK). For more information, see Installing the Google Cloud SDK.
- Enable your subscriptions in the Red Hat Cloud Access program. The Red Hat Cloud Access program allows you to move your Red Hat subscriptions from physical or on-premises systems onto GCP with full support from Red Hat.
- You must belong to an active GCP project and have sufficient permissions to create resources in the project.
- Your project should have a service account that belongs to a VM instance and not an individual user. See Using the Compute Engine Default Service Account for information about using the default service account instead of creating a separate service account.
If you or your project administrator creates a custom service account, configure the service account with the following roles:
- Cloud Trace Agent
- Compute Admin
- Compute Network Admin
- Cloud Datastore User
- Logging Admin
- Monitoring Editor
- Monitoring Metric Writer
- Service Account Administrator
- Storage Admin
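If you script the creation of a custom service account, the role bindings above map to the usual IAM role IDs. The following sketch only prints the `gcloud` commands it would run, so it is safe to execute and review first; the project and service-account names are placeholders, and the role IDs are assumptions you should verify against your project before running the printed commands.

```shell
#!/bin/sh
# Placeholder names: replace with your own project and service-account name.
PROJECT_ID="my-ha-project"
SA_NAME="ha-cluster-sa"
SA_EMAIL="${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"

# IAM role IDs assumed to correspond to the roles listed in the prerequisites.
ROLES="roles/cloudtrace.agent roles/compute.admin roles/compute.networkAdmin \
roles/datastore.user roles/logging.admin roles/monitoring.editor \
roles/monitoring.metricWriter roles/iam.serviceAccountAdmin roles/storage.admin"

# Print, rather than run, the provisioning commands.
echo "gcloud iam service-accounts create ${SA_NAME} --project=${PROJECT_ID}"
for role in ${ROLES}; do
  echo "gcloud projects add-iam-policy-binding ${PROJECT_ID} \
    --member=serviceAccount:${SA_EMAIL} --role=${role}"
done
```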
Additional resources
- Support Policies for RHEL High Availability Clusters - Google Cloud Platform Virtual Machines as Cluster Members
- Support Policies for RHEL High Availability clusters - Transport Protocols
- VPC network overview
- Exploring RHEL High Availability’s Components, Concepts, and Features - Overview of Transport Protocols
- Design Guidance for RHEL High Availability Clusters - Selecting the Transport Protocol
- Quickstart for Red Hat and CentOS
6.1. Red Hat Enterprise Linux image options on GCP
The following table lists image choices and the differences in the image options.
| Image option | Subscriptions | Sample scenario | Considerations |
|---|---|---|---|
| Choose to deploy a custom image that you move to GCP. | Leverage your existing Red Hat subscriptions. | Enable subscriptions through the Red Hat Cloud Access program, upload your custom image, and attach your subscriptions. | The subscription includes the Red Hat product cost; you pay all other instance costs. Custom images that you move to GCP are called "Cloud Access" images because you leverage your existing Red Hat subscriptions. Red Hat provides support directly for Cloud Access images. |
| Choose to deploy an existing GCP image that includes RHEL. | The GCP images include a Red Hat product. | Choose a RHEL image when you launch an instance on the GCP Compute Engine, or choose an image from the Google Cloud Platform Marketplace. | You pay GCP hourly on a pay-as-you-go model. Such images are called "on-demand" images. GCP offers support for on-demand images through a support agreement. |
You cannot convert an on-demand instance to a Red Hat Cloud Access instance. To change from an on-demand image to a Red Hat Cloud Access bring-your-own subscription (BYOS) image, create a new Red Hat Cloud Access instance and migrate data from your on-demand instance. Cancel your on-demand instance after you migrate your data to avoid double billing.
The remainder of this chapter includes information and procedures pertaining to custom images.
6.2. Required system packages
The procedures in this chapter assume you are using a host system running Red Hat Enterprise Linux. To successfully complete the procedures, your host system must have the following packages installed.
| Package | Description |
|---|---|
| qemu-kvm | This package provides the user-level KVM emulator and facilitates communication between hosts and guest VMs. |
| qemu-img | This package provides disk management for guest VMs. The qemu-img package is installed as a dependency of the qemu-kvm package. |
| libvirt | This package provides the server- and host-side libraries for interacting with hypervisors and host systems, and the libvirtd daemon that handles the library calls, manages VMs, and controls the hypervisor. |
| virt-install | This package provides the virt-install command for creating VMs from the command line. |
| libvirt-python | This package contains a module that permits applications written in the Python programming language to use the interface supplied by the libvirt API. |
| virt-manager | This package provides the virt-manager tool, also known as Virtual Machine Manager (VMM). VMM is a graphical tool for administering VMs. It uses the libvirt-client library as the management API. |
| libvirt-client | This package provides the client-side APIs and libraries for accessing libvirt servers. The libvirt-client package includes the virsh command-line tool to manage and control VMs. |
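Assuming a yum-based RHEL host, the packages above can be installed in a single transaction. The sketch below only prints the install command so it can be reviewed before running it as root:

```shell
#!/bin/sh
# All host-side virtualization packages required by this chapter.
PKGS="qemu-kvm qemu-img libvirt virt-install libvirt-python virt-manager libvirt-client"

# Print the command rather than running it; run the printed line as root.
echo "yum install -y ${PKGS}"
```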
6.3. Installing the HA packages and agents
Complete the following steps on all nodes to install the High Availability packages and agents.
Procedure
1. Disable all repositories:

```
# subscription-manager repos --disable=*
```

2. Enable the RHEL 7 Server and RHEL 7 Server HA repositories:

```
# subscription-manager repos --enable=rhel-7-server-rpms
# subscription-manager repos --enable=rhel-ha-for-rhel-7-server-rpms
```

3. Update all packages:

```
# yum update -y
```

4. Install the pcs and pacemaker packages, the fence agent, and the resource agent:

```
# yum install -y pcs pacemaker fence-agents-gce resource-agents-gcp
```

5. Reboot the machine if the kernel was updated:

```
# reboot
```
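One way to decide whether the final reboot is needed is to compare the running kernel with the newest installed kernel package. This heuristic is not part of the original procedure; a minimal sketch:

```shell
#!/bin/sh
# Compare the running kernel version with the newest installed kernel package.
running="$(uname -r)"
# `rpm -q --last kernel` lists installed kernels newest first; extract the version.
latest="$(rpm -q --last kernel 2>/dev/null | head -n1 | sed 's/^kernel-\([^ ]*\).*/\1/')"

if [ -n "${latest}" ] && [ "${running}" != "${latest}" ]; then
  echo "reboot needed"
else
  echo "no reboot needed"
fi
```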
6.4. Configuring HA services
Complete the following steps on all nodes to configure High Availability services.
Procedure
1. The user hacluster was created during the pcs and pacemaker installation in the previous step. Create a password for the user hacluster on all cluster nodes. Use the same password for all nodes:

```
# passwd hacluster
```

2. If the firewalld service is enabled, add the high availability service to RHEL:

```
# firewall-cmd --permanent --add-service=high-availability
# firewall-cmd --reload
```

3. Start the pcsd service and enable it to start on boot:

```
# systemctl enable pcsd.service --now
```
Verification steps
Ensure the pcsd service is running:

```
# systemctl is-active pcsd.service
```
6.5. Creating a cluster
Complete the following steps to create the cluster of nodes.
Procedure
1. On one of the nodes, enter the following command to authenticate the pcs user hacluster. Specify the name of each node in the cluster:

```
# pcs cluster auth _hostname1_ _hostname2_ _hostname3_ -u hacluster
```

Example:

```
[root@node01 ~]# pcs cluster auth node01 node02 node03 -u hacluster
node01: Authorized
node02: Authorized
node03: Authorized
```

2. Create the cluster:

```
# pcs cluster setup --name _cluster-name_ _hostname1_ _hostname2_ _hostname3_
```
Verification steps
1. Enable the cluster:

```
# pcs cluster enable --all
```

2. Start the cluster:

```
# pcs cluster start --all
```
6.6. Creating a fence device
For most default configurations, the GCP instance names and the RHEL host names are identical.
Complete the following steps to configure fencing from any node in the cluster.
Procedure
1. Get the GCP instance names from any node in the cluster. Note that the output also shows the internal ID for each instance:

```
# fence_gce --zone _gcp-zone_ --project=_gcp-project_ -o list
```

Example:

```
[root@rhel71-node-01 ~]# fence_gce --zone us-west1-b --project=rhel-ha-testing-on-gcp -o list
44358**********3181,InstanceName-3
40819**********6811,InstanceName-1
71736**********3341,InstanceName-2
```

2. Create a fence device. Use the pcmk_host_map option to map each RHEL host name to its instance name:

```
# pcs stonith create _clusterfence_ fence_gce pcmk_host_map=_pcmk-host-map_ zone=_gcp-zone_ project=_gcp-project_
```

Example:

```
[root@node01 ~]# pcs stonith create fencegce fence_gce pcmk_host_map="node01:node01-vm;node02:node02-vm;node03:node03-vm" project=hacluster zone=us-east1-b
```
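The `pcmk_host_map` value is a semicolon-separated list of _hostname_:_instance-name_ pairs. The sketch below builds such a string from a list of node names, assuming (as in the example above, but purely for illustration) that each instance is named _node_-vm, and prints the resulting `pcs stonith create` command rather than running it:

```shell
#!/bin/sh
# Hypothetical node names; replace with your own cluster nodes.
NODES="node01 node02 node03"

HOST_MAP=""
for n in ${NODES}; do
  # Append "hostname:instance" pairs, separated by semicolons.
  # The instance naming pattern "<node>-vm" is an assumption for this sketch.
  HOST_MAP="${HOST_MAP}${HOST_MAP:+;}${n}:${n}-vm"
done

# Print the stonith command for review; zone and project are placeholders.
echo "pcs stonith create fencegce fence_gce pcmk_host_map=\"${HOST_MAP}\" zone=_gcp-zone_ project=_gcp-project_"
```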
Verification steps
1. Test the fencing agent for one of the other nodes:

```
# pcs stonith fence _nodename_
```

2. Check the status to verify that the node is fenced:

```
# watch pcs status
```
6.7. Configuring GCP node authorization
Configure cloud SDK tools to use your account credentials to access GCP.
Procedure
Enter the following command on each node to initialize each node with your project ID and account credentials:

```
# gcloud-ra init
```
6.8. Configuring the GCP network resource agent
The cluster uses GCP network resource agents, which attach a secondary IP address (alias IP) to a running instance. This is a floating IP address that can be passed between different nodes in the cluster.
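To see which alias IP ranges are currently attached to an instance's network interface, you can query the instance with `gcloud`. The instance and zone names below are placeholders; the sketch only prints the query so it can be run where appropriate:

```shell
#!/bin/sh
# Placeholder instance and zone; replace with your own values.
INSTANCE="node01-vm"
ZONE="us-east1-b"

# Query that shows the alias IP ranges on the first network interface.
CMD="gcloud compute instances describe ${INSTANCE} --zone=${ZONE} \
--format='value(networkInterfaces[0].aliasIpRanges)'"

echo "${CMD}"
```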
Procedure
Enter the following command to view the description of the GCP virtual IP address resource agent (gcp-vpc-move-vip). This shows the options and default operations for the agent:

```
# pcs resource describe gcp-vpc-move-vip
```
You can configure the resource agent to use a primary subnet address range or a secondary subnet address range. This section includes procedures for both.
Primary subnet address range
Procedure
Complete the following steps to configure the resource for the primary VPC subnet.
1. Create the aliasip resource. Include an unused internal IP address. Include the CIDR block in the command:

```
# pcs resource create aliasip gcp-vpc-move-vip alias_ip=_UnusedIPaddress/CIDRblock_ --group _group-name_ --group _networking-group_
```

2. Create an IPaddr2 resource for managing the IP on the node:

```
# pcs resource create vip IPaddr2 nic=_interface_ ip=_AliasIPaddress_ cidr_netmask=32 --group _group-name_ --group _networking-group_
```

3. Group the network resources under vipgrp:

```
# pcs resource group add vipgrp aliasip vip
```
Verification steps
1. Verify that the resources have started and are grouped under vipgrp:

```
# pcs status
```

2. Verify that the resource can move to a different node:

```
# pcs resource move vip _Node_
```

Example:

```
# pcs resource move vip rhel71-node-03
```

3. Verify that the vip successfully started on a different node:

```
# pcs status
```
Secondary subnet address range
Complete the following steps to configure the resource for a secondary subnet address range.
Procedure
1. Create a secondary subnet address range:

```
# gcloud-ra compute networks subnets update _SubnetName_ --region _RegionName_ --add-secondary-ranges _SecondarySubnetName_=_SecondarySubnetRange_
```

Example:

```
# gcloud-ra compute networks subnets update range0 --region us-west1 --add-secondary-ranges range1=10.10.20.0/24
```

2. Create the aliasip resource. Include an unused internal IP address from the secondary subnet address range. Include the CIDR block in the command:

```
# pcs resource create aliasip gcp-vpc-move-vip alias_ip=_UnusedIPaddress/CIDRblock_ --group _group-name_ --group _networking-group_
```

3. Create an IPaddr2 resource for managing the IP on the node:

```
# pcs resource create vip IPaddr2 nic=_interface_ ip=_AliasIPaddress_ cidr_netmask=32 --group _group-name_ --group _networking-group_
```
Verification steps
1. Verify that the resources have started and are grouped under vipgrp:

```
# pcs status
```

2. Verify that the resource can move to a different node:

```
# pcs resource move vip _Node_
```

Example:

```
[root@rhel71-node-01 ~]# pcs resource move vip rhel71-node-03
```

3. Verify that the vip successfully started on a different node:

```
# pcs status
```