Chapter 6. Configuring Red Hat High Availability Clusters on Google Cloud Platform
This chapter includes information and procedures for configuring a Red Hat High Availability (HA) cluster on Google Cloud Platform (GCP) using Google Compute Engine (GCE) virtual machine (VM) instances as cluster nodes.
The chapter includes prerequisite procedures for setting up your environment for GCP. Once you have set up your environment, you can create and configure GCP VM instances.
The chapter also includes procedures specific to the creation of HA clusters, which transform individual nodes into a cluster of HA nodes on GCP. These include procedures for installing the High Availability packages and agents on each cluster node, configuring fencing, and installing GCP network resource agents.
The chapter refers to GCP documentation in a number of places. For more information, see the referenced GCP documentation.
Prerequisites
- You need to install the GCP software development kit (SDK). For more information, see Installing the Google Cloud SDK.
- Enable your subscriptions in the Red Hat Cloud Access program, which allows you to move your Red Hat subscriptions from physical or on-premises systems onto GCP with full support from Red Hat.
- You must belong to an active GCP project and have sufficient permissions to create resources in the project.
- Your project should have a service account that belongs to a VM instance and not an individual user. See Using the Compute Engine Default Service Account for information about using the default service account instead of creating a separate service account.
If you or your project administrator create a custom service account, configure the service account with the following roles. An example of granting one of these roles appears after the list.
- Cloud Trace Agent
- Compute Admin
- Compute Network Admin
- Cloud Datastore User
- Logging Admin
- Monitoring Editor
- Monitoring Metric Writer
- Service Account Administrator
- Storage Admin
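You can grant these roles with the standard gcloud CLI. The following sketch uses hypothetical values for the project ID (my-ha-project) and the service account name (ha-cluster-sa); substitute your own values and repeat the role binding for each role in the list (the binding shown grants the Compute Admin role).
# gcloud iam service-accounts create ha-cluster-sa --display-name="HA cluster service account"
# gcloud projects add-iam-policy-binding my-ha-project --member="serviceAccount:ha-cluster-sa@my-ha-project.iam.gserviceaccount.com" --role="roles/compute.admin"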
Additional resources
- Support Policies for RHEL High Availability Clusters - Google Cloud Platform Virtual Machines as Cluster Members
- Support Policies for RHEL High Availability clusters - Transport Protocols
- VPC network overview
- Exploring RHEL High Availability’s Components, Concepts, and Features - Overview of Transport Protocols
- Design Guidance for RHEL High Availability Clusters - Selecting the Transport Protocol
- Quickstart for Red Hat and CentOS
6.1. Red Hat Enterprise Linux image options on GCP
The following table lists image choices and the differences in the image options.
Image option | Subscriptions | Sample scenario | Considerations |
---|---|---|---|
Choose to deploy a custom image that you move to GCP. | Leverage your existing Red Hat subscriptions. | Enable subscriptions through the Red Hat Cloud Access program, upload your custom image, and attach your subscriptions. | The subscription includes the Red Hat product cost; you pay all other instance costs. Custom images that you move to GCP are called "Cloud Access" images because you leverage your existing Red Hat subscriptions. Red Hat provides support directly for Cloud Access images. |
Choose to deploy an existing GCP image that includes RHEL. | The GCP images include a Red Hat product. | Choose a RHEL image when you launch an instance on the GCP Compute Engine, or choose an image from the Google Cloud Platform Marketplace. | You pay GCP hourly on a pay-as-you-go model. Such images are called "on-demand" images. GCP offers support for on-demand images through a support agreement. |
You cannot convert an on-demand instance to a Red Hat Cloud Access instance. To change from an on-demand image to a Red Hat Cloud Access bring-your-own subscription (BYOS) image, create a new Red Hat Cloud Access instance and migrate data from your on-demand instance. Cancel your on-demand instance after you migrate your data to avoid double billing.
The remainder of this chapter includes information and procedures pertaining to custom images.
6.2. Required system packages
The procedures in this chapter assume you are using a host system running Red Hat Enterprise Linux. To successfully complete the procedures, your host system must have the following packages installed.
Package | Description | Command |
---|---|---|
qemu-kvm | This package provides the user-level KVM emulator and facilitates communication between hosts and guest VMs. | # yum install qemu-kvm |
qemu-img | This package provides disk management for guest VMs. The qemu-img package is installed as a dependency of the qemu-kvm package. | |
libvirt | This package provides the server and host-side libraries for interacting with hypervisors and host systems and the libvirtd daemon that handles the library calls, manages VMs, and controls the hypervisor. | # yum install libvirt |
virt-install | This package provides the virt-install command for creating VMs from the command line. | # yum install virt-install |
libvirt-python | This package contains a module that permits applications written in the Python programming language to use the interface supplied by the libvirt API. | # yum install libvirt-python |
virt-manager | This package provides the virt-manager tool, also known as Virtual Machine Manager (VMM). VMM is a graphical tool for administering VMs. It uses the libvirt-client library as the management API. | # yum install virt-manager |
libvirt-client | This package provides the client-side APIs and libraries for accessing libvirt servers. The libvirt-client package includes the virsh command-line tool to manage and control VMs. | # yum install libvirt-client |
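If you want to confirm that these packages are already present on your host system before you continue, you can query the RPM database. This is an optional check, not part of the documented procedure.
# rpm -q qemu-kvm qemu-img libvirt virt-install libvirt-python virt-manager libvirt-client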
6.3. Installing the HA packages and agents
Complete the following steps on all nodes to install the High Availability packages and agents.
Procedure
Disable all repositories.
# subscription-manager repos --disable=*
Enable RHEL 7 server and RHEL 7 server HA repositories.
# subscription-manager repos --enable=rhel-7-server-rpms
# subscription-manager repos --enable=rhel-ha-for-rhel-7-server-rpms
Update all packages.
# yum update -y
Install the pcs and pacemaker packages, the fence agent, and the resource agent.
# yum install -y pcs pacemaker fence-agents-gce resource-agents-gcp
Reboot the machine if the kernel is updated.
# reboot
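Optionally, confirm that the pcs command-line tool is available before moving on; this check is not part of the documented procedure.
# pcs --version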
6.4. Configuring HA services
Complete the following steps on all nodes to configure High Availability services.
Procedure
The hacluster user was created during the pcs and pacemaker installation in the previous step. Create a password for the hacluster user on all cluster nodes. Use the same password for all nodes.
# passwd hacluster
If the firewalld service is enabled, add the high availability service to RHEL.
# firewall-cmd --permanent --add-service=high-availability
# firewall-cmd --reload
Start the pcsd service and enable it to start on boot.
# systemctl enable pcsd.service --now
Verification steps
Ensure the pcsd service is running.
# systemctl is-active pcsd.service
6.5. Creating a cluster
Complete the following steps to create the cluster of nodes.
Procedure
On one of the nodes, enter the following command to authenticate the pcs user hacluster. Specify the name of each node in the cluster.
# pcs cluster auth _hostname1_ _hostname2_ _hostname3_ -u hacluster
Example:
[root@node01 ~]# pcs cluster auth node01 node02 node03 -u hacluster
node01: Authorized
node02: Authorized
node03: Authorized
Create the cluster.
# pcs cluster setup --name cluster-name _hostname1_ _hostname2_ _hostname3_
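Example (hypothetical values; assuming the cluster name new-cluster and the three nodes authenticated above):
[root@node01 ~]# pcs cluster setup --name new-cluster node01 node02 node03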
Verification steps
Enable the cluster.
# pcs cluster enable --all
Start the cluster.
# pcs cluster start --all
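To confirm that all nodes joined the cluster, you can check the cluster status from any node; the exact output depends on your configuration.
# pcs cluster status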
6.6. Creating a fence device
For most default configurations, the GCP instance names and the RHEL host names are identical.
Complete the following steps to configure fencing from any node in the cluster.
Procedure
Get the GCP instance names from any node in the cluster. Note that the output also shows the internal ID for the instance.
# fence_gce --zone _gcp_zone_ --project=_gcp_project_ -o list
Example:
[root@rhel71-node-01 ~]# fence_gce --zone us-west1-b --project=rhel-ha-testing-on-gcp -o list
44358**********3181,InstanceName-3
40819**********6811,InstanceName-1
71736**********3341,InstanceName-2
Create a fence device. Use the pcmk_host_map option to map the RHEL host name to the GCP instance name.
# pcs stonith create _clusterfence_ fence_gce pcmk_host_map=_pcmk-host-map_ zone=_gcp-zone_ project=_gcp-project_
Example:
[root@node01 ~]# pcs stonith create fencegce fence_gce pcmk_host_map="node01:node01-vm;node02:node02-vm;node03:node03-vm" project=hacluster zone=us-east1-b
Verification steps
Test the fencing agent for one of the other nodes.
# pcs stonith fence _nodename_
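Example (assuming the node names used earlier; fencing reboots the target node, so run this only against a node you can afford to restart):
[root@node01 ~]# pcs stonith fence node02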
Check the status to verify that the node is fenced.
# watch pcs status
Example:
[root@node01 ~]# watch pcs status
Cluster name: gcp-cluster
Stack: corosync
Current DC: rhel71-node-02 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Fri Jul 27 12:53:25 2018
Last change: Fri Jul 27 12:51:43 2018 by root via cibadmin on rhel71-node-01

3 nodes configured
3 resources configured

Online: [ rhel71-node-01 rhel71-node-02 rhel71-node-03 ]

Full list of resources:

us-east1-b-fence    (stonith:fence_gce):    Started rhel71-node-01

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
6.7. Configuring GCP node authorization
Configure cloud SDK tools to use your account credentials to access GCP.
Procedure
Enter the following command on each node to initialize each node with your project ID and account credentials.
# gcloud-ra init
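After initialization, you can optionally confirm the active account and project. This sketch assumes that the bundled gcloud-ra wrapper accepts the same subcommands as the standard gcloud CLI.
# gcloud-ra config list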
6.8. Configuring the GCP network resource agent
The cluster uses GCP network resource agents to attach a secondary IP address (alias IP) to a running instance. This is a floating IP address that can be passed between different nodes in the cluster.
Procedure
Enter the following command to view the GCP virtual IP address resource agent (gcp-vpc-move-vip) description. This shows the options and default operations for this agent.
# pcs resource describe gcp-vpc-move-vip
You can configure the resource agent to use a primary subnet address range or a secondary subnet address range. This section includes procedures for both.
Primary subnet address range
Procedure
Complete the following steps to configure the resource for the primary VPC subnet.
Create the aliasip resource. Include an unused internal IP address and its CIDR block in the command.
# pcs resource create aliasip gcp-vpc-move-vip alias_ip=_UnusedIPaddress/CIDRblock_ --group _group-name_
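Example (hypothetical values; assuming the unused internal address 10.0.0.50 and the group name vipgrp used later in this procedure):
[root@node01 ~]# pcs resource create aliasip gcp-vpc-move-vip alias_ip=10.0.0.50/32 --group vipgrp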
Create an IPaddr2 resource for managing the IP on the node.
# pcs resource create vip IPaddr2 nic=_interface_ ip=_AliasIPaddress_ cidr_netmask=32 --group _group-name_
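Example (hypothetical values; assuming the interface name eth0, the alias address from the previous step, and the same group):
[root@node01 ~]# pcs resource create vip IPaddr2 nic=eth0 ip=10.0.0.50 cidr_netmask=32 --group vipgrp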
Group the network resources under vipgrp.
# pcs resource group add vipgrp aliasip vip
Verification steps
Verify that the resources have started and are grouped under vipgrp.
# pcs status
Verify that the resource can move to a different node.
# pcs resource move vip _Node_
Example:
# pcs resource move vip rhel71-node-03
Verify that the vip resource successfully started on a different node.
# pcs status
Secondary subnet address range
Complete the following steps to configure the resource for a secondary subnet address range.
Procedure
Create a secondary subnet address range.
# gcloud-ra compute networks subnets update _SubnetName_ --region _RegionName_ --add-secondary-ranges _SecondarySubnetName_=_SecondarySubnetRange_
Example:
# gcloud-ra compute networks subnets update range0 --region us-west1 --add-secondary-ranges range1=10.10.20.0/24
Create the aliasip resource. Use an unused internal IP address in the secondary subnet address range and include its CIDR block in the command.
# pcs resource create aliasip gcp-vpc-move-vip alias_ip=_UnusedIPaddress/CIDRblock_ --group _group-name_
Create an IPaddr2 resource for managing the IP on the node.
# pcs resource create vip IPaddr2 nic=_interface_ ip=_AliasIPaddress_ cidr_netmask=32 --group _group-name_
Verification steps
Verify that the resources have started and are grouped under vipgrp.
# pcs status
Verify that the resource can move to a different node.
# pcs resource move vip _Node_
Example:
[root@rhel71-node-01 ~]# pcs resource move vip rhel71-node-03
Verify that the vip resource successfully started on a different node.
# pcs status