Appendix B. Cluster Creation in Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7


Configuring a Red Hat High Availability Cluster in Red Hat Enterprise Linux 7 with Pacemaker requires a different set of configuration tools with a different administrative interface than configuring a cluster in Red Hat Enterprise Linux 6 with rgmanager. Section B.1, “Cluster Creation with rgmanager and with Pacemaker” summarizes the configuration differences between the various cluster components.
Red Hat Enterprise Linux 6.5 and later releases support cluster configuration with Pacemaker, using the pcs configuration tool. Section B.2, “Pacemaker Installation in Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7” summarizes the Pacemaker installation differences between Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7.

B.1. Cluster Creation with rgmanager and with Pacemaker

Table B.1, “Comparison of Cluster Configuration with rgmanager and with Pacemaker” provides a comparative summary of how you configure the components of a cluster with rgmanager in Red Hat Enterprise Linux 6 and with Pacemaker in Red Hat Enterprise Linux 7.
Table B.1. Comparison of Cluster Configuration with rgmanager and with Pacemaker
Each configuration component below is listed with its rgmanager configuration (Red Hat Enterprise Linux 6), followed by its Pacemaker configuration (Red Hat Enterprise Linux 7).
Cluster configuration file
rgmanager: The cluster configuration file on each node is cluster.conf, which can be edited directly. Otherwise, use the luci or ccs interface to define the cluster configuration.
Pacemaker: The cluster and Pacemaker configuration files are corosync.conf and cib.xml. Do not edit the cib.xml file directly; use the pcs or pcsd interface instead.
Network setup
rgmanager: Configure IP addresses and SSH before configuring the cluster.
Pacemaker: Configure IP addresses and SSH before configuring the cluster.
Cluster Configuration Tools
rgmanager: luci, the ccs command, or manual editing of the cluster.conf file.
Pacemaker: pcs or pcsd.
Installation
rgmanager: Install rgmanager (which pulls in all dependencies, including ricci, luci, and the resource and fencing agents). If needed, install lvm2-cluster and gfs2-utils.
Pacemaker: Install pcs, and the fencing agents you require. If needed, install lvm2-cluster and gfs2-utils.
Starting cluster services
rgmanager: Start and enable cluster services with the following procedure:
  1. Start rgmanager, cman, and, if needed, clvmd and gfs2.
  2. Start ricci, and start luci if using the luci interface.
  3. Run chkconfig on for the needed services so that they start at boot.
Alternatively, you can enter ccs --start to start and enable the cluster services.
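For example, a minimal manual start sequence on each Red Hat Enterprise Linux 6 node might look like the following sketch (clvmd and gfs2 apply only if you use clustered LVM or GFS2; the exact services depend on your deployment):
  # Start the cluster stack on this node
  service cman start
  service clvmd start     # only if using clustered LVM
  service gfs2 start      # only if using GFS2 file systems
  service rgmanager start
  # Make the services start at boot
  chkconfig cman on
  chkconfig rgmanager on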
Pacemaker: Start and enable cluster services with the following procedure:
  1. On every node, execute systemctl start pcsd.service, then systemctl enable pcsd.service so that pcsd starts at boot.
  2. On one node in the cluster, enter pcs cluster start --all to start corosync and pacemaker.
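For example (the --all option acts on every configured node; pcs cluster enable --all additionally makes the cluster stack itself start at boot, which is optional):
  # On every node
  systemctl start pcsd.service
  systemctl enable pcsd.service
  # On one node only
  pcs cluster start --all
  pcs cluster enable --all   # optional: start cluster services at boot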
Controlling access to configuration tools
rgmanager: For luci, the root user or a user with luci permissions can access luci. All access requires the ricci password for the node.
Pacemaker: The pcsd Web UI requires that you authenticate as the hacluster user, which is the common system user. The root user can set the password for hacluster.
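For example, assuming placeholder node names (set the password on each node, then authenticate from one node):
  # On each node, set a password for the hacluster user
  passwd hacluster
  # From one node, authenticate pcs to all cluster nodes
  pcs cluster auth node1.example.com node2.example.com -u hacluster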
Cluster creation
rgmanager: Name the cluster and define which nodes to include in the cluster with luci or ccs, or directly edit the cluster.conf file.
Pacemaker: Name the cluster and include nodes with the pcs cluster setup command or with the pcsd Web UI. You can add nodes to an existing cluster with the pcs cluster node add command or with the pcsd Web UI.
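For example, with the Red Hat Enterprise Linux 7 pcs syntax (cluster and node names are placeholders):
  # Create and name a two-node cluster
  pcs cluster setup --name my_cluster node1.example.com node2.example.com
  # Later, add a third node to the existing cluster
  pcs cluster node add node3.example.com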
Propagating cluster configuration to all nodes
rgmanager: When configuring a cluster with luci, propagation is automatic. With ccs, use the --sync option. You can also use the cman_tool version -r command.
Pacemaker: Propagation of the cluster and Pacemaker configuration files, corosync.conf and cib.xml, is automatic on cluster setup or when adding a node or resource.
Global cluster properties
rgmanager: The following features are supported with rgmanager in Red Hat Enterprise Linux 6:
* You can configure the system so that the system chooses which multicast address to use for IP multicasting in the cluster network.
* If IP multicasting is not available, you can use the UDP Unicast transport mechanism.
* You can configure a cluster to use the Redundant Ring Protocol (RRP).
Pacemaker: Pacemaker in Red Hat Enterprise Linux 7 supports the following features for a cluster:
* You can set no-quorum-policy for the cluster to specify what the system should do when the cluster does not have quorum.
* For additional cluster properties you can set, see Table 12.1, “Cluster Properties”.
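For example (freeze is one of several valid values for this property, along with stop, ignore, and suicide):
  # Tell the cluster what to do when quorum is lost
  pcs property set no-quorum-policy=freeze
  # Review the current property values
  pcs property list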
Logging
rgmanager: You can set global and daemon-specific logging configuration.
Pacemaker: See the file /etc/sysconfig/pacemaker for information on how to configure logging manually.
Validating the cluster
rgmanager: Cluster validation is automatic with luci and with ccs, using the cluster schema. The cluster is automatically validated on startup.
Pacemaker: The cluster is automatically validated on startup, or you can validate the cluster with pcs cluster verify.
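For example:
  # Validate the running cluster configuration; -V prints verbose errors
  pcs cluster verify -V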
Quorum in two-node clusters
rgmanager: With a two-node cluster, you can configure how the system determines quorum:
* Configure a quorum disk
* Use ccs or edit the cluster.conf file to set two_node=1 and expected_votes=1 to allow a single node to maintain quorum.
Pacemaker: pcs automatically adds the necessary options for a two-node cluster to corosync.
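As a sketch, in a two-node cluster created with pcs on Red Hat Enterprise Linux 7, the generated /etc/corosync/corosync.conf typically contains a quorum stanza like the following (exact contents vary by release and options):
  quorum {
      provider: corosync_votequorum
      two_node: 1
  }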
Cluster status
rgmanager: On luci, the current status of the cluster is visible in the various components of the interface, which can be refreshed. You can use the --getconf option of the ccs command to see the current configuration file. You can use the clustat command to display cluster status.
Pacemaker: You can display the current cluster status with the pcs status command.
Resources
rgmanager: You add resources of defined types and configure resource-specific properties with luci or the ccs command, or by editing the cluster.conf configuration file.
Pacemaker: You add resources of defined types and configure resource-specific properties with the pcs resource create command or with the pcsd Web UI. For general information on configuring cluster resources with Pacemaker, see Chapter 6, Configuring Cluster Resources.
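For example, a floating IP address resource (the address and netmask are placeholder values):
  # Create a virtual IP resource monitored every 30 seconds
  pcs resource create VirtualIP ocf:heartbeat:IPaddr2 \
      ip=192.168.0.120 cidr_netmask=24 op monitor interval=30s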
Resource behavior, grouping, and start/stop order
rgmanager: Define cluster services to configure how resources interact.
Pacemaker: With Pacemaker, you use resource groups as a shorthand method of defining a set of resources that need to be located together and started and stopped sequentially. In addition, you define how resources behave and interact in the following ways:
* You set some aspects of resource behavior as resource options.
* You use location constraints to determine which nodes a resource can run on.
* You use order constraints to determine the order in which resources run.
* You use colocation constraints to determine that the location of one resource depends on the location of another resource.
For more complete information on these topics, see Chapter 6, Configuring Cluster Resources and Chapter 7, Resource Constraints.
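For example, a sketch using placeholder resource and node names:
  # Keep two resources together and start them in a group
  pcs resource group add mygroup VirtualIP WebSite
  # Or express the same relationships with explicit constraints
  pcs constraint location WebSite prefers node1.example.com
  pcs constraint order VirtualIP then WebSite
  pcs constraint colocation add WebSite with VirtualIP INFINITY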
Resource administration: Moving, starting, stopping resources
rgmanager: With luci, you can manage clusters, individual cluster nodes, and cluster services. With the ccs command, you can manage a cluster. You can use the clusvcadm command to manage cluster services.
Pacemaker: You can temporarily disable a node so that it cannot host resources with the pcs cluster standby command, which causes the resources to migrate. You can stop a resource with the pcs resource disable command.
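For example (node and resource names are placeholders):
  # Move resources off a node, then bring it back into service
  pcs cluster standby node1.example.com
  pcs cluster unstandby node1.example.com
  # Stop and restart an individual resource
  pcs resource disable WebSite
  pcs resource enable WebSite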
Removing a cluster configuration completely
rgmanager: With luci, you can select all nodes in a cluster for deletion to delete a cluster entirely. You can also remove the cluster.conf file from each node in the cluster.
Pacemaker: You can remove a cluster configuration with the pcs cluster destroy command.
Resources active on multiple nodes, resources active on multiple nodes in multiple modes
rgmanager: No equivalent.
Pacemaker: With Pacemaker, you can clone resources so that they can run on multiple nodes, and you can define cloned resources as master and slave resources so that they can run in multiple modes. For information on cloned resources and master/slave resources, see Chapter 9, Advanced Configuration.
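For example (resource names are placeholders; the master options shown are typical for a two-node master/slave pair):
  # Run a resource as a clone on every node
  pcs resource clone WebSite
  # Run a resource in master/slave mode
  pcs resource master WebDataClone WebData \
      master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true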
Fencing -- single fence device per node
rgmanager: Create fencing devices globally or locally and add them to nodes. You can define post-fail delay and post-join delay values for the cluster as a whole.
Pacemaker: Create a fencing device for each node with the pcs stonith create command or with the pcsd Web UI. For devices that can fence multiple nodes, you need to define them only once rather than separately for each node. You can also define pcmk_host_map to configure fencing devices for all nodes with a single command; for information on pcmk_host_map, see Table 5.1, “General Properties of Fencing Devices”. You can define the stonith-timeout value for the cluster as a whole.
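For example, one APC power switch fencing both nodes (the address, credentials, and node names are placeholders for your hardware):
  # Map each cluster node to its outlet number on the switch
  pcs stonith create myapc fence_apc_snmp ipaddr="apc.example.com" \
      login="apc" passwd="apc" \
      pcmk_host_map="node1.example.com:1;node2.example.com:2"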
Multiple (backup) fencing devices per node
rgmanager: Define backup devices with luci or the ccs command, or by editing the cluster.conf file directly.
Pacemaker: Configure fencing levels.
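For example, try an IPMI device first and fall back to a power switch (the device and node names are placeholders for stonith devices you have already created):
  pcs stonith level add 1 node1.example.com ipmi-fence-node1
  pcs stonith level add 2 node1.example.com myapc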