Chapter 2. Before Configuring a Red Hat Cluster
This chapter describes tasks to perform and considerations to make before installing and configuring a Red Hat Cluster, and consists of the following sections.
Important
Make sure that your deployment of Red Hat Cluster Suite meets your needs and can be supported. Consult with an authorized Red Hat representative to verify Cluster Suite and GFS configuration prior to deployment. In addition, allow time for a configuration burn-in period to test failure modes.
2.1. General Configuration Considerations
You can configure a Red Hat Cluster in a variety of ways to suit your needs. Take into account the following general considerations when you plan, configure, and implement your Red Hat Cluster.
- Number of cluster nodes supported
- The maximum number of nodes supported in a Red Hat Cluster is 16.
- GFS/GFS2
- Although a GFS/GFS2 file system can be implemented in a standalone system or as part of a cluster configuration, for the RHEL 5.5 release and later, Red Hat does not support the use of GFS/GFS2 as a single-node file system. Red Hat does support a number of high-performance single-node file systems that are optimized for single-node use and thus generally have lower overhead than a cluster file system. Red Hat recommends using those file systems in preference to GFS/GFS2 in cases where only a single node needs to mount the file system. Red Hat will continue to support single-node GFS/GFS2 file systems for existing customers. When you configure a GFS/GFS2 file system as a cluster file system, you must ensure that all nodes in the cluster have access to the shared file system. Asymmetric cluster configurations, in which some nodes have access to the file system and others do not, are not supported. This does not require, however, that all nodes actually mount the GFS/GFS2 file system itself. (For a sketch of creating a clustered GFS2 file system, see the first example following this list.)
- No-single-point-of-failure hardware configuration
- Clusters can include a dual-controller RAID array, multiple bonded network channels, multiple paths between cluster members and storage, and redundant uninterruptible power supply (UPS) systems to ensure that no single failure results in application downtime or loss of data. Alternatively, a low-cost cluster can be set up to provide less availability than a no-single-point-of-failure cluster. For example, you can set up a cluster with a single-controller RAID array and only a single Ethernet channel. Certain low-cost alternatives, such as host RAID controllers, software RAID without cluster support, and multi-initiator parallel SCSI configurations, are not compatible with or appropriate for use as shared cluster storage.
- Data integrity assurance
- To ensure data integrity, only one node can run a cluster service and access cluster-service data at a time. The use of power switches in the cluster hardware configuration enables a node to power-cycle another node before restarting that node's HA services during a failover process. This prevents two nodes from simultaneously accessing the same data and corrupting it. Fence devices (hardware or software solutions that remotely power-cycle, shut down, and reboot cluster nodes) are strongly recommended to guarantee data integrity under all failure conditions. Watchdog timers provide an alternative way to ensure correct operation of HA service failover. (For a sketch of a fence device definition, see the second example following this list.)
- Ethernet channel bonding
- Cluster quorum and node health are determined by the communication of messages among cluster nodes via Ethernet. In addition, cluster nodes use Ethernet for a variety of other critical cluster functions (for example, fencing). With Ethernet channel bonding, multiple Ethernet interfaces are configured to behave as one, reducing the risk of a single point of failure in the typical switched Ethernet connection among cluster nodes and other cluster hardware. Red Hat Enterprise Linux 5 supports bonding mode 1 (active-backup) only. It is recommended that you wire each node's slave interfaces to the switches in a consistent manner, with each node's primary device wired to switch 1 and each node's backup device wired to switch 2. (For example bonding configuration files, see the third example following this list.)
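The first example is a minimal sketch of creating a GFS2 file system for use as a cluster file system. It assumes a three-node cluster named mycluster and a logical volume /dev/myvg/mylv; the cluster name, file system name, journal count, device path, and mount point are placeholders for illustration only and are not taken from this document.

    # Create a GFS2 file system with the DLM lock manager. The lock table
    # name takes the form <clustername>:<fsname>, and -j creates one
    # journal for each node that will mount the file system.
    mkfs.gfs2 -p lock_dlm -t mycluster:mygfs2 -j 3 /dev/myvg/mylv

    # Mount the file system on a cluster node (in practice this is usually
    # managed as a cluster resource rather than mounted by hand).
    mount -t gfs2 /dev/myvg/mylv /mnt/mygfs2

Note that the journal count sets an upper bound on how many nodes can mount the file system at once; choosing one journal per cluster node is a common starting point.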
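The second example illustrates how a fence device might be defined. This cluster.conf fragment configures an APC power switch (fence_apc agent) and associates one node with a port on that switch; the device name, IP address, credentials, node name, and port number are assumed values chosen for illustration.

    <clusternodes>
            <clusternode name="node1.example.com" nodeid="1" votes="1">
                    <fence>
                            <method name="1">
                                    <device name="apc-switch" port="1"/>
                            </method>
                    </fence>
            </clusternode>
    </clusternodes>
    <fencedevices>
            <fencedevice agent="fence_apc" name="apc-switch" ipaddr="192.168.1.50" login="apc" passwd="apc"/>
    </fencedevices>

Each cluster node should have at least one fence method that references a defined fence device, so that any node can be fenced reliably during failover.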
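The third example is a rough sketch of a mode 1 (active-backup) bond on Red Hat Enterprise Linux 5 that enslaves eth0 and eth1 to bond0; the interface names, IP address, and monitoring interval are example values and should be adapted to your hardware and network.

    # /etc/modprobe.conf -- load the bonding driver and set mode 1
    alias bond0 bonding
    options bonding mode=1 miimon=100

    # /etc/sysconfig/network-scripts/ifcfg-bond0 -- the bonded interface
    DEVICE=bond0
    IPADDR=10.0.0.1
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none

    # /etc/sysconfig/network-scripts/ifcfg-eth0 -- first slave
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none

    # /etc/sysconfig/network-scripts/ifcfg-eth1 -- second slave
    DEVICE=eth1
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none

Wiring the two slaves of each node to different switches, as recommended above, ensures that the loss of a single switch does not partition the cluster network.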