8.2. Creating a Basic Cluster Configuration File


Provided that cluster hardware, Red Hat Enterprise Linux, and High Availability Add-On software are installed, you can create a cluster configuration file (/etc/cluster/cluster.conf) and start running the High Availability Add-On. As a starting point only, this section describes how to create a skeleton cluster configuration file without fencing, failover domains, and HA services. Subsequent sections describe how to configure those parts of the configuration file.

Important

This is just an interim step to create a cluster configuration file; the resultant file does not have any fencing and is not considered to be a supported configuration.
The following steps describe how to create and configure a skeleton cluster configuration file. Ultimately, the configuration file for your cluster will vary according to the number of nodes, the type of fencing, the type and number of HA services, and other site-specific requirements.
  1. At any node in the cluster, create /etc/cluster/cluster.conf, using the template of the example in Example 8.1, “cluster.conf Sample: Basic Configuration”.
  2. (Optional) If you are configuring a two-node cluster, you can add the following line to the configuration file to allow a single node to maintain quorum (for example, if one node fails):
    <cman two_node="1" expected_votes="1"/>
    When you add or remove the two_node option from the cluster.conf file, you must restart the cluster for the change to take effect; updating the configuration alone is not sufficient. For information on updating a cluster configuration, see Section 9.4, “Updating a Configuration”. For an example of specifying the two_node option, see Example 8.2, “cluster.conf Sample: Basic Two-Node Configuration”.
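    For example, you could restart the cluster software on each node as follows (an illustrative sequence using the standard init scripts):
    [root@example-01 ~]# service cman stop
    [root@example-01 ~]# service cman start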
  3. Specify the cluster name and the configuration version number using the cluster attributes: name and config_version (see Example 8.1, “cluster.conf Sample: Basic Configuration” or Example 8.2, “cluster.conf Sample: Basic Two-Node Configuration”).
  4. In the clusternodes section, specify the node name and the node ID of each node using the clusternode attributes: name and nodeid. The node name can be up to 255 bytes in length.
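    For example, the entry for the first node in Example 8.1, “cluster.conf Sample: Basic Configuration” sets both attributes:
    <clusternode name="node-01.example.com" nodeid="1">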
  5. Save /etc/cluster/cluster.conf.
  6. Validate the file against the cluster schema (cluster.rng) by running the ccs_config_validate command. For example:
    [root@example-01 ~]# ccs_config_validate 
    Configuration validates
    
  7. Propagate the configuration file to /etc/cluster/ in each cluster node. For example, you could propagate the file to other cluster nodes using the scp command.
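    For example (the node names here match the sample configuration and are illustrative):
    [root@example-01 ~]# scp /etc/cluster/cluster.conf root@node-02.example.com:/etc/cluster/
    [root@example-01 ~]# scp /etc/cluster/cluster.conf root@node-03.example.com:/etc/cluster/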

    Note

    Propagating the cluster configuration file this way is necessary the first time a cluster is created. Once a cluster is installed and running, the cluster configuration file can be propagated using the cman_tool version -r command. It is possible to use the scp command to propagate an updated configuration file; however, the cluster software must be stopped on all nodes while using the scp command. In addition, you should run ccs_config_validate if you propagate an updated configuration file by means of the scp command.
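    For example, once the cluster is installed and running, an updated configuration file could be propagated with:
    [root@example-01 ~]# cman_tool version -r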

    Note

    While there are other elements and attributes present in the sample configuration file (for example, fence and fencedevices), there is no need to populate them now. Subsequent procedures in this chapter provide information about specifying other elements and attributes.
  8. Start the cluster. At each cluster node enter the following command:
    service cman start
    For example:
    [root@example-01 ~]# service cman start
    Starting cluster: 
       Checking Network Manager...                             [  OK  ]
       Global setup...                                         [  OK  ]
       Loading kernel modules...                               [  OK  ]
       Mounting configfs...                                    [  OK  ]
       Starting cman...                                        [  OK  ]
       Waiting for quorum...                                   [  OK  ]
       Starting fenced...                                      [  OK  ]
       Starting dlm_controld...                                [  OK  ]
       Starting gfs_controld...                                [  OK  ]
       Unfencing self...                                       [  OK  ]
       Joining fence domain...                                 [  OK  ]
    
  9. At any cluster node, run cman_tool nodes to verify that the nodes are functioning as members in the cluster (signified as "M" in the status column, "Sts"). For example:
    [root@example-01 ~]# cman_tool nodes
    Node  Sts   Inc   Joined               Name
       1   M    548   2010-09-28 10:52:21  node-01.example.com
       2   M    548   2010-09-28 10:52:21  node-02.example.com
       3   M    544   2010-09-28 10:52:21  node-03.example.com
    
  10. If the cluster is running, proceed to Section 8.3, “Configuring Fencing”.

Basic Configuration Examples

Example 8.1, “cluster.conf Sample: Basic Configuration” and Example 8.2, “cluster.conf Sample: Basic Two-Node Configuration” (for a two-node cluster) each provide a very basic sample cluster configuration file as a starting point. Subsequent procedures in this chapter provide information about configuring fencing and HA services.

Example 8.1. cluster.conf Sample: Basic Configuration


<cluster name="mycluster" config_version="2">
   <clusternodes>
     <clusternode name="node-01.example.com" nodeid="1">
         <fence>
         </fence>
     </clusternode>
     <clusternode name="node-02.example.com" nodeid="2">
         <fence>
         </fence>
     </clusternode>
     <clusternode name="node-03.example.com" nodeid="3">
         <fence>
         </fence>
     </clusternode>
   </clusternodes>
   <fencedevices>
   </fencedevices>
   <rm>
   </rm>
</cluster>

Example 8.2. cluster.conf Sample: Basic Two-Node Configuration


<cluster name="mycluster" config_version="2">
   <cman two_node="1" expected_votes="1"/>
   <clusternodes>
     <clusternode name="node-01.example.com" nodeid="1">
         <fence>
         </fence>
     </clusternode>
     <clusternode name="node-02.example.com" nodeid="2">
         <fence>
         </fence>
     </clusternode>
   </clusternodes>
   <fencedevices>
   </fencedevices>
   <rm>
   </rm>
</cluster>

The consensus Value for totem in a Two-Node Cluster

When you create a two-node cluster and do not intend to add more nodes to the cluster at a later time, you should omit the consensus value in the totem tag in the cluster.conf file so that the consensus value is calculated automatically. When the consensus value is calculated automatically, the following rules are used:
  • If there are two nodes or fewer, the consensus value will be (token * 0.2), with a ceiling of 2000 msec and a floor of 200 msec.
  • If there are three or more nodes, the consensus value will be (token + 2000 msec).
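For example, assuming for illustration a token timeout of 10000 msec: a two-node cluster would get a consensus value of 10000 * 0.2 = 2000 msec (the ceiling applies), whereas a cluster of three or more nodes would get 10000 + 2000 = 12000 msec.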
If you let the cman utility configure your consensus timeout in this fashion, then moving at a later time from two to three (or more) nodes will require a cluster restart, since the consensus timeout will need to change to the larger value based on the token timeout.
If you are configuring a two-node cluster and intend to upgrade in the future to more than two nodes, you can override the consensus timeout so that a cluster restart is not required when moving from two to three (or more) nodes. This can be done in the cluster.conf as follows:

<totem token="X" consensus="X + 2000" />
Note that the configuration parser does not calculate X + 2000 automatically. An integer value must be used rather than an equation.
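For example, assuming a token timeout of 10000 msec, you would write the computed value out as an integer:

<totem token="10000" consensus="12000"/>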
The advantage of using the optimized consensus timeout for two-node clusters is that overall failover time is reduced for the two-node case, since the two-node consensus value is capped at 2000 msec rather than being derived from the token timeout plus 2000 msec.
Note that for two-node autodetection in cman, the number of physical nodes is what matters and not the presence of the two_node=1 directive in the cluster.conf file.