9.2.2. Adding a Node to a Cluster

Adding a node to a cluster consists of updating the cluster configuration, propagating the updated configuration to the node to be added, and starting the cluster software on that node. To add a node to a cluster, perform the following steps:
  1. At any node in the cluster, edit the /etc/cluster/cluster.conf file to add a clusternode section for the node that is to be added. For example, in Example 9.2, “Two-node Cluster Configuration”, if node-03.example.com is to be added, then add a clusternode section for that node (a sample entry is sketched below). If adding a node (or nodes) causes the cluster to transition from a two-node cluster to a cluster with three or more nodes, remove the following cman attributes from /etc/cluster/cluster.conf:
    • cman two_node="1"
    • expected_votes="1"
    Refer to Section 9.2.3, “Examples of Three-Node and Two-Node Configurations” for a comparison between a three-node and a two-node configuration.
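    For illustration, a clusternode entry for node-03.example.com might look like the following sketch; the node ID, fence method name, and fence device name and port are assumptions and must match the fence devices defined in your own cluster.conf:
      <clusternode name="node-03.example.com" nodeid="3">
          <fence>
              <method name="APC">
                  <!-- "apc" and port="3" are example values; use your own fencedevice name and port -->
                  <device name="apc" port="3"/>
              </method>
          </fence>
      </clusternode>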
  2. Update the config_version attribute by incrementing its value (for example, changing from config_version="2" to config_version="3").
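    This attribute appears on the opening cluster element of /etc/cluster/cluster.conf; with the example values above (and assuming the cluster name mycluster used elsewhere in this procedure), the line would read:
      <cluster name="mycluster" config_version="3">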
  3. Save /etc/cluster/cluster.conf.
  4. (Optional) Validate the updated file against the cluster schema (cluster.rng) by running the ccs_config_validate command. For example:
    [root@example-01 ~]# ccs_config_validate 
    Configuration validates
    
  5. Run the cman_tool version -r command to propagate the configuration to the rest of the cluster nodes.
  6. Verify that the updated configuration file has been propagated.
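    One way to check (shown here as an illustration; the output formatting may vary slightly between releases) is to run cman_tool version on each node and confirm that the reported configuration version matches the value set in step 2. For example:
    [root@example-01 ~]# cman_tool version
    6.2.0 config 3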
  7. Propagate the updated configuration file to /etc/cluster/ in each node to be added to the cluster. For example, use the scp command to send the updated configuration file to each node to be added to the cluster.
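    A hypothetical scp invocation for the node being added (the host name and destination path are examples) would be:
    [root@example-01 ~]# scp /etc/cluster/cluster.conf root@node-03.example.com:/etc/cluster/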
  8. If the node count of the cluster has transitioned from two nodes to more than two nodes, you must restart the cluster software on the existing cluster nodes as follows:
    1. At each node, stop the cluster software according to Section 9.1.2, “Stopping Cluster Software”. For example:
      [root@example-01 ~]# service rgmanager stop
      Stopping Cluster Service Manager:                          [  OK  ]
      [root@example-01 ~]# service gfs2 stop
      Unmounting GFS2 filesystem (/mnt/gfsA):                    [  OK  ]
      Unmounting GFS2 filesystem (/mnt/gfsB):                    [  OK  ]
      [root@example-01 ~]# service clvmd stop
      Signaling clvmd to exit                                    [  OK  ]
      clvmd terminated                                           [  OK  ]
      [root@example-01 ~]# service cman stop
      Stopping cluster: 
         Leaving fence domain...                                 [  OK  ]
         Stopping gfs_controld...                                [  OK  ]
         Stopping dlm_controld...                                [  OK  ]
         Stopping fenced...                                      [  OK  ]
         Stopping cman...                                        [  OK  ]
         Waiting for corosync to shutdown:                       [  OK  ]
         Unloading kernel modules...                             [  OK  ]
         Unmounting configfs...                                  [  OK  ]
      [root@example-01 ~]#
      
    2. At each node, start the cluster software according to Section 9.1.1, “Starting Cluster Software”. For example:
      [root@example-01 ~]# service cman start
      Starting cluster: 
         Checking Network Manager...                             [  OK  ]
         Global setup...                                         [  OK  ]
         Loading kernel modules...                               [  OK  ]
         Mounting configfs...                                    [  OK  ]
         Starting cman...                                        [  OK  ]
         Waiting for quorum...                                   [  OK  ]
         Starting fenced...                                      [  OK  ]
         Starting dlm_controld...                                [  OK  ]
         Starting gfs_controld...                                [  OK  ]
         Unfencing self...                                       [  OK  ]
         Joining fence domain...                                 [  OK  ]
      [root@example-01 ~]# service clvmd start
      Starting clvmd:                                            [  OK  ]
      Activating VG(s):   2 logical volume(s) in volume group "vg_example" now active
                                                                 [  OK  ]
      [root@example-01 ~]# service gfs2 start
      Mounting GFS2 filesystem (/mnt/gfsA):                      [  OK  ]
      Mounting GFS2 filesystem (/mnt/gfsB):                      [  OK  ]
      [root@example-01 ~]# service rgmanager start
      Starting Cluster Service Manager:                          [  OK  ]
      [root@example-01 ~]#
      
  9. At each node to be added to the cluster, start the cluster software according to Section 9.1.1, “Starting Cluster Software”. For example:
    [root@example-01 ~]# service cman start
    Starting cluster: 
       Checking Network Manager...                             [  OK  ]
       Global setup...                                         [  OK  ]
       Loading kernel modules...                               [  OK  ]
       Mounting configfs...                                    [  OK  ]
       Starting cman...                                        [  OK  ]
       Waiting for quorum...                                   [  OK  ]
       Starting fenced...                                      [  OK  ]
       Starting dlm_controld...                                [  OK  ]
       Starting gfs_controld...                                [  OK  ]
       Unfencing self...                                       [  OK  ]
       Joining fence domain...                                 [  OK  ]
    [root@example-01 ~]# service clvmd start
    Starting clvmd:                                            [  OK  ]
    Activating VG(s):   2 logical volume(s) in volume group "vg_example" now active
                                                               [  OK  ]
    [root@example-01 ~]# service gfs2 start
    Mounting GFS2 filesystem (/mnt/gfsA):                      [  OK  ]
    Mounting GFS2 filesystem (/mnt/gfsB):                      [  OK  ]
    [root@example-01 ~]# service rgmanager start
    Starting Cluster Service Manager:                          [  OK  ]
    [root@example-01 ~]#
    
  10. At any node, using the clustat utility, verify that each added node is running and part of the cluster. For example:
    [root@example-01 ~]# clustat
    Cluster Status for mycluster @ Wed Nov 17 05:40:00 2010
    Member Status: Quorate
    
     Member Name                             ID   Status
     ------ ----                             ---- ------
     node-03.example.com                         3 Online, rgmanager
     node-02.example.com                         2 Online, rgmanager
     node-01.example.com                         1 Online, Local, rgmanager
    
     Service Name                   Owner (Last)                   State         
     ------- ----                   ----- ------                   -----           
     service:example_apache         node-01.example.com            started       
     service:example_apache2        (none)                         disabled
    
    For information about using clustat, see Section 9.3, “Managing High-Availability Services”.
    In addition, you can use cman_tool status to verify node votes, node count, and quorum count. For example:
    [root@example-01 ~]# cman_tool status
    Version: 6.2.0
    Config Version: 19
    Cluster Name: mycluster 
    Cluster Id: 3794
    Cluster Member: Yes
    Cluster Generation: 548
    Membership state: Cluster-Member
    Nodes: 3
    Expected votes: 3
    Total votes: 3
    Node votes: 1
    Quorum: 2  
    Active subsystems: 9
    Flags: 
    Ports Bound: 0 11 177  
    Node name: node-01.example.com
    Node ID: 1
    Multicast addresses: 239.192.14.224 
    Node addresses: 10.15.90.58
    
  11. At any node, you can use the clusvcadm utility to migrate or relocate a running service to the newly joined node, and you can enable any disabled services. For information about using clusvcadm, see Section 9.3, “Managing High-Availability Services”.
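    For example, the following illustrative commands (service and node names are taken from the clustat output in the previous step; adjust them to your configuration) relocate the example_apache service to the new node and then enable the disabled example_apache2 service on it:
    [root@example-01 ~]# clusvcadm -r example_apache -m node-03.example.com
    [root@example-01 ~]# clusvcadm -e example_apache2 -m node-03.example.com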

Note

When you add a node to a cluster that uses UDPU transport, you must restart all nodes in the cluster for the change to take effect.