9.2. Deleting or Adding a Node


This section describes how to delete a node from a cluster and add a node to a cluster. You can delete a node from a cluster according to Section 9.2.1, “Deleting a Node from a Cluster”; you can add a node to a cluster according to Section 9.2.2, “Adding a Node to a Cluster”.

9.2.1. Deleting a Node from a Cluster

Deleting a node from a cluster consists of shutting down the cluster software on the node to be deleted and updating the cluster configuration to reflect the change.

Important

If deleting a node from the cluster causes a transition from greater than two nodes to two nodes, you must restart the cluster software at each node after updating the cluster configuration file.

To delete a node from a cluster, perform the following steps:
  1. At any node, use the clusvcadm utility to relocate, migrate, or stop each HA service running on the node that is being deleted from the cluster. For information about using clusvcadm, see Section 9.3, “Managing High-Availability Services”.
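    For example, to relocate the example_apache service away from the node that is being deleted (node-03.example.com in this example) to node-01.example.com, you could run a command similar to the following; the service name and target node shown here are for illustration only:
    [root@example-01 ~]# clusvcadm -r example_apache -m node-01.example.com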
  2. At the node to be deleted from the cluster, stop the cluster software according to Section 9.1.2, “Stopping Cluster Software”. For example:
    [root@example-01 ~]# service rgmanager stop
    Stopping Cluster Service Manager:                          [  OK  ]
    [root@example-01 ~]# service gfs2 stop
    Unmounting GFS2 filesystem (/mnt/gfsA):                    [  OK  ]
    Unmounting GFS2 filesystem (/mnt/gfsB):                    [  OK  ]
    [root@example-01 ~]# service clvmd stop
    Signaling clvmd to exit                                    [  OK  ]
    clvmd terminated                                           [  OK  ]
    [root@example-01 ~]# service cman stop
    Stopping cluster: 
       Leaving fence domain...                                 [  OK  ]
       Stopping gfs_controld...                                [  OK  ]
       Stopping dlm_controld...                                [  OK  ]
       Stopping fenced...                                      [  OK  ]
       Stopping cman...                                        [  OK  ]
       Waiting for corosync to shutdown:                       [  OK  ]
       Unloading kernel modules...                             [  OK  ]
       Unmounting configfs...                                  [  OK  ]
    [root@example-01 ~]#
    
  3. At any node in the cluster, edit the /etc/cluster/cluster.conf file to remove the clusternode section of the node that is to be deleted. For example, in Example 9.1, “Three-node Cluster Configuration”, if node-03.example.com is to be removed, delete the clusternode section for that node (a sample clusternode section is shown at the end of this step). If removing a node (or nodes) causes the cluster to become a two-node cluster, you can add the following line to the configuration file to allow a single node to maintain quorum (for example, if one node fails):
    <cman two_node="1" expected_votes="1"/>
    Refer to Section 9.2.3, “Examples of Three-Node and Two-Node Configurations” for comparison between a three-node and a two-node configuration.
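    For illustration, the clusternode section removed in this example might look similar to the following; the fence method and device names shown here are assumptions and will differ in your configuration:
    <clusternode name="node-03.example.com" nodeid="3">
        <fence>
            <method name="APC">
                <device name="apc" port="3"/>
            </method>
        </fence>
    </clusternode>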
  4. Update the config_version attribute by incrementing its value (for example, changing from config_version="2" to config_version="3").
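    For example, assuming the cluster is named mycluster (as in the clustat output shown later in this procedure), the opening cluster tag would change to the following:
    <cluster name="mycluster" config_version="3">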
  5. Save /etc/cluster/cluster.conf.
  6. (Optional) Validate the updated file against the cluster schema (cluster.rng) by running the ccs_config_validate command. For example:
    [root@example-01 ~]# ccs_config_validate 
    Configuration validates
    
  7. Run the cman_tool version -r command to propagate the configuration to the rest of the cluster nodes.
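    For example:
    [root@example-01 ~]# cman_tool version -r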
  8. Verify that the updated configuration file has been propagated.
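    One way to verify this is to run the cman_tool version command on each node and confirm that the configuration version it reports matches the value you set in step 4; the output shown here is illustrative:
    [root@example-01 ~]# cman_tool version
    6.2.0 config 3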
  9. If the node count of the cluster has transitioned from greater than two nodes to two nodes, you must restart the cluster software as follows:
    1. At each node, stop the cluster software according to Section 9.1.2, “Stopping Cluster Software”. For example:
      [root@example-01 ~]# service rgmanager stop
      Stopping Cluster Service Manager:                          [  OK  ]
      [root@example-01 ~]# service gfs2 stop
      Unmounting GFS2 filesystem (/mnt/gfsA):                    [  OK  ]
      Unmounting GFS2 filesystem (/mnt/gfsB):                    [  OK  ]
      [root@example-01 ~]# service clvmd stop
      Signaling clvmd to exit                                    [  OK  ]
      clvmd terminated                                           [  OK  ]
      [root@example-01 ~]# service cman stop
      Stopping cluster: 
         Leaving fence domain...                                 [  OK  ]
         Stopping gfs_controld...                                [  OK  ]
         Stopping dlm_controld...                                [  OK  ]
         Stopping fenced...                                      [  OK  ]
         Stopping cman...                                        [  OK  ]
         Waiting for corosync to shutdown:                       [  OK  ]
         Unloading kernel modules...                             [  OK  ]
         Unmounting configfs...                                  [  OK  ]
      [root@example-01 ~]#
      
    2. At each node, start the cluster software according to Section 9.1.1, “Starting Cluster Software”. For example:
      [root@example-01 ~]# service cman start
      Starting cluster: 
         Checking Network Manager...                             [  OK  ]
         Global setup...                                         [  OK  ]
         Loading kernel modules...                               [  OK  ]
         Mounting configfs...                                    [  OK  ]
         Starting cman...                                        [  OK  ]
         Waiting for quorum...                                   [  OK  ]
         Starting fenced...                                      [  OK  ]
         Starting dlm_controld...                                [  OK  ]
         Starting gfs_controld...                                [  OK  ]
         Unfencing self...                                       [  OK  ]
         Joining fence domain...                                 [  OK  ]
      [root@example-01 ~]# service clvmd start
      Starting clvmd:                                            [  OK  ]
      Activating VG(s):   2 logical volume(s) in volume group "vg_example" now active
                                                                 [  OK  ]
      [root@example-01 ~]# service gfs2 start
      Mounting GFS2 filesystem (/mnt/gfsA):                      [  OK  ]
      Mounting GFS2 filesystem (/mnt/gfsB):                      [  OK  ]
      [root@example-01 ~]# service rgmanager start
      Starting Cluster Service Manager:                          [  OK  ]
      [root@example-01 ~]#
      
    3. At any cluster node, run the cman_tool nodes command to verify that the nodes are functioning as members in the cluster (signified as "M" in the status column, "Sts"). For example:
      [root@example-01 ~]# cman_tool nodes
      Node  Sts   Inc   Joined               Name
         1   M    548   2010-09-28 10:52:21  node-01.example.com
         2   M    548   2010-09-28 10:52:21  node-02.example.com
      
    4. At any node, using the clustat utility, verify that the HA services are running as expected. In addition, clustat displays the status of the cluster nodes. For example:
      [root@example-01 ~]# clustat
      Cluster Status for mycluster @ Wed Nov 17 05:40:00 2010
      Member Status: Quorate
      
       Member Name                             ID   Status
       ------ ----                             ---- ------
       node-02.example.com                         2 Online, rgmanager
       node-01.example.com                         1 Online, Local, rgmanager
      
       Service Name                   Owner (Last)                   State         
       ------- ----                   ----- ------                   -----           
       service:example_apache         node-01.example.com            started       
       service:example_apache2        (none)                         disabled
      