5.5. Adding and Deleting Members


The procedure to add a member to a cluster varies depending on whether the cluster is a newly-configured cluster or a cluster that is already configured and running. To add a member to a new cluster, refer to Section 5.5.1, “Adding a Member to a Cluster”. To add a member to an existing cluster, refer to Section 5.5.2, “Adding a Member to a Running Cluster”. To delete a member from a cluster, refer to Section 5.5.3, “Deleting a Member from a Cluster”.

5.5.1. Adding a Member to a Cluster

To add a member to a new cluster, follow these steps:
  1. Click Cluster Node.
  2. At the bottom of the right frame (labeled Properties), click the Add a Cluster Node button. Clicking that button causes a Node Properties dialog box to be displayed. The Node Properties dialog box presents text boxes for Cluster Node Name and Quorum Votes (refer to Figure 5.5, “Adding a Member to a New Cluster”).
    Figure 5.5. Adding a Member to a New Cluster

  3. At the Cluster Node Name text box, specify a node name. The entry can be a name or an IP address of the node on the cluster subnet.

    Note

    Each node must be on the same subnet as the node from which you are running the Cluster Configuration Tool and must be defined either in DNS or in the /etc/hosts file of each cluster node.

    Note

    The node on which you are running the Cluster Configuration Tool must be explicitly added as a cluster member; the node is not automatically added to the cluster configuration as a result of running the Cluster Configuration Tool.
  4. Optionally, at the Quorum Votes text box, you can specify a value; however, in most configurations you can leave it blank. Leaving the Quorum Votes text box blank causes the quorum votes value for that node to be set to the default value of 1 (see the example cluster.conf entry after this procedure).
  5. Click OK.
  6. Configure fencing for the node:
    1. Click the node that you added in the previous step.
    2. At the bottom of the right frame (below Properties), click Manage Fencing For This Node. Clicking Manage Fencing For This Node causes the Fence Configuration dialog box to be displayed.
    3. At the Fence Configuration dialog box, bottom of the right frame (below Properties), click Add a New Fence Level. Clicking Add a New Fence Level causes a fence-level element (for example, Fence-Level-1, Fence-Level-2, and so on) to be displayed below the node in the left frame of the Fence Configuration dialog box.
    4. Click the fence-level element.
    5. At the bottom of the right frame (below Properties), click Add a New Fence to this Level. Clicking Add a New Fence to this Level causes the Fence Properties dialog box to be displayed.
    6. At the Fence Properties dialog box, click the Fence Device Type drop-down box and select the fence device for this node. Also, provide additional information required (for example, Port and Switch for an APC Power Device).
    7. At the Fence Properties dialog box, click OK. Clicking OK causes a fence device element to be displayed below the fence-level element.
    8. To create additional fence devices at this fence level, return to step 6.4 (clicking the fence-level element). Otherwise, proceed to the next step.
    9. To create additional fence levels, return to step 6.3 (clicking Add a New Fence Level). Otherwise, proceed to the next step.
    10. If you have configured all the fence levels and fence devices for this node, click Close.
  7. Choose File => Save to save the changes to the cluster configuration.
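
Completing the preceding steps adds a clusternode entry for the new member to the /etc/cluster/cluster.conf file. The following is a minimal sketch of such an entry, assuming a hypothetical node named node-01.example.com fenced through an APC power switch defined elsewhere in the file as a fence device named apc-switch; the exact device attributes depend on the fence device type selected in step 6, and the votes attribute reflects the Quorum Votes value from step 4:

    <clusternode name="node-01.example.com" votes="1" nodeid="1">
        <fence>
            <method name="1">
                <device name="apc-switch" port="1" switch="1"/>
            </method>
        </fence>
    </clusternode>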

5.5.2. Adding a Member to a Running Cluster

The procedure for adding a member to a running cluster depends on whether the cluster contains only two nodes or more than two nodes. To add a member to a running cluster, follow the steps in one of the following sections, according to the number of nodes in the cluster: for a cluster that contains only two nodes, refer to Section 5.5.2.1, “Adding a Member to a Running Cluster That Contains Only Two Nodes”; for a cluster that contains more than two nodes, refer to Section 5.5.2.2, “Adding a Member to a Running Cluster That Contains More Than Two Nodes”.

5.5.2.1. Adding a Member to a Running Cluster That Contains Only Two Nodes

To add a member to an existing cluster that is currently in operation, and contains only two nodes, follow these steps:
  1. Add the node and configure fencing for it as described in Section 5.5.1, “Adding a Member to a Cluster”.
  2. Click Send to Cluster to propagate the updated configuration to other running nodes in the cluster.
  3. Use the scp command to send the updated /etc/cluster/cluster.conf file from one of the existing cluster nodes to the new node.
  4. At the Red Hat Cluster Suite management GUI Cluster Status Tool tab, disable each service listed under Services.
  5. Stop the cluster software on the two running nodes by running the following commands at each node in this order:
    1. service rgmanager stop
    2. service gfs stop, if you are using Red Hat GFS
    3. service clvmd stop, if CLVM has been used to create clustered volumes
    4. service cman stop
  6. Start cluster software on all cluster nodes (including the added one) by running the following commands in this order:
    1. service cman start
    2. service clvmd start, if CLVM has been used to create clustered volumes
    3. service gfs start, if you are using Red Hat GFS
    4. service rgmanager start
  7. Start the Red Hat Cluster Suite management GUI. At the Cluster Configuration Tool tab, verify that the configuration is correct. At the Cluster Status Tool tab verify that the nodes and services are running as expected.
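
As a sketch of step 3, assuming the existing member is node-01.example.com and the new member is node-03.example.com (hypothetical names), the updated configuration file can be copied by running the following command on the existing member:

    scp /etc/cluster/cluster.conf root@node-03.example.com:/etc/cluster/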

5.5.2.2. Adding a Member to a Running Cluster That Contains More Than Two Nodes

To add a member to an existing cluster that is currently in operation, and contains more than two nodes, follow these steps:
  1. Add the node and configure fencing for it as described in Section 5.5.1, “Adding a Member to a Cluster”.
  2. Click Send to Cluster to propagate the updated configuration to other running nodes in the cluster.
  3. Use the scp command to send the updated /etc/cluster/cluster.conf file from one of the existing cluster nodes to the new node.
  4. Start cluster services on the new node by running the following commands in this order:
    1. service cman start
    2. service clvmd start, if CLVM has been used to create clustered volumes
    3. service gfs start, if you are using Red Hat GFS
    4. service rgmanager start
  5. Start the Red Hat Cluster Suite management GUI. At the Cluster Configuration Tool tab, verify that the configuration is correct. At the Cluster Status Tool tab verify that the nodes and services are running as expected.
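
In addition to the GUI checks in step 5, you can verify membership and service status from a shell prompt on any cluster node; cman_tool nodes lists the cluster members and their states, and clustat summarizes node and service status:

    cman_tool nodes
    clustat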

5.5.3. Deleting a Member from a Cluster

To delete a member from an existing cluster that is currently in operation, follow these steps:
  1. At one of the running nodes (not to be removed), run the Red Hat Cluster Suite management GUI. At the Cluster Status Tool tab, under Services, disable or relocate each service that is running on the node to be deleted.
  2. Stop the cluster software on the node to be deleted by running the following commands at that node in this order:
    1. service rgmanager stop
    2. service gfs stop, if you are using Red Hat GFS
    3. service clvmd stop, if CLVM has been used to create clustered volumes
    4. service cman stop
  3. At the Cluster Configuration Tool (on one of the running members), delete the member as follows:
    1. If necessary, click the triangle icon to expand the Cluster Nodes property.
    2. Select the cluster node to be deleted. At the bottom of the right frame (labeled Properties), click the Delete Node button.
    3. Clicking the Delete Node button causes a warning dialog box to be displayed requesting confirmation of the deletion (Figure 5.6, “Confirm Deleting a Member”).
      Figure 5.6. Confirm Deleting a Member

    4. At that dialog box, click Yes to confirm deletion.
    5. Propagate the updated configuration by clicking the Send to Cluster button. (Propagating the updated configuration automatically saves the configuration.)
  4. Stop the cluster software on the remaining running nodes by running the following commands at each node in this order:
    1. service rgmanager stop
    2. service gfs stop, if you are using Red Hat GFS
    3. service clvmd stop, if CLVM has been used to create clustered volumes
    4. service cman stop
  5. Start cluster software on all remaining cluster nodes by running the following commands in this order:
    1. service cman start
    2. service clvmd start, if CLVM has been used to create clustered volumes
    3. service gfs start, if you are using Red Hat GFS
    4. service rgmanager start
  6. Start the Red Hat Cluster Suite management GUI. At the Cluster Configuration Tool tab, verify that the configuration is correct. At the Cluster Status Tool tab verify that the nodes and services are running as expected.
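
As a quick command-line check on one of the remaining nodes, you can confirm that the deleted member no longer appears in the cluster configuration; for example, a simple grep of the node entries:

    grep "clusternode name" /etc/cluster/cluster.conf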

5.5.3.1. Removing a Member from a Cluster at the Command-Line

If desired, you can also manually relocate and remove cluster members by using the clusvcadm command at a shell prompt.
  1. To prevent service downtime, any services running on the member to be removed must be relocated to another node on the cluster by running the following command:
    clusvcadm -r cluster_service_name -m cluster_node_name
    
    Where cluster_service_name is the name of the service to be relocated and cluster_node_name is the name of the cluster node to which the service will be relocated (see the example after this procedure).
  2. Stop the cluster software on the node to be removed by running the following commands at that node in this order:
    1. service rgmanager stop
    2. service gfs stop and/or service gfs2 stop, if you are using gfs, gfs2 or both
    3. umount -a -t gfs and/or umount -a -t gfs2, if you are using either (or both) in conjunction with rgmanager
    4. service clvmd stop, if CLVM has been used to create clustered volumes
    5. service cman stop remove
  3. To ensure that the removed member does not rejoin the cluster after it reboots, run the following set of commands:
    chkconfig cman off
    chkconfig rgmanager off
    chkconfig clvmd off
    chkconfig gfs off
    chkconfig gfs2 off
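
As a worked sketch of step 1, assume a hypothetical service named webservice is running on the member to be removed and that node-02.example.com is a surviving member; clustat can be used first to see where services are currently running, and clusvcadm then relocates the service:

    clustat
    clusvcadm -r webservice -m node-02.example.com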
    