Chapter 4. Fencing Controller nodes with STONITH
Fencing is the process of isolating a failed node to protect the cluster and the cluster resources. Without fencing, a failed node can cause data corruption in a cluster. Director uses Pacemaker to provide a highly available cluster of Controller nodes.
Pacemaker uses a process called STONITH to fence failed nodes. STONITH is an acronym for "Shoot the other node in the head". STONITH is disabled by default and requires manual configuration so that Pacemaker can control the power management of each node in the cluster.
If a Controller node fails a health check, the Controller node that acts as the Pacemaker designated coordinator (DC) uses the Pacemaker stonith service to fence the impacted Controller node.
Deploying a highly available overcloud without STONITH is not supported. You must configure a STONITH device for each node that is a part of the Pacemaker cluster in a highly available overcloud. For more information on STONITH and Pacemaker, see Fencing in a Red Hat High Availability Cluster and Support Policies for RHEL High Availability Clusters.
4.1. Supported fencing agents
When you deploy a high availability environment with fencing, you can choose the fencing agents based on your environment needs. To change the fencing agent, you must configure additional parameters in the fencing.yaml file.
Red Hat OpenStack Platform (RHOSP) supports the following fencing agents:
- Intelligent Platform Management Interface (IPMI)
Default fencing mechanism that Red Hat OpenStack Platform (RHOSP) uses to manage fencing.
- STONITH Block Device (SBD)
The SBD (Storage-Based Death) daemon integrates with Pacemaker and a watchdog device to arrange for nodes to reliably shut down when fencing is triggered and in cases where traditional fencing mechanisms are not available.
Important
- SBD fencing is not supported in clusters with remote bare metal or virtual machine nodes that use pacemaker_remote, so it is not supported if your deployment uses Instance HA.
- fence_sbd and sbd poison-pill fencing with block storage devices are not supported.
- SBD fencing is only supported with compatible watchdog devices. For more information, see Support Policies for RHEL High Availability Clusters - sbd and fence_sbd.
- fence_kdump
Use in deployments with the kdump crash recovery service. If you choose this agent, ensure that you have enough disk space to store the dump files.
You can configure this agent as a secondary mechanism in addition to the IPMI, fence_rhevm, or Redfish fencing agents. If you configure multiple fencing agents, make sure that you allocate enough time for the first agent to complete the task before the second agent starts the next task.
Important
- RHOSP director supports only the configuration of the fence_kdump STONITH agent, and not the configuration of the full kdump service that the fencing agent depends on. For information about configuring the kdump service, see the article How do I configure fence_kdump in a Red Hat Pacemaker cluster.
- fence_kdump is not supported if the Pacemaker network traffic interface uses the ovs_bridges or ovs_bonds network device. To enable fence_kdump, you must change the network device to linux_bond or linux_bridge. For more information about network interface configuration, see Network interface reference.
- Redfish
Use in deployments with servers that support the DMTF Redfish APIs. To specify this agent, change the value of the agent parameter to fence_redfish in the fencing.yaml file. For more information about Redfish, see the DMTF Documentation.
- fence_rhevm for Red Hat Virtualization (RHV)
Use to configure fencing for Controller nodes that run in RHV environments. You can generate the fencing.yaml file in the same way as you do for IPMI fencing, but you must define the pm_type parameter in the nodes.json file to use RHV.
By default, the ssl_insecure parameter is set to accept self-signed certificates. You can change the parameter value based on your security requirements.
Important
Ensure that you use a role that has permissions to create and launch virtual machines in RHV, such as UserVMManager.
- Multi-layered fencing
You can configure multiple fencing agents to support complex fencing use cases. For example, you can configure IPMI fencing together with fence_kdump. The order of the fencing agents determines the order in which Pacemaker triggers each mechanism. For an example configuration, see the sketch in Section 4.2, “Deploying fencing on the overcloud”.
4.2. Deploying fencing on the overcloud
To deploy fencing on the overcloud, first review the state of STONITH and Pacemaker and configure the fencing.yaml file. Then, deploy the overcloud and configure additional parameters. Finally, test that fencing is deployed correctly on the overcloud.
Prerequisites
- Choose the correct fencing agent for your deployment. For the list of supported fencing agents, see Section 4.1, “Supported fencing agents”.
- Ensure that you can access the nodes.json file that you created when you registered your nodes in director. This file is a required input for the fencing.yaml file that you generate during deployment.
- The nodes.json file must contain the MAC address of one of the network interfaces (NICs) on the node. For more information, see Registering Nodes for the Overcloud.
- If you use the Red Hat Virtualization (RHV) fencing agent, use a role that has permissions to manage virtual machines, such as UserVMManager.
Procedure
- Log in to each Controller node as the heat-admin user.
- Verify that the cluster is running:

  $ sudo pcs status
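  The output varies by deployment; an illustrative excerpt, assuming three Controller nodes named overcloud-controller-0 through overcloud-controller-2:

    Cluster name: tripleo_cluster
    ...
    Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
    ...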
- Verify that STONITH is disabled:

  $ sudo pcs property show
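  An illustrative excerpt of the output; the property to check is stonith-enabled, which must be false at this stage:

    Cluster Properties:
     ...
     stonith-enabled: false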
- Depending on the fencing agent that you want to use, choose one of the following options:
- If you use the IPMI or RHV fencing agent, generate the fencing.yaml environment file:

  $ openstack overcloud generate fencing --output fencing.yaml nodes.json

  Note: This command converts ilo and drac power management details to IPMI equivalents.
- If you use a different fencing agent, such as STONITH Block Device (SBD), fence_kdump, or Redfish, or if you use pre-provisioned nodes, create the fencing.yaml file manually.
- SBD fencing only: Add the following parameter to the fencing.yaml file:

    parameter_defaults:
      ExtraConfig:
        pacemaker::corosync::enable_sbd: true

  Note: This step is applicable to initial overcloud deployments only. For more information about how to enable SBD fencing on an existing overcloud, see Enabling sbd fencing in RHEL 7 and 8.
- Multi-layered fencing only: Add the level-specific parameters to the generated fencing.yaml file:
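  A minimal sketch of the level-specific structure, assuming IPMI as the first fencing level and fence_kdump as the second; the agent names and the host_mac value are illustrative placeholders:

    parameter_defaults:
      EnableFencing: true
      FencingConfig:
        devices:
          level1:
          - agent: fence_ipmilan
            host_mac: aa:bb:cc:dd:ee:ff
            params:
              <parameter>: <value>
          level2:
          - agent: fence_kdump
            host_mac: aa:bb:cc:dd:ee:ff
            params:
              <parameter>: <value>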
  Replace <parameter> and <value> with the actual parameters and values that the fencing agent requires.

- Run the overcloud deploy command and include the fencing.yaml file and any other environment files that are relevant for your deployment:

  $ openstack overcloud deploy --templates \
    -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
    -e ~/templates/network-environment.yaml \
    -e ~/templates/storage-environment.yaml \
    --ntp-server pool.ntp.org \
    --neutron-network-type vxlan \
    --neutron-tunnel-types vxlan \
    -e fencing.yaml

- SBD fencing only: Set the watchdog timer device interval and check that the interval is set correctly:
  # pcs property set stonith-watchdog-timeout=<interval>
  # pcs property show
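  For example, if you set a 10-second interval, the relevant property appears in the pcs property show output as follows (illustrative):

    stonith-watchdog-timeout: 10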
Verification
- Log in to the overcloud as the stack user and check that Pacemaker is configured as the resource manager:

  $ source stackrc
  $ openstack server list | grep controller
  $ ssh heat-admin@<controller-x_ip>
  $ sudo pcs status | grep fence
  stonith-overcloud-controller-x (stonith:fence_ipmilan): Started overcloud-controller-y

  In this example, Pacemaker is configured to use a STONITH resource for each of the Controller nodes that are specified in the fencing.yaml file.

  Note: You must not configure the fence-resource process on the same node that it controls.

- Check the fencing resource attributes. The STONITH attribute values must match the values in the fencing.yaml file:

  $ sudo pcs stonith show <stonith-resource-controller-x>
4.3. Testing fencing on the overcloud
To test that fencing works correctly, trigger fencing by closing all ports on a Controller node and restarting the server.
This procedure deliberately drops all connections to the Controller node, which causes the node to restart.
Prerequisites
- Fencing is deployed and running on the overcloud. For information on how to deploy fencing, see Section 4.2, “Deploying fencing on the overcloud”.
- A Controller node is available to restart.
Procedure
- Log in to a Controller node as the stack user and source the credentials file:

  $ source stackrc
  $ openstack server list | grep controller
  $ ssh heat-admin@<controller-x_ip>
rootuser and close all connections to the Controller node:Copy to Clipboard Copied! Toggle word wrap Toggle overflow From a different Controller node, locate the fencing event in the Pacemaker log file:
- From a different Controller node, locate the fencing event in the Pacemaker log file:

  $ ssh heat-admin@<controller-x_ip>
  $ less /var/log/cluster/corosync.log
  (less): /fenc*

  If the STONITH service performed the fencing action on the Controller, the log file shows a fencing event.
- Wait a few minutes and then verify that the restarted Controller node is running in the cluster again by running the pcs status command. If you can see the Controller node that you restarted in the output, fencing functions correctly.
4.4. Viewing STONITH device information
To see how STONITH configures your fencing devices, run the pcs stonith show --full command from the overcloud.
Prerequisites
- Fencing is deployed and running on the overcloud. For information on how to deploy fencing, see Section 4.2, “Deploying fencing on the overcloud”.
Procedure
- Show the list of Controller nodes and the status of their STONITH devices:

  $ sudo pcs stonith show --full
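  An illustrative excerpt for one resource; the resource name, IP address, credentials, and interval are placeholder values that correspond to the items described below:

    Resource: stonith-overcloud-controller-0 (class=stonith type=fence_ipmilan)
     Attributes: ipaddr=10.100.0.51 login=admin passwd=abc lanplus=1
     Operations: monitor interval=60s (stonith-overcloud-controller-0-monitor-interval-60s)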
This output shows the following information for each resource:
- IPMI power management service that the fencing device uses to turn the machines on and off as needed, such as fence_ipmilan.
- IP address of the IPMI interface, such as 10.100.0.51.
- User name to log in with, such as admin.
- Password to use to log in to the node, such as abc.
- Interval in seconds at which each host is monitored, such as 60s.
4.5. Fencing parameters
When you deploy fencing on the overcloud, you generate the fencing.yaml file with the required parameters to configure fencing.
The following example shows the structure of the fencing.yaml environment file:
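A minimal sketch of the file, assuming a single IPMI fencing device; the MAC address, IP address, and credentials are placeholder values:

    parameter_defaults:
      EnableFencing: true
      FencingConfig:
        devices:
        - agent: fence_ipmilan
          host_mac: 11:11:11:11:11:11
          params:
            auth: password
            ipaddr: 10.0.0.101
            ipport: 623
            login: admin
            passwd: InsertComplexPasswordHere
            lanplus: 1
            privlvl: administrator
            pcmk_host_list: overcloud-controller-0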
This file contains the following parameters:
- EnableFencing
Enables the fencing functionality for Pacemaker-managed nodes.
- FencingConfig
Lists the fencing devices and the parameters for each device:
  - agent: Fencing agent name.
  - host_mac: The MAC address in lowercase of the provisioning interface or any other network interface on the server. You can use this as a unique identifier for the fencing device.
  - params: List of fencing device parameters.
- Fencing device parameters
Lists the fencing device parameters. This example shows the parameters for the IPMI fencing agent:
  - auth: IPMI authentication type (md5, password, or none).
  - ipaddr: IPMI IP address.
  - ipport: IPMI port.
  - login: Username for the IPMI device.
  - passwd: Password for the IPMI device.
  - lanplus: Use lanplus to improve the security of the connection.
  - privlvl: Privilege level on the IPMI device.
  - pcmk_host_list: List of Pacemaker hosts.