Chapter 7. Adding a node to the cluster
In an SAP S/4HANA setup with ENSA2 for your ASCS and ERS instances, you can configure more than two nodes in the cluster to increase the resiliency and flexibility of your environment.
7.1. Preparing a new cluster node
To add a new node to an existing cluster that manages SAP application server instances, you first prepare the instance-specific operating system setup in the same way as on the existing cluster nodes.
You must complete the steps in the following sections before you proceed.
7.2. Preparing the new node for application server instances using SAP SWPM
Use the SAP Software Provisioning Manager (SWPM) to prepare the node for an existing instance. See Running Software Provisioning Manager for more details about the SAP software installation.
Prerequisites
- You have installed and configured the new HA cluster node according to the recommendations from SAP and Red Hat for running SAP application server instances on RHEL 9. See Operating system requirements.
- You have mounted the following file systems on the new HA cluster node:
  - /sapmnt
  - /usr/sap/trans
  - /usr/sap/<SID>
- You have the installation media available on the new system.
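The mounted-file-system prerequisite can be verified with a short loop before you start SWPM. The `check_mounts` helper below is a hypothetical illustration, and the example run uses `/` and a fresh temporary directory so the result is known in advance; on the new node you would pass the three paths listed above instead.

```shell
# Hypothetical helper: report whether each given path is a mount point.
# On the new node, call it with /sapmnt /usr/sap/trans /usr/sap/<SID>.
check_mounts() {
  for fs in "$@"; do
    if mountpoint -q "$fs"; then
      echo "$fs: mounted"
    else
      echo "$fs: NOT mounted"
    fi
  done
}

# Example run against paths with a known state: / is always a mount point,
# and a freshly created temporary directory is not.
tmp=$(mktemp -d)
result=$(check_mounts / "$tmp")
printf '%s\n' "$result"
rmdir "$tmp"
```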
Procedure
On the new node, go to the directory where you have extracted the installation media:
[root]# cd <software_path>
- Replace <software_path> with the path to your unpacked media, for example, /sapmedia/SWPM20_SP19/.
Run the installer command on the new node:
[root]# ./sapinst
- Open the web installer UI using the link provided in the terminal.
- Open the SAP product you want to install and enter the installation option. Expand the High-Availability System option and select Prepare Additional Cluster Node. Click Next.
Provide the requested installation information on each page and click Next to move forward.
Some steps, like extracting SAP packages, can take a while. Keep an eye on the terminal in which you started the installer for details of the ongoing process that are not displayed in the web UI.
Verification
Check that the new node has the SAP ports in the services file. For example, count the entries that contain SAP System in their port description and compare the result on all existing nodes:
[root]# grep -i "SAP System" /etc/services | wc -l
401
Update the /etc/services file on the node if it is missing entries.
Check that the /usr/sap/hostctrl/ path is present and that the version is the same as on the existing cluster nodes:
[root]# /usr/sap/hostctrl/exe/saphostexec -version
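The counting pipeline from the verification step can be tried against a small sample first. The three `/etc/services`-style lines below are assumed example data (two of them carry "SAP System" in the comment); on a real node you run the same pipeline against `/etc/services`:

```shell
# Three sample lines standing in for /etc/services (assumed data); two of
# them carry "SAP System" in the port description.
services_sample='sapmsS4H 3620/tcp # SAP System Message Server Port
sapdp20 3220/tcp # SAP System Dispatcher Port
ntp 123/udp'

# Same pipeline as the verification step, run against the sample:
count=$(printf '%s\n' "$services_sample" | grep -ic "SAP System")
echo "$count"
```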
7.3. Copying the /usr/sap/sapservices file from an existing node
SAP instance services are managed through the local /usr/sap/sapservices file, which is created during the instance installation.
On the new cluster node you do not perform an instance installation. Therefore, you must copy this file from an existing node.
Procedure
Copy the /usr/sap/sapservices file directly from one node to the new node, for example, using root ssh keys between node1 and node3:
[root]# rsync -av node1:/usr/sap/sapservices /usr/sap/sapservices
Verification
Check that the file exists and has the same owner and permissions as on the source node:
[root]# ls -lh /usr/sap/sapservices
-rwxr-xr-x. 1 root sapinst 208 Jun 16 13:59 /usr/sap/sapservices
Check that the file contains the configured instances, for example, ASCS and ERS:
[root]# cat /usr/sap/sapservices
systemctl --no-ask-password start SAPS4H_20 # sapstartsrv pf=/usr/sap/S4H/SYS/profile/S4H_ASCS20_s4hascs
systemctl --no-ask-password start SAPS4H_29 # sapstartsrv pf=/usr/sap/S4H/SYS/profile/S4H_ERS29_s4hers
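To see at a glance which instances the copied file starts, you can extract the profile paths from its comments. This sketch runs against a sample variable containing the two example entries from this chapter; on a real node, point the pipeline at /usr/sap/sapservices instead:

```shell
# sapservices-style sample (the two example entries from this chapter):
sapservices='systemctl --no-ask-password start SAPS4H_20 # sapstartsrv pf=/usr/sap/S4H/SYS/profile/S4H_ASCS20_s4hascs
systemctl --no-ask-password start SAPS4H_29 # sapstartsrv pf=/usr/sap/S4H/SYS/profile/S4H_ERS29_s4hers'

# Extract the instance profile paths; each configured instance contributes
# one pf=... token, so the output lists exactly the configured instances.
profiles=$(printf '%s\n' "$sapservices" | grep -o 'pf=[^ ]*')
printf '%s\n' "$profiles"
```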
7.4. Configuring the systemd-based SAP startup framework
Systemd integration is the default configuration as of SAP Kernel Release 788. In HA environments you must apply additional modifications to integrate the different systemd services that are involved in the cluster setup.
Prerequisites
- You have configured the systemd-based SAP startup framework on the existing cluster nodes. If the existing nodes do not use the systemd-based startup framework, skip this section.
Procedure
Register the ASCS instance. Run the following SAP command as the root user on the new node to create the systemd integration:
[root]# export LD_LIBRARY_PATH=/usr/sap/<SID>/ASCS<instance>/exe && \
/usr/sap/<SID>/ASCS<instance>/exe/sapstartsrv \
pf=/usr/sap/<SID>/SYS/profile/<SID>_ASCS<instance>_<ascs_virtual_hostname> \
-reg
The command executes the sapstartsrv service for the selected instance profile and registers the instance service on the current system. It creates the systemd unit for the instance service, if it does not exist, and updates the local /usr/sap/sapservices file.
- Replace <SID> with your ASCS instance SID, for example, S4H.
- Replace <instance> with your ASCS instance number, for example, 20.
- Replace <ascs_virtual_hostname> with the virtual hostname for your ASCS instance, for example, s4hascs.
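The registration in step 1 is the same for every instance, so it can be sketched as a loop. The sketch below is a dry run that only prints the command for each instance; SID, instance numbers, and virtual hostnames are the example values from this chapter, and a real run would also export LD_LIBRARY_PATH first and execute as root on the new node:

```shell
# Dry-run sketch: print the registration command for each instance instead
# of executing it. Values are the chapter's examples (SID S4H, ASCS20 with
# virtual hostname s4hascs, ERS29 with s4hers). A real run also exports
# LD_LIBRARY_PATH=/usr/sap/$SID/$inst/exe before calling sapstartsrv.
SID=S4H
declare -A vhost=( [ASCS20]=s4hascs [ERS29]=s4hers )

cmds=$(
  for inst in ASCS20 ERS29; do
    echo "/usr/sap/$SID/$inst/exe/sapstartsrv pf=/usr/sap/$SID/SYS/profile/${SID}_${inst}_${vhost[$inst]} -reg"
  done
)
printf '%s\n' "$cmds"
```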
Register the ERS instance by repeating step 1 for the ERS profile:
[root]# export LD_LIBRARY_PATH=/usr/sap/<SID>/ERS<instance>/exe && \
/usr/sap/<SID>/ERS<instance>/exe/sapstartsrv \
pf=/usr/sap/<SID>/SYS/profile/<SID>_ERS<instance>_<ers_virtual_hostname> \
-reg
Optional: Register any PAS or AAS instance by repeating step 1 for the respective application server profile. Skip this step if you have not configured PAS or AAS instances in this cluster:
[root]# export LD_LIBRARY_PATH=/usr/sap/<SID>/D<instance>/exe && \
/usr/sap/<SID>/D<instance>/exe/sapstartsrv \
pf=/usr/sap/<SID>/SYS/profile/<SID>_D<instance>_<as_virtual_hostname> \
-reg
Disable the ASCS, ERS and any other application instance service that the cluster manages:
[root]# systemctl disable SAP<SID>_<instance>.service
Removed "/etc/systemd/system/multi-user.target.wants/SAP<SID>_<instance>.service".
Run this using the ASCS instance number and repeat the command using the ERS instance number.
Optional: Repeat the same for the PAS or AAS instance services.
Create the systemd drop-in directory for the ASCS, ERS and any other application instance service that the cluster manages:
[root]# mkdir /etc/systemd/system/SAP<SID>_<instance>.service.d
Run this using the ASCS instance number and repeat the command using the ERS instance number.
Optional: Repeat using the PAS or AAS instance number.
Create the drop-in files for the instances in the new directory:
[root]# cat << EOF > /etc/systemd/system/SAP<SID>_<instance>.service.d/HA.conf
[Service]
Restart=no
EOF
Run this using the ASCS instance number and repeat the command using the ERS instance number.
Optional: Repeat using the PAS or AAS instance number.
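The drop-in steps above can be combined into one loop over all instance services. This sketch writes into a temporary directory instead of /etc/systemd/system so it is safe to run anywhere; the unit names are the S4H examples from this chapter:

```shell
# Sketch: create the HA.conf drop-in for several instance services in one
# loop. A temporary directory stands in for /etc/systemd/system; on a real
# node you would target /etc/systemd/system and then run
# "systemctl daemon-reload".
dropin_root=$(mktemp -d)
for svc in SAPS4H_20 SAPS4H_29; do
  mkdir -p "$dropin_root/$svc.service.d"
  printf '[Service]\nRestart=no\n' > "$dropin_root/$svc.service.d/HA.conf"
done

# Inspect one of the generated drop-ins:
created=$(cat "$dropin_root/SAPS4H_20.service.d/HA.conf")
printf '%s\n' "$created"
rm -r "$dropin_root"
```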
Reload the systemd units to activate the drop-in configuration:
[root]# systemctl daemon-reload
Verification
Check that all instances have instance systemd units and that they are disabled on the new node:
[root]# systemctl list-unit-files SAPS4H*
UNIT FILE          STATE    PRESET
SAPS4H_20.service  disabled disabled
SAPS4H_29.service  disabled disabled
Optional: PAS or AAS instance service files are listed as well in all of the verification steps when you have configured the application server instances.
Check that the sapservices file contains entries for every instance on every cluster node:
[root]# cat /usr/sap/sapservices
systemctl --no-ask-password start SAPS4H_20 # sapstartsrv pf=/sapmnt/S4H/profile/S4H_ASCS20_s4hascs
systemctl --no-ask-password start SAPS4H_29 # sapstartsrv pf=/sapmnt/S4H/profile/S4H_ERS29_s4hers
Check that all systemd configuration overrides are present:
[root]# systemd-delta | grep SAP
...
[EXTENDED] /etc/systemd/system/SAPS4H_20.service → /etc/systemd/system/SAPS4H_20.service.d/HA.conf
[EXTENDED] /etc/systemd/system/SAPS4H_29.service → /etc/systemd/system/SAPS4H_29.service.d/HA.conf
7.5. Configuring the pacemaker cluster on the new node
Prerequisites
- You have configured the RHEL High Availability repository on the planned cluster nodes.
Procedure
Install the Red Hat High Availability Add-On software packages from the High Availability repository. Choose the same fence agents as you have configured on the existing nodes and execute the installation on the new node:
[root]# dnf install pcs pacemaker fence-agents-<model>
Start and enable the pcsd service:
[root]# systemctl enable --now pcsd.service
Optional: If you are running the firewalld service, enable the ports that are required by the Red Hat High Availability Add-On:
[root]# firewall-cmd --add-service=high-availability
[root]# firewall-cmd --runtime-to-permanent
Set a password for the user hacluster:
[root]# passwd hacluster
Authenticate the user hacluster for the new node in the existing cluster. Run this on an existing node, for example, node1:
[root]# pcs host auth <node3>
Username: hacluster
Password:
<node3>: Authorized
- Enter the node names with or without FQDN, as defined in the /etc/hosts file.
- Enter the hacluster user password in the prompt.
Add the new node to the existing cluster. This syncs cluster files between the nodes. Run this on an existing node, for example, node1:
[root]# pcs cluster node add <node3>
No addresses specified for host 'node3', using 'node3'
Disabling sbd...
node3: sbd disabled
Sending 'corosync authkey', 'pacemaker authkey' to 'node3'
node3: successful distribution of the file 'corosync authkey'
node3: successful distribution of the file 'pacemaker authkey'
Sending updated corosync.conf to nodes...
node1: Succeeded
node3: Succeeded
node2: Succeeded
node1: Corosync configuration reloaded
Start the cluster on the new node. Run this on the new node, for example, node3:
[root]# pcs cluster start
Starting Cluster...
Enable the cluster to be started automatically on system start, which enables the corosync and pacemaker services. Skip this step if you prefer to manually control the start of the cluster after a node restarts. Run on the new node:
[root]# pcs cluster enable
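The commands in this procedure can be reviewed as a single plan before touching the cluster. The sketch below is a dry run: RUN=echo only prints each command with the node it belongs on, using the node3 and fence-agents-<model> placeholders from the steps above; it does not execute anything.

```shell
# Dry-run sketch of the pcs procedure, in order. RUN=echo prints the plan;
# to execute for real, run each command as root on the node noted in the
# comment. node3 and <model> follow the placeholders used in this section.
RUN=echo
plan=$(
  $RUN "dnf install pcs pacemaker fence-agents-<model>"  # new node
  $RUN "systemctl enable --now pcsd.service"             # new node
  $RUN "passwd hacluster"                                # new node
  $RUN "pcs host auth node3"                             # existing node
  $RUN "pcs cluster node add node3"                      # existing node
  $RUN "pcs cluster start"                               # new node
  $RUN "pcs cluster enable"                              # new node
)
printf '%s\n' "$plan"
```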
Verification
Check that the new node is available as a cluster member:
[root]# pcs cluster status
Cluster Status:
 Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  …
  * 3 nodes configured
  * 7 resource instances configured
 Node List:
  * Online: [ node1 node2 node3 ]
PCSD Status:
 node3: Online
 node2: Online
 node1: Online
Next steps
- Configure a fencing device for the new node. See Configuring fencing in a Red Hat High Availability cluster.
- Test the fencing of the new node before you proceed with further configuration of the cluster. For more information, see How to test fence devices and fencing configuration in a Red Hat High Availability cluster?.
7.6. Installing the SAP application server HA components
For detailed steps, refer to Installing the SAP application server HA components.