Chapter 7. Adding a node to the cluster


In an SAP S/4HANA setup where your ASCS and ERS instances use ENSA2, you can configure more than two cluster nodes to increase the resiliency and flexibility of your environment.

7.1. Preparing a new cluster node

To add a new node to an existing cluster that manages SAP application server instances, you first prepare the instance-specific operating system setup on the new node in the same way as on the existing cluster nodes.

Complete the following steps before you proceed:

Use the Software Provisioning Manager to prepare the node for the existing instances. See Running Software Provisioning Manager for more details about the SAP software installation.

Prerequisites

  • You have installed and configured the new HA cluster node according to the recommendations from SAP and Red Hat for running SAP application server instances on RHEL 9. See Operating system requirements.
  • You have mounted the following filesystems on the new HA cluster node:

    • /sapmnt
    • /usr/sap/trans
    • /usr/sap/<SID>
  • You have the installation media available on the new system.

Procedure

  1. On the new node, go to the directory where you have extracted the installation media:

    [root]# cd <software_path>
    • Replace <software_path> with the path to your unpacked media, for example, /sapmedia/SWPM20_SP19/.
  2. Run the installer command on the new node:

    [root]# ./sapinst
  3. Open the web installer UI using the link provided in the terminal.
  4. Open the SAP product you want to install and enter the installation option. Expand the High-Availability System option and select Prepare Additional Cluster Node. Click Next.
  5. Provide the requested installation information on each page and click Next to move forward.

    Some steps, like extracting SAP packages, can take a while. Keep an eye on the terminal in which you started the installer for details of the ongoing process that are not displayed in the web UI.

Verification

  1. Check that the new node has the SAP ports in the /etc/services file. For example, count the entries that contain SAP System in their port description and compare the result with all existing nodes:

    [root]# grep -i "SAP System" /etc/services | wc -l
    401

    Update the /etc/services file on the node if it is missing entries.

  2. Check that the /usr/sap/hostctrl/ path is present and that the version is the same as on the existing cluster nodes:

    [root]# /usr/sap/hostctrl/exe/saphostexec -version
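The port check above can be scripted for comparison across nodes. The following is a minimal sketch; `count_sap_ports` is a hypothetical helper, and the sample file stands in for the real /etc/services:

```shell
# count_sap_ports is a hypothetical helper: it counts entries whose
# description mentions "SAP System" in a services-style file.
count_sap_ports() {
  grep -ic "SAP System" "$1"
}

# Demonstrate on a small sample file instead of the real /etc/services:
cat << 'EOF' > /tmp/services.sample
sapmsS4H   3620/tcp  # SAP System Message Server Port
sapdp20    3220/tcp  # SAP System Dispatcher Port
ssh        22/tcp
EOF
count_sap_ports /tmp/services.sample    # prints 2
```

On the cluster nodes you would run the helper against /etc/services on each node and compare the resulting counts.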

7.3. Copying the /usr/sap/sapservices file from an existing node

SAP instance services are managed through the local /usr/sap/sapservices file, which is created during the instance installation.

On the new cluster node you do not perform an instance installation. Therefore, you must copy this file from an existing node.

Procedure

  • Copy the /usr/sap/sapservices file directly from an existing node to the new node, for example, by using root SSH keys between node1 and node3:

    [root]# rsync -av node1:/usr/sap/sapservices /usr/sap/sapservices

Verification

  1. Check that the file exists and has the same owner and permissions as on the source node:

    [root]# ls -lh /usr/sap/sapservices
    -rwxr-xr-x. 1 root sapinst 208 Jun 16 13:59 /usr/sap/sapservices
  2. Check that the file contains the configured instances, for example, ASCS and ERS:

    [root]# cat /usr/sap/sapservices
    systemctl --no-ask-password start SAPS4H_20 # sapstartsrv pf=/usr/sap/S4H/SYS/profile/S4H_ASCS20_s4hascs
    systemctl --no-ask-password start SAPS4H_29 # sapstartsrv pf=/usr/sap/S4H/SYS/profile/S4H_ERS29_s4hers
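The instance entries can also be extracted programmatically to confirm which profiles are configured. This is an illustrative sketch; `list_profiles` is a hypothetical helper and the sample file stands in for the real /usr/sap/sapservices:

```shell
# list_profiles is a hypothetical helper: it prints the profile path from
# every sapstartsrv entry in a sapservices-style file.
list_profiles() {
  awk -F'pf=' '/sapstartsrv/ { split($2, a, " "); print a[1] }' "$1"
}

# Demonstrate on a sample file with the entries shown above:
cat << 'EOF' > /tmp/sapservices.sample
systemctl --no-ask-password start SAPS4H_20 # sapstartsrv pf=/usr/sap/S4H/SYS/profile/S4H_ASCS20_s4hascs
systemctl --no-ask-password start SAPS4H_29 # sapstartsrv pf=/usr/sap/S4H/SYS/profile/S4H_ERS29_s4hers
EOF
list_profiles /tmp/sapservices.sample
```

Comparing this output between the source node and the new node confirms that the ASCS and ERS entries were copied intact.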

7.4. Configuring the systemd-based SAP startup framework

Systemd integration is the default configuration as of SAP Kernel Release 788. In HA environments you must apply additional modifications to integrate the different systemd services that are involved in the cluster setup.

Prerequisites

  • You have configured the systemd-based SAP startup framework on the existing cluster nodes. If the existing nodes do not use the systemd-based startup framework, skip this section.

Procedure

  1. Register the ASCS instance. Run the following SAP command as the root user on the new node to create the systemd integration:

    [root]# export LD_LIBRARY_PATH=/usr/sap/<SID>/ASCS<instance>/exe && \
    /usr/sap/<SID>/ASCS<instance>/exe/sapstartsrv \
    pf=/usr/sap/<SID>/SYS/profile/<SID>_ASCS<instance>_<ascs_virtual_hostname> \
    -reg

    The command executes the sapstartsrv service for the selected instance profile and registers the instance service on the current system. It creates the systemd unit for the instance service, if it does not exist, and updates the local /usr/sap/sapservices file.

    • Replace <SID> with your ASCS instance SID, for example, S4H.
    • Replace <instance> with your ASCS instance number, for example, 20.
    • Replace <ascs_virtual_hostname> with the virtual hostname for your ASCS instance, for example, s4hascs.
  2. Register the ERS instance by repeating step 1 for the ERS profile:

    [root]# export LD_LIBRARY_PATH=/usr/sap/<SID>/ERS<instance>/exe && \
    /usr/sap/<SID>/ERS<instance>/exe/sapstartsrv \
    pf=/usr/sap/<SID>/SYS/profile/<SID>_ERS<instance>_<ers_virtual_hostname> \
    -reg
  3. Optional: Register any PAS or AAS instance by repeating step 1 for the respective application server profile. Skip this step if you have not configured PAS or AAS instances in this cluster:

    [root]# export LD_LIBRARY_PATH=/usr/sap/<SID>/D<instance>/exe && \
    /usr/sap/<SID>/D<instance>/exe/sapstartsrv \
    pf=/usr/sap/<SID>/SYS/profile/<SID>_D<instance>_<as_virtual_hostname> \
    -reg
  4. Disable the ASCS, ERS and any other application instance service that the cluster manages:

    [root]# systemctl disable SAP<SID>_<instance>.service
    Removed "/etc/systemd/system/multi-user.target.wants/SAP<SID>_<instance>.service".

    Run this using the ASCS instance number and repeat the command using the ERS instance number.

    Optional: Repeat the same for the PAS or AAS instance services.

  5. Create the systemd drop-in directory for the ASCS, ERS and any other application instance service that the cluster manages:

    [root]# mkdir /etc/systemd/system/SAP<SID>_<instance>.service.d

    Run this using the ASCS instance number and repeat the command using the ERS instance number.

    Optional: Repeat using the PAS or AAS instance number.

  6. Create the drop-in files for the instances in the new directory:

    [root]# cat << EOF > /etc/systemd/system/SAP<SID>_<instance>.service.d/HA.conf
    [Service]
    Restart=no
    EOF

    Run this using the ASCS instance number and repeat the command using the ERS instance number.

    Optional: Repeat using the PAS or AAS instance number.

  7. Reload the systemd units to activate the drop-in configuration:

    [root]# systemctl daemon-reload
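Steps 5 and 6 can be sketched as a small helper that loops over the managed instances. `create_ha_dropin`, the scratch directory, and the parameterized unit directory are illustrative assumptions; on a cluster node you would pass /etc/systemd/system and also run the disable and daemon-reload steps:

```shell
# create_ha_dropin is a hypothetical helper sketching steps 5-6: it creates
# the HA.conf drop-in that prevents systemd from restarting one instance
# service. The unit directory is a parameter so the sketch can be exercised
# outside a real system.
create_ha_dropin() {  # args: SID, instance number, unit directory
  local sid=$1 nr=$2 unit_dir=$3
  local dropin_dir="${unit_dir}/SAP${sid}_${nr}.service.d"
  mkdir -p "$dropin_dir"
  printf '[Service]\nRestart=no\n' > "${dropin_dir}/HA.conf"
}

# Dry run in a scratch directory for the chapter's example instances:
for NR in 20 29; do
  create_ha_dropin S4H "$NR" /tmp/systemd-demo
done
cat /tmp/systemd-demo/SAPS4H_20.service.d/HA.conf
```

The loop body mirrors the documented `mkdir` and `cat << EOF` commands; only the target directory differs in this dry run.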

Verification

  1. Check that all instances have instance systemd units and that they are disabled on the new node:

    [root]# systemctl list-unit-files SAPS4H*
    UNIT FILE         STATE    PRESET
    SAPS4H_20.service disabled disabled
    SAPS4H_29.service disabled disabled

    Optional: When you have configured PAS or AAS application server instances, their instance service files are also listed in all of the verification steps.

  2. Check that the sapservices file contains entries for every instance on every cluster node:

    [root]# cat /usr/sap/sapservices
    systemctl --no-ask-password start SAPS4H_20 # sapstartsrv pf=/sapmnt/S4H/profile/S4H_ASCS20_s4hascs
    systemctl --no-ask-password start SAPS4H_29 # sapstartsrv pf=/sapmnt/S4H/profile/S4H_ERS29_s4hers
  3. Check that all systemd configuration overrides are present:

    [root]# systemd-delta | grep SAP
    ...
    [EXTENDED]   /etc/systemd/system/SAPS4H_20.service  /etc/systemd/system/SAPS4H_20.service.d/HA.conf
    [EXTENDED]   /etc/systemd/system/SAPS4H_29.service  /etc/systemd/system/SAPS4H_29.service.d/HA.conf
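The `systemctl list-unit-files` check can also be done mechanically. In this sketch, `check_disabled` is a hypothetical helper, fed with a sample of the output shown in step 1:

```shell
# check_disabled is a hypothetical helper: it reads `systemctl
# list-unit-files` output and fails if any SAP instance unit is not disabled.
check_disabled() {
  awk '$1 ~ /^SAP/ && $2 != "disabled" { print $1 " is " $2; bad=1 }
       END { exit bad }' "$1"
}

# Demonstrate on a sample of the expected output:
cat << 'EOF' > /tmp/units.sample
UNIT FILE         STATE    PRESET
SAPS4H_20.service disabled disabled
SAPS4H_29.service disabled disabled
EOF
check_disabled /tmp/units.sample && echo "all SAP units disabled"
```

On the new node you would pipe the real command output into a file or directly into the helper instead of using a sample.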

7.5. Configuring the pacemaker cluster on the new node

Prerequisites

  • You have configured the RHEL High Availability repository on the planned cluster nodes.

Procedure

  1. Install the Red Hat High Availability Add-On software packages from the High Availability repository on the new node, including the same fence agents that you configured on the existing nodes:

    [root]# dnf install pcs pacemaker fence-agents-<model>
  2. Start and enable the pcsd service:

    [root]# systemctl enable --now pcsd.service
  3. Optional: If you are running the firewalld service, enable the ports that are required by the Red Hat High Availability Add-On:

    [root]# firewall-cmd --add-service=high-availability
    [root]# firewall-cmd --runtime-to-permanent
  4. Set a password for the user hacluster:

    [root]# passwd hacluster
  5. Authenticate the user hacluster for the new node in the existing cluster. Run this on an existing node, for example, node1:

    [root]# pcs host auth <node3>
    Username: hacluster
    Password:
    <node3>: Authorized
    • Enter the node names with or without FQDN, as defined in the /etc/hosts file.
    • Enter the hacluster user password in the prompt.
  6. Add the new node to the existing cluster. This syncs cluster files between the nodes. Run this on an existing node, for example, node1:

    [root]# pcs cluster node add <node3>
    No addresses specified for host 'node3', using 'node3'
    Disabling sbd...
    node3: sbd disabled
    Sending 'corosync authkey', 'pacemaker authkey' to 'node3'
    node3: successful distribution of the file 'corosync authkey'
    node3: successful distribution of the file 'pacemaker authkey'
    Sending updated corosync.conf to nodes...
    node1: Succeeded
    node3: Succeeded
    node2: Succeeded
    node1: Corosync configuration reloaded
  7. Start the cluster on the new node. Run this on the new node, for example, node3:

    [root]# pcs cluster start
    Starting Cluster...
  8. Enable the cluster services (corosync and pacemaker) to start automatically at boot. Skip this step if you prefer to control the start of the cluster manually after a node restarts. Run this on the new node:

    [root]# pcs cluster enable

Verification

  • Check that the new node is available as a cluster member:

    [root]# pcs cluster status
    Cluster Status:
     Cluster Summary:
       * Stack: corosync (Pacemaker is running)
    …
       * 3 nodes configured
       * 7 resource instances configured
     Node List:
       * Online: [ node1 node2 node3 ]
    
    PCSD Status:
      node3: Online
      node2: Online
      node1: Online
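As an additional check, you can confirm that the new node appears in the corosync nodelist that `pcs cluster node add` distributed. `node_in_corosync` is a hypothetical helper, and the sample file mirrors the usual corosync.conf nodelist layout on RHEL:

```shell
# node_in_corosync is a hypothetical helper: it checks whether a node name
# appears as a ring0_addr in a corosync.conf-style file.
node_in_corosync() {  # args: node name, path to corosync.conf
  grep -Eq "ring0_addr: *${1}$" "$2"
}

# Demonstrate on a sample nodelist instead of /etc/corosync/corosync.conf:
cat << 'EOF' > /tmp/corosync.conf.sample
nodelist {
    node {
        ring0_addr: node1
        nodeid: 1
    }
    node {
        ring0_addr: node3
        nodeid: 3
    }
}
EOF
node_in_corosync node3 /tmp/corosync.conf.sample && echo "node3 present"
```

On the cluster nodes you would point the helper at /etc/corosync/corosync.conf on each node to verify that the configuration was distributed consistently.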

Next steps

7.6. Installing the SAP application server HA components

For detailed steps, refer to Installing the SAP application server HA components.
