Chapter 3. Installing SAP HANA scale-out for an 8-node HA cluster setup


The examples in the following configuration steps demonstrate the setup on 4 scale-out nodes per HANA site, which results in an installation of 8 HANA nodes.

You can apply the same steps to more scale-out nodes per site. Each HANA site must consist of the same number of identically configured nodes.

3.1. Managing the firewalld service

On RHEL the firewalld systemd service is enabled by default when installed and starts with a basic configuration.

For your planned SAP landscape, you must decide whether you want to manage all port and connection requirements in the firewall service on each cluster node, or whether the security design of your network infrastructure handles this separately. If you do not need to manage a firewall at the operating system level, you must disable the firewalld service on each cluster node. If the local firewall service keeps running without the necessary port configuration, it blocks the cluster communication and the connections between your SAP systems.

For your SAP landscape and HA setup to work you must implement one of the following options:

3.1.1. Disabling the firewalld service

The firewalld service is installed and enabled by default as part of the "Server" package group. You must disable it if you do not use it in your network security strategy.

Prerequisites

  • You are managing firewall rules outside of the individual host operating systems as part of your security concept.

Procedure

  • Stop and disable the firewalld service on each cluster node. The --now parameter automatically stops the disabled service. Run this on each system of your planned landscape:

    [root]# systemctl disable --now firewalld.service

Verification

  • Verify that the firewalld service is disabled on each node.

    [root]# systemctl status firewalld.service
    ○ firewalld.service - firewalld - dynamic firewall daemon
         Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; preset: enabled)
         Active: inactive (dead)
           Docs: man:firewalld(1)
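
    Alternatively, you can run a short scripted check of the enablement state. This is only a suggestion and equivalent to reading the status output above:

    [root]# systemctl is-enabled firewalld.service
    disabled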

3.1.2. Configuring the firewalld service for the SAP landscape

Check the SAP documentation for the Ports and Connections that you must enable in the firewall for your SAP landscape. Consider all SAP components in your setup that require incoming or outgoing communication and connections between the different hosts in your landscape.

Configure the firewalld service on each of your SAP hosts using the methods that best fit your requirements. Consult Configuring firewalls and packet filters for details on how to use the firewalld service effectively.
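
The following is a minimal sketch only, assuming instance number 02: it enables the predefined high-availability firewalld service for the Pacemaker and Corosync traffic and opens two example HANA SQL ports (3<instance>13 and 3<instance>15). Derive the complete port list for your landscape from the SAP documentation and run the commands on each node:

    # Example only: ports for instance number 02; take the full list from the SAP documentation.
    [root]# firewall-cmd --permanent --add-service=high-availability
    [root]# firewall-cmd --permanent --add-port=30213/tcp --add-port=30215/tcp
    [root]# firewall-cmd --reload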

3.2. Configuring the host names in /etc/hosts

For consistent host name resolution between all systems in your HANA and HA setup, we recommend adding all host names to the /etc/hosts file on each node.

If you configure the HANA Internal Host Name Resolution, you must ensure that the /etc/hosts entries for the same host names are consistent with the HANA configuration.
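
For orientation only: the HANA internal host name resolution is typically maintained in the global.ini of the instance. The following sketch assumes a hypothetical dedicated inter-node network 192.168.200.0/24 that is not used elsewhere in this chapter; take the authoritative parameters from the SAP HANA network configuration documentation:

    # hypothetical internal network 192.168.200.0/24 - adjust to your environment
    [communication]
    listeninterface = .internal

    [internal_hostname_resolution]
    192.168.200.101 = dc1hana1
    192.168.200.102 = dc1hana2
    …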

Procedure

  • Add the host names of all hosts to the /etc/hosts on all cluster nodes:

    [root]# cat /etc/hosts
    ...
    192.168.100.101 dc1hana1.example.com dc1hana1
    192.168.100.102 dc1hana2.example.com dc1hana2
    192.168.100.103 dc1hana3.example.com dc1hana3
    192.168.100.104 dc1hana4.example.com dc1hana4
    192.168.100.121 dc2hana1.example.com dc2hana1
    192.168.100.122 dc2hana2.example.com dc2hana2
    192.168.100.123 dc2hana3.example.com dc2hana3
    192.168.100.124 dc2hana4.example.com dc2hana4

Verification

  • Check that you can ping the hosts. This optional step is only an example of a basic verification. When you use the ping command, the system resolves the host name through the entries in /etc/hosts:

    [root]# ping dc1hana2.example.com
    PING dc1hana2.example.com (192.168.100.102) 56(84) bytes of data.
    64 bytes from dc1hana2.example.com (192.168.100.102): icmp_seq=1 ttl=64 time=0.017 ms
    …
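
  • Optionally, you can also check the name resolution directly with getent, which queries the same resolver sources as other programs. This is only a suggested additional check:

    [root]# getent hosts dc1hana2.example.com
    192.168.100.102 dc1hana2.example.com dc1hana2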

3.3. Configuring the shared SAP filesystems

You must configure the shared filesystems on all nodes of the HANA site that they belong to.

Prerequisites

  • You have prepared the shared NFS-based filesystems, and all cluster nodes of each HANA site are able to access their related shares. The NFS shares must be external and not exported on one of the cluster nodes.

Procedure

  1. Create the directories for the shared filesystems:

    [root]# mkdir -p /hana/{shared,data,log}
  2. Add the shared NFS filesystems to /etc/fstab to mount them automatically on system start. Configure the mount options that apply to your environment; a sketch with commonly used NFS options follows this procedure. The following is a basic example:

    [root]# vi /etc/fstab
    …
    <nfs_server>:/<site_path>/data /hana/data nfs4 defaults 0 0
    <nfs_server>:/<site_path>/log /hana/log nfs4 defaults 0 0
    <nfs_server>:/<site_path>/shared /hana/shared nfs4 defaults 0 0
    • Replace <nfs_server> with the NFS server DNS name or the IP address of each share, for example, nfs01-datacenter1a.example.com.
    • Replace <site_path> with the site specific root path, for example, dc1 on the nodes of one HANA site.
  3. Reload the systemd daemon configuration to make the new /etc/fstab entries known to systemd:

    [root]# systemctl daemon-reload
  4. Mount any new filesystems that you configured in the /etc/fstab:

    [root]# mount -a
  5. Repeat the configuration steps on each cluster node.
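
The defaults mount options in the example above are placeholders. As an illustration of common practice, and not a definitive recommendation, NFS v4.1 shares are often mounted with options along the following lines; follow the guidance of your storage vendor and the relevant SAP notes for the authoritative values:

    # example options only - follow your storage vendor and SAP recommendations
    <nfs_server>:/<site_path>/data /hana/data nfs4 vers=4.1,hard,timeo=600,retrans=2,_netdev 0 0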

Verification

  1. Check that the filesystems are mounted, for example, on HANA site 1:

    [root]# df -hP | grep hana
    nfs01-datacenter1a.example.com:/dc1/hana/data    8.0E   32G  8.0E   1% /hana/data
    nfs01-datacenter1a.example.com:/dc1/hana/log     8.0E   32G  8.0E   1% /hana/log
    nfs01-datacenter1a.example.com:/dc1/hana/shared  8.0E   32G  8.0E   1% /hana/shared
  2. Check that the systemd mount units exist for the filesystems configured in /etc/fstab:

    [root]# systemctl list-units --all | grep -e 'hana*.*mount' | column -t
    hana-data.mount    loaded  active  mounted  /hana/data
    hana-log.mount     loaded  active  mounted  /hana/log
    hana-shared.mount  loaded  active  mounted  /hana/shared
  3. Repeat the verification steps on each cluster node.

3.4. Configuring the required operating system users and groups

In a high-availability environment where the highly available service can move between systems using shared storage, you must configure the service's users and groups with identical numerical user IDs (UID) and group IDs (GID) on all nodes. Different IDs for the same service users or groups cause access conflicts and prevent you from switching the service between the cluster nodes.

Prepare the following operating system group:

  • sapsys

Prepare the following operating system users:

  • sapadm
  • <sid>adm, using your target SID

Prerequisites

  • You have reserved identical user and group IDs for the required groups and users, for example, in your central identity management system for service users.

Procedure

  1. Create the sapsys group. Use the prepared group ID, for example, ID 10001:

    [root]# groupadd -g 10001 sapsys
  2. Create the sapadm user as a member of the sapsys group. The user does not need a login shell. Use the prepared user ID, for example, ID 10200:

    [root]# useradd -u 10200 -g sapsys sapadm \
    -c 'SAP Local Administrator' -s /sbin/nologin
  3. Create the <sid>adm user as a member of the sapsys group. Use the prepared user ID, for example, ID 10210 for user rh1adm:

    [root]# useradd -u 10210 -g sapsys rh1adm \
    -c 'SAP HANA Administrator' -s /bin/sh

    As the user shell, we recommend that you use either /bin/sh or /bin/csh. SAP installations provide user profiles and useful shell aliases for these shells.

  4. Repeat the steps on all nodes.

Verification

  1. Check that the users sapadm and <sid>adm exist and have the correct groups and IDs configured, for example:

    [root]# id sapadm rh1adm
    uid=10200(sapadm) gid=10001(sapsys) groups=10001(sapsys)
    uid=10210(rh1adm) gid=10001(sapsys) groups=10001(sapsys)
  2. Check that the users have the correct description, home directory and shell defined:

    [root]# grep -E 'sapadm|rh1adm' /etc/passwd
    sapadm:x:10200:10001:SAP Local Administrator:/home/sapadm:/sbin/nologin
    rh1adm:x:10210:10001:SAP HANA Administrator:/home/rh1adm:/bin/sh
  3. Repeat the check on all nodes and verify that the names and IDs are identical.

3.5. Configuring passwordless ssh access between the nodes

Some of the configuration steps potentially require passwordless root access to the cluster nodes. You can achieve this by setting up ssh public-key authentication between the servers. Whether you can use this depends on your specific HANA setup and the security policies of your company.

Passwordless root access might be needed in the following situations:

  • Accessing all hosts of the same HANA site during the database installation. This applies when your HANA site consists of more than one node, as in a scale-out setup.
  • Accessing the primary site from the secondary site for the HANA system replication configuration.

Procedure

  1. Generate an ssh key pair for the root user. Depending on the OpenSSH version, the default key type can vary, so the following example explicitly creates an Ed25519 key, which the later steps assume:

    [root]# ssh-keygen -t ed25519
  2. Option 1, if you have ssh PasswordAuthentication enabled on the remote system: Use the ssh-copy-id tool to add the ssh public key to the authorized_keys on the remote system. This automatically creates the .ssh/ directory and authorized_keys file with correct permissions for the target user on the remote system. Run it on the host on which you created the ssh key in the previous step and enter the target user password when prompted:

    [root]# ssh-copy-id <remote_system>

    In the case of the root user, this only works if the sshd configuration on the remote system permits root login with a password (for example, PermitRootLogin yes) and you can provide the root user password at the prompt. Check the ssh configuration settings on the remote system if you face access permission issues even after you have enabled PasswordAuthentication. Consult your security policies before you enable these parameters on your HANA systems.

  3. Option 2, if password login to the target user on the remote host is prohibited or otherwise not possible: Configure the ssh key access on the remote system manually.

    1. Create the .ssh/ directory in the target user’s home path on the remote system, if it does not exist yet. Run this on the remote system, for example, for the root user:

      [root]# mkdir /root/.ssh
    2. Change the permissions of the new .ssh/ directory. For security reasons the ssh key access does not work when the permissions are not correct. Run this on the remote system:

      [root]# chmod 0700 /root/.ssh
    3. Copy the ssh public key from the .pub file that the previous ssh-keygen command created, for example, id_ed25519.pub for an Ed25519 key:

      [root]# cat /root/.ssh/id_ed25519.pub
    4. Add the public key to the authorized_keys file. The command creates the file if it does not exist yet, otherwise it appends the key to the existing content. Run this on the remote system, for example, on dc1hana2:

      [root]# cat << EOF >> /root/.ssh/authorized_keys
      ssh-ed25519 … root@<node1>
      EOF
    5. Ensure that the authorized_keys file has the correct permissions, otherwise the ssh key access is blocked for security reasons:

      [root]# chmod 0600 /root/.ssh/authorized_keys
  4. Access each system and log in from any source host to any remote host that you require for the setup. On first login you must accept each new connection once in an interactive prompt. This saves each host and key in the ssh known_hosts file by default.

    1. Option 1: Log in from each host to each other host and accept the key fingerprint once to save it to the local known_hosts file. Subsequent logins to the same host will not require further interaction, unless the key changes. This is a security measure to prevent unsolicited changes of the ssh keys. The following example confirms the authenticity of host dc1hana2:

      [root]# ssh dc1hana2
      The authenticity of host 'dc1hana2 (***)' can't be established.
      ED25519 key fingerprint is SHA256:*********************************.
      This key is not known by any other names
      Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
      …
    2. Option 2: If you configure ssh key access between multiple systems, you can use ssh-keyscan to collect the public host keys from multiple hosts and save them to the local known_hosts file in a single step. Run this on each system from which you connect, and list all remote hosts that you potentially access from that node and user, for example, for the root user on host dc1hana1:

      [root]# ssh-keyscan -f - >> /root/.ssh/known_hosts
      dc1hana1
      dc1hana2
      dc1hana3
      dc1hana4
      dc2hana1
      dc2hana2
      dc2hana3
      dc2hana4
      <Ctrl-d>
      # dc1hana1:22 SSH-2.0-OpenSSH_8.7
      dc1hana1 ssh-ed25519 …
      # dc1hana2:22 SSH-2.0-OpenSSH_8.7
      dc1hana2 ssh-ed25519 …
      # dc1hana3:22 SSH-2.0-OpenSSH_8.7
      dc1hana3 ssh-ed25519 …
      # dc1hana4:22 SSH-2.0-OpenSSH_8.7
      dc1hana4 ssh-ed25519 …
      # dc2hana1:22 SSH-2.0-OpenSSH_8.7
      dc2hana1 ssh-ed25519 …
      # dc2hana2:22 SSH-2.0-OpenSSH_8.7
      dc2hana2 ssh-ed25519 …
      # dc2hana3:22 SSH-2.0-OpenSSH_8.7
      dc2hana3 ssh-ed25519 …
      # dc2hana4:22 SSH-2.0-OpenSSH_8.7
      dc2hana4 ssh-ed25519 …
      • -f - reads the list of hosts from standard input. Instead of -, you can specify a file that you prepared upfront with the list of hosts. You can also pass a single host name without the -f parameter to collect the key of one host at a time.
      • When you provide the list on standard input, end the input with Ctrl+D.
      • The >> shell redirection after the scan command appends the collected keys directly to the known_hosts file. If the file does not exist yet, it is created in the process.
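
      For example, as a minimal sketch, you can use a prepared host list instead of the standard input. The file /root/hana_hosts.txt is a hypothetical example that contains one host name per line:

      # /root/hana_hosts.txt is a hypothetical prepared list with one host name per line
      [root]# ssh-keyscan -f /root/hana_hosts.txt >> /root/.ssh/known_hosts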

Verification

  • Check the known_hosts entries, for example, on dc1hana1:

    [root]# cat /root/.ssh/known_hosts
    dc1hana1 ssh-ed25519 ******************************...
    dc1hana2 ssh-ed25519 ******************************...
    dc1hana3 ssh-ed25519 ******************************...
    dc1hana4 ssh-ed25519 ******************************...
    dc2hana1 ssh-ed25519 ******************************...
    dc2hana2 ssh-ed25519 ******************************...
    dc2hana3 ssh-ed25519 ******************************...
    dc2hana4 ssh-ed25519 ******************************...
  • Test the access from each source system to every remote system and ensure that every connection direction that you possibly need works without interactive prompts:

    [root]# ssh <remote_system>
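
  • To confirm that no connection still requires interactive input, you can additionally run a strictly non-interactive test. With BatchMode=yes the ssh client fails instead of prompting, which makes missing keys or host entries visible immediately:

    [root]# ssh -o BatchMode=yes <remote_system> hostname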

3.6. Installing a scale-out SAP HANA instance

A HANA scale-out configuration consists of at least two HANA nodes per system replication site.

Install the HANA instances with the same SID and instance number on all nodes. The setup of the system replication sites must be identical.

The following installation steps are an example of an interactive installation using the command-line interface. Check the SAP HANA Server Installation and Update Guide for more information about installation options and other details.

Prerequisites

  • You have installed and configured RHEL 9 on all cluster nodes according to the Operating system requirements.
  • You have prepared the details for your HANA instances, see SAP HANA planning.
  • You have followed the SAP software download guides in Software Download, downloaded the SAP HANA installation media from the SAP Software Download Center and the media is available on each node.
  • You have verified that you can resolve the host names of the additional nodes of one site from the main node of the site.
  • You have verified that you can connect to the additional nodes of one site from the main node using the root user and ssh.
  • You have configured a time synchronization service on all nodes. See Configuring time synchronization for details.
  • You have configured your OS or network firewall services to enable all required communication between the HANA systems. See Configuring the firewalld service for the SAP landscape for references.

Procedure

  1. Go to the directory which contains the installation media, for example, /sapmedia/hana:

    [root]# cd /sapmedia/hana
  2. Unpack the installation media:

    [root]# unzip <sap_hana_software>.ZIP
  3. Go into the path of the unpacked installation media:

    [root]# cd /sapmedia/hana/DATA_UNITS/HDB_LCM_LINUX_<arch>
  4. Run the SAP HANA Lifecycle Management tool (HDBLCM) for an interactive installation:

    [root]# ./hdblcm

    In the interactive mode the installer asks you for all the required information, including the system ID (SID), the instance number, the filesystem locations of the data and log volumes, and more.

    In a scale-out installation you run the installer on the main node of one HANA site and provide any additional nodes of the same site as installation parameters. For example, you run the installer for site 1 on dc1hana1 and add the nodes dc1hana2, dc1hana3, and dc1hana4 as additional hosts when the prompt asks for them.

    Optionally, you can use the batch mode of the command-line installation tool and provide your configuration parameters in one step; a minimal batch-mode sketch follows this procedure. For more details see Use Batch Mode to Perform Platform LCM Tasks in the SAP HANA Server Installation and Update Guide.

  5. Repeat all steps on the main node of the second site. For the HANA system replication to work, you must ensure that each HANA site consists of the same number of nodes with an identical HANA configuration.
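
The following batch-mode sketch corresponds to step 4 of this procedure and is based on assumptions: SID RH1, instance number 02, three additional hosts with dc1hana4 as the standby host, and a hypothetical configuration file /sapmedia/hana/hdblcm_dc1.cfg that you prepared upfront, for example with the --dump_configfile_template option, and that contains the remaining parameters and passwords. Verify every parameter against the SAP HANA Server Installation and Update Guide before you use it:

    # example values only; the configfile is a hypothetical prepared file with the remaining parameters and passwords
    [root]# ./hdblcm --batch --action=install --components=server \
        --sid=RH1 --number=02 --sapmnt=/hana/shared \
        --addhosts=dc1hana2:role=worker,dc1hana3:role=worker,dc1hana4:role=standby \
        --configfile=/sapmedia/hana/hdblcm_dc1.cfg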

Verification

  1. Switch to the <sid>adm user:

    [root]# su - rh1adm
  2. Check the HANA instance runtime information as user <sid>adm:

    rh1adm$ HDB info
    USER          PID     PPID  %CPU        VSZ        RSS COMMAND
    rh1adm      12525    12524   0.2       8836       5568 -sh
    rh1adm      12584    12525   0.0       7520       3968  \_ /bin/sh /usr/sap/RH1/HDB02/HDB info
    rh1adm      12613    12584   0.0      10104       3484      \_ ps fx -U rh1adm -o user:8,pid:8,ppid:8,pcpu:5,vsz:10,rss:10,args
    rh1adm       8813        1   0.0     566804      41000 hdbrsutil  --start --port 30203 --volume 3 …
    rh1adm       8124        1   0.0     566724      40972 hdbrsutil  --start --port 30201 --volume 1 …
    rh1adm       7947        1   0.0       9312       3352 sapstart pf=/usr/sap/RH1/SYS/profile/RH1_HDB02_dc1hana1
    rh1adm       7955     7947   0.0     460036      89176  \_ /usr/sap/RH1/HDB02/dc1hana1/trace/hdb.sapRH1_HDB02 -d -nw -f /usr/sap/RH1/HDB02/dc1hana1/daemon.ini pf=/usr/sap/RH1/SYS/profile/RH1_HDB02_dc1hana1
    rh1adm       7981     7955  26.1   18612328   14092076      \_ hdbnameserver
    rh1adm       8642     7955   0.5    1465380     212048      \_ hdbcompileserver
    rh1adm       8645     7955   294    6616736    6049012      \_ hdbpreprocessor
    rh1adm       8687     7955  33.9   18931580   14929092      \_ hdbindexserver -port 30203
    rh1adm       8690     7955   2.0    5073572    1390440      \_ hdbxsengine -port 30207
    rh1adm       9202     7955   0.8    2772836     482088      \_ hdbwebdispatcher
    rh1adm       7782        1   0.1     566772      58444 /usr/sap/RH1/HDB02/exe/sapstartsrv pf=/usr/sap/RH1/SYS/profile/RH1_HDB02_dc1hana1
    root        11868     7782   0.1      10464       4644  \_ sapuxuserchk 0 128
  3. Verify as <sid>adm on all sites that the HANA instances are running on all nodes in the site and their status is GREEN in the instance list, for example, on site 1:

    rh1adm$ sapcontrol -nr ${TINSTANCE} -function GetSystemInstanceList
    hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus
    dc1hana4, 2, 50213, 50214, 0.3, HDB|HDB_STANDBY, GREEN
    dc1hana1, 2, 50213, 50214, 0.3, HDB|HDB_WORKER, GREEN
    dc1hana3, 2, 50213, 50214, 0.3, HDB|HDB_WORKER, GREEN
    dc1hana2, 2, 50213, 50214, 0.3, HDB|HDB_WORKER, GREEN
  4. Additionally, you can verify the landscapeHostConfiguration.py output for status ok:

    rh1adm$ cdpy; python landscapeHostConfiguration.py
    | Host     | Host   | Host   | Failover | Remove | Storage   | Storage   | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host    | Host    | Worker  | Worker  |
    |          | Active | Status | Status   | Status | Config    | Actual    | Config   | Actual   | Config     | Actual     | Config      | Actual      | Config  | Actual  | Config  | Actual  |
    |          |        |        |          |        | Partition | Partition | Group    | Group    | Role       | Role       | Role        | Role        | Roles   | Roles   | Groups  | Groups  |
    | -------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
    | dc1hana1 | yes    | ok     |          |        |         1 |         1 | default  | default  | master 1   | master     | worker      | master      | worker  | worker  | default | default |
    | dc1hana2 | yes    | ok     |          |        |         2 |         2 | default  | default  | master 2   | slave      | worker      | slave       | worker  | worker  | default | default |
    | dc1hana3 | yes    | ok     |          |        |         3 |         3 | default  | default  | slave      | slave      | worker      | slave       | worker  | worker  | default | default |
    | dc1hana4 | yes    | ignore |          |        |         0 |         0 | default  | default  | master 3   | slave      | standby     | standby     | standby | standby | default | -       |
    
    overall host status: ok
  5. Check that the systemd units are installed for the HANA instance and the SAP Host Agent:

    [root]# systemctl list-unit-files --all sap* SAP*
    UNIT FILE            STATE     PRESET
    sapmedia.mount       generated -
    saphostagent.service enabled   disabled
    sapinit.service      generated -
    SAPRH1_02.service    enabled   disabled
    SAP.slice            static    -
    
    5 unit files listed.
  6. Repeat the steps on all nodes. Note that the HANA profiles contain the individual node name in the format <SID>_HDB<instance>_<node>.

3.7. Disabling SAP HANA instance autostart

In an HA cluster setup, the cluster controls the startup and shutdown of the HANA instances. You must configure the HANA instance profile so that the instance does not start automatically on its own.

Procedure

  1. Go to the HANA instance profile directory:

    [root]# cd /hana/shared/<SID>/profile
  2. Edit the instance profile:

    [root]# vi <SID>_HDB<instance>_<hostname>

    Ensure that Autostart is set to 0. A non-interactive alternative is sketched after this procedure.

  3. Repeat the previous steps for each HANA instance that will be managed as part of the HA cluster.
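
If you prefer a non-interactive edit instead of step 2, the following is a minimal sketch. It assumes that the profile contains the exact line Autostart = 1; review the profile afterwards, for example with the grep command shown in the verification:

    # assumes the profile contains the exact line 'Autostart = 1'
    [root]# sed -i 's/^Autostart = 1$/Autostart = 0/' /hana/shared/<SID>/profile/<SID>_HDB<instance>_<hostname>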

Verification

  • Check that Autostart = 0 is set in the instance profiles of all HANA instances that will be managed by the HA cluster:

    [root]# grep Autostart /hana/shared/RH1/profile/*
    /hana/shared/RH1/profile/RH1_HDB02_dc1hana1:Autostart = 0
    /hana/shared/RH1/profile/RH1_HDB02_dc1hana2:Autostart = 0
    /hana/shared/RH1/profile/RH1_HDB02_dc1hana3:Autostart = 0
    /hana/shared/RH1/profile/RH1_HDB02_dc1hana4:Autostart = 0