Chapter 3. Installing SAP HANA scale-out for an 8-node HA cluster setup
The examples in the following configuration steps demonstrate the setup on 4 scale-out nodes per HANA site, which results in an installation of 8 HANA nodes.
You can apply the same steps to more scale-out nodes per site. Each HANA site must consist of the same number of identically configured nodes.
3.1. Managing the firewalld service
On RHEL the firewalld systemd service is enabled by default when installed and starts with a basic configuration.
For your planned SAP landscape you must decide if you want to manage all port and connection requirements in the firewall service on each cluster node, or if this is handled separately in the security design of your network infrastructure. If you do not need to manage a firewall at the operating system level on each cluster node, you must disable the firewalld service. If the local firewall service remains running without the necessary port configuration, it blocks the cluster communication and the connections between your SAP systems.
For your SAP landscape and HA setup to work you must implement one of the following options:
3.1.1. Disabling the firewalld service
The firewalld service is installed and enabled by default as part of the "Server" package group. You must disable it if you do not use it in your network security strategy.
Prerequisites
- You are managing firewall rules outside of the individual host operating systems as part of your security concept.
Procedure
Stop and disable the firewalld service on each cluster node. The --now parameter automatically stops the disabled service. Run this on each system of your planned landscape:

[root]# systemctl disable --now firewalld.service
Verification
Verify that the firewalld service is disabled on each node:

[root]# systemctl status firewalld.service
○ firewalld.service - firewalld - dynamic firewall daemon
     Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; preset: enabled)
     Active: inactive (dead)
       Docs: man:firewalld(1)
3.1.2. Configuring the firewalld service for the SAP landscape
Check the SAP documentation for the Ports and Connections that you have to enable in the firewall for your SAP landscape. Consider all SAP components in your setup that require incoming or outgoing communication and connections between the different hosts in your landscape.
Configure the firewalld service on each of your SAP hosts using the methods that fit your requirements best. Consult Configuring firewalls and packet filters for the details on how to use the firewalld service effectively.
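For example, if you keep firewalld running, you can open ports with the firewall-cmd tool. The following commands are only a sketch for a HANA system with instance number 02; the port selection shown here is an assumption and must be derived from the SAP port list for the components in your landscape:

[root]# firewall-cmd --permanent --add-port=30200-30299/tcp   # example only: instance 02 ports (3<nn>00-3<nn>99)
[root]# firewall-cmd --permanent --add-port=50213-50214/tcp   # example only: sapstartsrv HTTP/HTTPS ports for instance 02
[root]# firewall-cmd --reload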
3.2. Configuring the host names in /etc/hosts
For consistent host name resolution between all systems in your HANA and HA setup, we recommend adding the host names of all nodes to the /etc/hosts file on each node.
If you configure the HANA Internal Host Name Resolution you must ensure that the /etc/hosts entries for the same host names are consistent with the HANA configuration.
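For illustration only, HANA internal host name resolution is typically configured in the global.ini file; the following excerpt is a sketch with assumed internal network addresses, and the section name and format must be verified against the SAP HANA documentation for your revision. The host names used there must stay aligned with your /etc/hosts entries:

# Illustrative excerpt of global.ini; addresses are assumptions for a separate internal network
[internal_hostname_resolution]
192.168.200.101 = dc1hana1
192.168.200.102 = dc1hana2
192.168.200.103 = dc1hana3
192.168.200.104 = dc1hana4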
Procedure
Add the host names of all hosts to /etc/hosts on all cluster nodes:

[root]# cat /etc/hosts
...
192.168.100.101 dc1hana1.example.com dc1hana1
192.168.100.102 dc1hana2.example.com dc1hana2
192.168.100.103 dc1hana3.example.com dc1hana3
192.168.100.104 dc1hana4.example.com dc1hana4
192.168.100.121 dc2hana1.example.com dc2hana1
192.168.100.122 dc2hana2.example.com dc2hana2
192.168.100.123 dc2hana3.example.com dc2hana3
192.168.100.124 dc2hana4.example.com dc2hana4
Verification
Check that you can ping the hosts. This step is optional and an example only for a basic verification. The system resolves entries in /etc/hosts when you use the ping command:

[root]# ping dc1hana2.example.com
PING dc1hana2.example.com (192.168.100.102) 56(84) bytes of data.
64 bytes from dc1hana2.example.com (192.168.100.102): icmp_seq=1 ttl=64 time=0.017 ms
…
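To run the same basic check against all hosts at once, you can use a small shell loop. This sketch uses the example host names from this guide; adapt the list to your landscape:

[root]# for h in dc1hana{1..4} dc2hana{1..4}; do ping -c1 -W2 "$h" >/dev/null && echo "$h ok" || echo "$h FAILED"; done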
3.4. Creating the SAP administrative user and group
In a high-availability environment where the highly available service can move between different systems using shared storage, you must configure the service’s users and groups with identical numerical values for their user ID (UID) and group ID (GID) on all nodes. Different IDs for the same service users or groups cause access conflicts and prevent you from switching the service between the cluster nodes.
Prepare the following operating system group:

- sapsys

Prepare the following operating system users:

- sapadm
- <sid>adm, using your target SID
Prerequisites
- You have reserved identical user and group IDs for the required groups and users, for example, in your central identity management system for service users.
Procedure
Create the sapsys group. Use the prepared group ID, for example, ID 10001:

[root]# groupadd -g 10001 sapsys

Create the sapadm user as a member of the sapsys group. The user does not need a login shell. Use the prepared user ID, for example, ID 10200:

[root]# useradd -u 10200 -g sapsys sapadm \
    -c 'SAP Local Administrator' -s /sbin/nologin

Create the <sid>adm user as a member of the sapsys group. Use the prepared user ID, for example, ID 10210 for user rh1adm:

[root]# useradd -u 10210 -g sapsys rh1adm \
    -c 'SAP HANA Administrator' -s /bin/sh

As the user shell, we recommend that you use either /bin/sh or /bin/csh. SAP installations provide user profiles and useful shell aliases in these shells.

- Repeat the steps on all nodes.
Verification
Check that the users sapadm and <sid>adm exist and have the correct groups and IDs configured, for example:

[root]# id sapadm rh1adm
uid=10200(sapadm) gid=10001(sapsys) groups=10001(sapsys)
uid=10210(rh1adm) gid=10001(sapsys) groups=10001(sapsys)

Check that the users have the correct description, home directory and shell defined:

[root]# grep -E 'sapadm|rh1adm' /etc/passwd
sapadm:x:10200:10001:SAP Local Administrator:/home/sapadm:/sbin/nologin
rh1adm:x:10210:10001:SAP HANA Administrator:/home/rh1adm:/bin/sh

- Repeat the check on all nodes and verify that the names and IDs are identical.
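To compare the IDs across all nodes from a single host, you can use a loop like the following sketch. It assumes passwordless ssh access as root (see the next section) and uses the example host names and the rh1adm user of this guide; every node must report identical values:

[root]# for h in dc1hana{1..4} dc2hana{1..4}; do echo "== $h"; ssh "$h" id sapadm rh1adm; done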
3.5. Configuring SSH public-key access for root for all cluster nodes (optional)
Some steps in the configuration potentially require passwordless root access to the cluster nodes. You can achieve this by setting up SSH public-key authentication between the servers. Whether you can use this depends on your specific HANA setup and the security policies of your company.
Passwordless root access might be needed in the following situations:
- Accessing all hosts of the same HANA site during the database installation. This applies when your HANA site consists of more than one node, as in a scale-out setup.
- Accessing the primary site from the secondary site for the HANA system replication configuration.
Procedure
Generate an ssh key pair. When no key type is defined, it creates an Ed25519 key by default, like in the following example for the root user:

[root]# ssh-keygen

Option 1, if you have ssh PasswordAuthentication enabled on the remote system: Use the ssh-copy-id tool to add the ssh public key to the authorized_keys on the remote system. This automatically creates the .ssh/ directory and authorized_keys file with correct permissions for the target user on the remote system. Run it on the host on which you created the ssh key in the previous step and enter the target user password when prompted:

[root]# ssh-copy-id <remote_system>

In the case of the root user, this only works if the ssh config allows PermitRootLogin and you can provide the root user password in the prompt. Check the ssh configuration setting on the remote system if you face access permission issues even after you have enabled PasswordAuthentication. Consult your security policies before you enable these parameters on your HANA systems.

Option 2, if password login to the target user on the remote host is prohibited or otherwise not possible: Configure the ssh key access on the remote system manually.
Create the .ssh/ directory in the target user’s home path on the remote system, if it does not exist yet. Run this on the remote system, for example, for the root user:

[root]# mkdir /root/.ssh

Change the permissions of the new .ssh/ directory. For security reasons the ssh key access does not work when the permissions are not correct. Run this on the remote system:

[root]# chmod 0700 /root/.ssh

Copy the ssh public key from the .pub file that was created by the previous ssh-keygen, for example, id_ed25519.pub in the default setting:

[root]# cat /root/.ssh/id_ed25519.pub

Add the public key to the authorized_keys file. The command creates the file if it does not exist yet, otherwise it appends the key to the existing content. Run this on the remote system, for example, on dc1hana2:

[root]# cat << EOF >> /root/.ssh/authorized_keys
ssh-ed25519 … root@<node1>
EOF

Ensure that the authorized_keys file has the correct permissions, otherwise the ssh key access is blocked for security reasons:

[root]# chmod 0600 /root/.ssh/authorized_keys
Access each system and log in from any source host to any remote host that you require for the setup. On first login you must accept each new connection once in an interactive prompt. This saves each host and key in the ssh known_hosts file by default.

Option 1: Log in from each host to each other host and accept the key fingerprint once to save it to the local known_hosts file. Subsequent logins to the same host will not require further interaction, unless the key changes. This is a security measure to prevent unsolicited changes of the ssh keys. The following example confirms the authenticity of host dc1hana2:

[root]# ssh dc1hana2
The authenticity of host 'dc1hana2 (***)' can't be established.
ED25519 key fingerprint is SHA256:*********************************.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
…

Option 2: If you configure ssh key access between multiple systems you can use ssh-keyscan to collect the public host key from multiple hosts and save it to the local known_hosts file in a single step per host. Run this on each system for which you distributed the public key and list all remote hosts that you potentially access from this node and user, for example, for the root user on host dc1hana1:

[root]# ssh-keyscan -f - >> /root/.ssh/known_hosts
dc1hana1 dc1hana2 dc1hana3 dc1hana4 dc2hana1 dc2hana2 dc2hana3 dc2hana4
<Ctrl-d>
# dc1hana1:22 SSH-2.0-OpenSSH_8.7
dc1hana1 ssh-ed25519 …
# dc1hana2:22 SSH-2.0-OpenSSH_8.7
dc1hana2 ssh-ed25519 …
# dc1hana3:22 SSH-2.0-OpenSSH_8.7
dc1hana3 ssh-ed25519 …
# dc1hana4:22 SSH-2.0-OpenSSH_8.7
dc1hana4 ssh-ed25519 …
# dc2hana1:22 SSH-2.0-OpenSSH_8.7
dc2hana1 ssh-ed25519 …
# dc2hana2:22 SSH-2.0-OpenSSH_8.7
dc2hana2 ssh-ed25519 …
# dc2hana3:22 SSH-2.0-OpenSSH_8.7
dc2hana3 ssh-ed25519 …
# dc2hana4:22 SSH-2.0-OpenSSH_8.7
dc2hana4 ssh-ed25519 …
- -f - allows you to provide a list of hosts on the standard input. Instead of the - you can use a file, which you prepare upfront with the list of hosts. You can also enter a single hostname instead of the -f parameter to collect the key of one host at a time.
- In the case of the standard input list you end the input with Ctrl and d.
- The >> shell redirection after the scan command directly appends the collected keys to the known_hosts file. If the file does not exist yet it is created in the process.
Verification
Check the known_hosts entries, for example, on dc1hana1:

[root]# cat /root/.ssh/known_hosts
dc1hana1 ssh-ed25519 ******************************...
dc1hana2 ssh-ed25519 ******************************...
dc1hana3 ssh-ed25519 ******************************...
dc1hana4 ssh-ed25519 ******************************...
dc2hana1 ssh-ed25519 ******************************...
dc2hana2 ssh-ed25519 ******************************...
dc2hana3 ssh-ed25519 ******************************...
dc2hana4 ssh-ed25519 ******************************...

Test the access from each source system to every remote system and ensure that every connection direction that you possibly need works without interactive prompts:
[root]# ssh <remote_system>
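You can also run this test non-interactively; with -o BatchMode=yes the ssh client fails instead of falling back to a password or fingerprint prompt, which makes missing keys visible immediately. The host list in the following sketch uses the example names of this setup:

[root]# for h in dc1hana{1..4} dc2hana{1..4}; do ssh -o BatchMode=yes "$h" hostname || echo "$h: passwordless access not working"; done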
3.6. Installing a scale-out SAP HANA instance
A HANA scale-out configuration consists of at least 2 HANA instances per system replication site.
Install the HANA instances with the same SID and instance number on all nodes. The setup of the system replication sites must be identical.
The following installation steps are an example of an interactive installation using the command-line interface. Check the SAP HANA Server Installation and Update Guide for more information about installation options and other details.
Prerequisites
- You have installed and configured RHEL 9 on all cluster nodes according to the Operating system requirements.
- You have prepared the details for your HANA instances, see SAP HANA planning.
- You have followed the SAP software download guides in Software Download, downloaded the SAP HANA installation media from the SAP Software Download Center and the media is available on each node.
- You have verified that you can resolve the host names of the additional nodes of one site from the main node of the site.
- You have verified that you can connect to the additional nodes of one site from the main node using the root user and ssh.
- You have configured a time synchronization service on all nodes. See Configuring time synchronization for details.
- You have configured your OS or network firewall services to enable all required communication between the HANA systems. See Configuring the firewalld service for the SAP landscape for references.
Procedure
Go to the directory which contains the installation media, for example, /sapmedia/hana:

[root]# cd /sapmedia/hana

Unpack the installation media:

[root]# unzip <sap_hana_software>.ZIP

Go into the path of the unpacked installation media:

[root]# cd /sapmedia/hana/DATA_UNITS/HDB_LCM_LINUX_<arch>

Run the SAP HANA Lifecycle Management tool (HDBLCM) for an interactive installation:

[root]# ./hdblcm

In the interactive mode the installer asks you for all the required information, including the System ID (SID), the instance number, the filesystem location of data and log volumes, and more.

In a scale-out installation you run the installer on the main node of one HANA site and provide any additional nodes of the same site as an installation parameter. For example, you run the installer for site 1 on dc1hana1 and add node dc1hana2 as an additional host name when the prompt asks for it.

Optionally you can use the batch mode of the command-line installation tool and provide your configuration parameters in one step; an illustrative sketch follows this procedure. For more details see Use Batch Mode to Perform Platform LCM Tasks in the SAP HANA Server Installation and Update Guide.
- Repeat all steps on the main node of the second site. For the HANA system replication to work you must ensure that each HANA site consists of the same number of systems with an identical HANA configuration.
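The following batch mode example is only an illustrative sketch. The configuration file excerpt is hypothetical and shows assumed parameters for site 1 of the example landscape; generate the authoritative template with hdblcm (for example, with the --dump_configfile_template option) and check the valid parameters for your HANA revision in the SAP documentation:

[root]# cat /sapmedia/hana/hdblcm_site1.cfg
# Hypothetical excerpt, not a complete configuration file
sid=RH1
number=02
addhosts=dc1hana2:role=worker,dc1hana3:role=worker,dc1hana4:role=standby

[root]# ./hdblcm --batch --configfile=/sapmedia/hana/hdblcm_site1.cfg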
Verification
Switch to the <sid>adm user:

[root]# su - rh1adm

Check the HANA instance runtime information as user <sid>adm:

rh1adm$ HDB info
USER       PID    PPID  %CPU      VSZ      RSS COMMAND
rh1adm   12525   12524   0.2     8836     5568 -sh
rh1adm   12584   12525   0.0     7520     3968  \_ /bin/sh /usr/sap/RH1/HDB02/HDB info
rh1adm   12613   12584   0.0    10104     3484      \_ ps fx -U rh1adm -o user:8,pid:8,ppid:8,pcpu:5,vsz:10,rss:10,args
rh1adm    8813       1   0.0   566804    41000 hdbrsutil --start --port 30203 --volume 3 …
rh1adm    8124       1   0.0   566724    40972 hdbrsutil --start --port 30201 --volume 1 …
rh1adm    7947       1   0.0     9312     3352 sapstart pf=/usr/sap/RH1/SYS/profile/RH1_HDB02_dc1hana1
rh1adm    7955    7947   0.0   460036    89176  \_ /usr/sap/RH1/HDB02/dc1hana1/trace/hdb.sapRH1_HDB02 -d -nw -f /usr/sap/RH1/HDB02/dc1hana1/daemon.ini pf=/usr/sap/RH1/SYS/profile/RH1_HDB02_dc1hana1
rh1adm    7981    7955  26.1 18612328 14092076      \_ hdbnameserver
rh1adm    8642    7955   0.5  1465380   212048      \_ hdbcompileserver
rh1adm    8645    7955   294  6616736  6049012      \_ hdbpreprocessor
rh1adm    8687    7955  33.9 18931580 14929092      \_ hdbindexserver -port 30203
rh1adm    8690    7955   2.0  5073572  1390440      \_ hdbxsengine -port 30207
rh1adm    9202    7955   0.8  2772836   482088      \_ hdbwebdispatcher
rh1adm    7782       1   0.1   566772    58444 /usr/sap/RH1/HDB02/exe/sapstartsrv pf=/usr/sap/RH1/SYS/profile/RH1_HDB02_dc1hana1
root     11868    7782   0.1    10464     4644  \_ sapuxuserchk 0 128

Verify as <sid>adm on all sites that the HANA instances are running on all nodes in the site and their status is GREEN in the instance list, for example, on site 1:

rh1adm$ sapcontrol -nr ${TINSTANCE} -function GetSystemInstanceList
hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus
dc1hana4, 2, 50213, 50214, 0.3, HDB|HDB_STANDBY, GREEN
dc1hana1, 2, 50213, 50214, 0.3, HDB|HDB_WORKER, GREEN
dc1hana3, 2, 50213, 50214, 0.3, HDB|HDB_WORKER, GREEN
dc1hana2, 2, 50213, 50214, 0.3, HDB|HDB_WORKER, GREEN

Additionally, you can verify the landscapeHostConfiguration.py output for status ok:

rh1adm$ cdpy; python landscapeHostConfiguration.py
| Host     | Host   | Host   | Failover | Remove | Storage   | Storage   | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host    | Host    | Worker  | Worker  |
|          | Active | Status | Status   | Status | Config    | Actual    | Config   | Actual   | Config     | Actual     | Config      | Actual      | Config  | Actual  | Config  | Actual  |
|          |        |        |          |        | Partition | Partition | Group    | Group    | Role       | Role       | Role        | Role        | Roles   | Roles   | Groups  | Groups  |
| -------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| dc1hana1 | yes    | ok     |          |        |         1 |         1 | default  | default  | master 1   | master     | worker      | master      | worker  | worker  | default | default |
| dc1hana2 | yes    | ok     |          |        |         2 |         2 | default  | default  | master 2   | slave      | worker      | slave       | worker  | worker  | default | default |
| dc1hana3 | yes    | ok     |          |        |         3 |         3 | default  | default  | slave      | slave      | worker      | slave       | worker  | worker  | default | default |
| dc1hana4 | yes    | ignore |          |        |         0 |         0 | default  | default  | master 3   | slave      | standby     | standby     | standby | standby | default | -       |

overall host status: ok

Check that the systemd units are installed for the HANA instance and the SAP Host Agent:

[root]# systemctl list-unit-files --all sap* SAP*
UNIT FILE             STATE     PRESET
sapmedia.mount        generated -
saphostagent.service  enabled   disabled
sapinit.service       generated -
SAPRH1_02.service     enabled   disabled
SAP.slice             static    -

5 unit files listed.

- Repeat the steps on all nodes. Note that the HANA profiles contain the individual node name in the format <SID>_HDB<instance>_<node>.
3.7. Disabling SAP HANA instance autostart
In an HA cluster setup, the cluster controls startup and shutdown of the HANA instance. You must configure the HANA instance profile so that it does not start the instance automatically.
Procedure
Go to the HANA instance profile directory:
[root]# cd /hana/shared/<SID>/profile

Edit the instance profile:

[root]# vi <SID>_HDB<instance>_<hostname>

Ensure that Autostart is set to 0.

- Repeat the previous steps for each HANA instance that will be managed as part of the HA cluster.
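If you prefer a non-interactive change, a sed one-liner can set the parameter, as in the following sketch. It assumes the profile contains a line of the form Autostart = 1 and uses the example SID, instance number, and host name of this guide; verify the exact spelling in your profile first:

[root]# sed -i 's/^Autostart[[:space:]]*=.*/Autostart = 0/' /hana/shared/RH1/profile/RH1_HDB02_dc1hana1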
Verification
Check that Autostart = 0 is set in the instance profiles of all HANA instances that will be managed by the HA cluster:

[root]# grep Autostart /hana/shared/RH1/profile/*
/hana/shared/RH1/profile/RH1_HDB02_dc1hana1:Autostart = 0
/hana/shared/RH1/profile/RH1_HDB02_dc1hana2:Autostart = 0
/hana/shared/RH1/profile/RH1_HDB02_dc1hana3:Autostart = 0
/hana/shared/RH1/profile/RH1_HDB02_dc1hana4:Autostart = 0