Chapter 4. Configuring the SAP HANA system replication


You must configure and test the SAP HANA system replication before you can configure the HANA instance in a cluster. Follow the SAP guidelines for the HANA system replication setup: SAP HANA System Replication: Configuration.

4.1. SAP HANA configuration

SAP HANA must be installed and configured identically on both nodes.

Hostname resolution

Both systems must be able to resolve the fully qualified domain names (FQDNs) of both systems. To ensure that the FQDNs can be resolved even without DNS, you can place them in /etc/hosts.

Example of /etc/hosts entries for the SAP HANA scale-up systems:

[root]# cat /etc/hosts
...
192.168.0.11 node1.example.com node1
192.168.0.12 node2.example.com node2
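You can verify the name resolution on both nodes, for example with getent, which queries the same resolver sources as most applications:

[root]# getent hosts node1.example.com
192.168.0.11    node1.example.com node1
[root]# getent hosts node2.example.com
192.168.0.12    node2.example.com node2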
Note

As documented at hostname | SAP Help Portal, SAP HANA only supports hostnames with lowercase characters.

SAP HANA log_mode

For the system replication to work, you must set the SAP HANA log_mode variable to normal, which is the default value.

Verify the current log_mode as the HANA administrative user <sid>adm on both nodes:

rh1adm $ hdbsql -u system -p '<HANA_SYSTEM_PASSWORD>' -i ${TINSTANCE} \
"select value from \"SYS\".\"M_INIFILE_CONTENTS\" where key='log_mode'"
VALUE
"normal"
1 row selected
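If log_mode has been set to a different value, you can switch it back to normal, for example with the following SQL statement (a sketch; the change only takes effect after a restart of the HANA instance, and a new full data backup is required afterwards):

rh1adm $ hdbsql -u system -p '<HANA_SYSTEM_PASSWORD>' -i ${TINSTANCE} \
"ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('persistence', 'log_mode') = 'normal' WITH RECONFIGURE"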

4.2. Performing an initial HANA instance backup

You can only enable the HANA system replication when an initial backup of the SAP HANA instance exists on the node that is planned to be the primary instance of the system replication setup.

You can use SAP HANA tools to create the backup and skip the manual procedure. See SAP HANA Administration Guide - SAP HANA Database Backup and Recovery for more information.

Prerequisites

  • You have a directory, writable by the SAP HANA administrative user <sid>adm, in which the backup files are saved.
  • You have sufficient free space available in the filesystem on which the backup files are stored.

Procedure

  1. Optional: Create a dedicated directory for the backup in a suitable path, for example:

    [root]# mkdir <path>/<SID>-backup

    Replace <path> with a path on your system that has enough free space for the initial backup files.

  2. Change the owner of the backup path to user <sid>adm if the target directory is not already owned or writable by the HANA user, for example:

    [root]# chown <sid>adm:sapsys <path>/<SID>-backup
  3. Change to the <sid>adm user for the remaining steps:

    [root]# su - <sid>adm
  4. Create a backup of the SYSTEMDB as the <sid>adm user. Specify the path under which the backup files will be stored, ensure that the target filesystem has enough free space, and then create the backup:

    rh1adm $ hdbsql -i ${TINSTANCE} -u system -p '<HANA_SYSTEM_PASSWORD>' -d SYSTEMDB \
    "BACKUP DATA USING FILE ('<path>/${SAPSYSTEMNAME}-backup/bkp-SYS')"
    0 rows affected (overall time xx.xxx sec; server time xx.xxx sec)
    • TINSTANCE and SAPSYSTEMNAME are environment variables that are part of the <sid>adm user shell environment. TINSTANCE is the instance number and SAPSYSTEMNAME is the SAP system ID (SID). Both are automatically set to the values of the instance that belongs to the <sid>adm user. You can display both values as shown below.
    • Replace <path> with the path where the <sid>adm user has write access and where there is enough free space left.
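
    For example, you can confirm both values in the <sid>adm shell; with the RH1 example instance 02 used in this chapter, the output would be:

    rh1adm $ echo "${TINSTANCE} ${SAPSYSTEMNAME}"
    02 RH1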
  5. Create a backup of the tenant databases as the <sid>adm user. Specify the path under which the backup files will be stored, and ensure that the target filesystem has enough free space. In this example, the tenant database is named after the SID (${SAPSYSTEMNAME}); repeat the backup for any additional tenants. Create the tenant DB backup:

    rh1adm $ hdbsql -i ${TINSTANCE} -u system -p '<HANA_SYSTEM_PASSWORD>' -d SYSTEMDB \
    "BACKUP DATA FOR ${SAPSYSTEMNAME} USING FILE ('<path>/${SAPSYSTEMNAME}-backup/bkp-${SAPSYSTEMNAME}')"
    0 rows affected (overall time xx.xxx sec; server time xx.xxx sec)

    Replace <path> with the path where the <sid>adm user has write access and where there is enough free space left.

Verification

  1. List the resulting backup files. Example when using /hana/log/RH1-backup as the directory to store the initial backup:

    rh1adm $ ls -lh /hana/log/RH1-backup/
    total 7.2G
    -rw-r-----. 1 rh1adm sapsys 156K May  8 08:40 bkp-RH1_databackup_0_1
    -rw-r-----. 1 rh1adm sapsys  81M May  8 08:40 bkp-RH1_databackup_2_1
    -rw-r-----. 1 rh1adm sapsys 3.6G May  8 08:40 bkp-RH1_databackup_3_1
    -rw-r-----. 1 rh1adm sapsys 160K May  8 08:35 bkp-SYS_databackup_0_1
    -rw-r-----. 1 rh1adm sapsys 3.6G May  8 08:35 bkp-SYS_databackup_1_1
  2. Use the HANA command hdbbackupcheck to confirm the sanity of each instance backup file you created:

    rh1adm $ for i in /hana/log/RH1-backup/*; do hdbbackupcheck "$i"; done
    ...
    Loaded library 'libhdblivecache'
    Backup '/hana/log/RH1-backup/bkp-RH1_databackup_0_1' successfully checked.
    Loaded library 'libhdbcsaccessor'
    ...
    Loaded library 'libhdblivecache'
    Backup '/hana/log/RH1-backup/bkp-SYS_databackup_0_1' successfully checked.
    Loaded library 'libhdbcsaccessor'
    Loaded library 'libhdblivecache'
    Backup '/hana/log/RH1-backup/bkp-SYS_databackup_1_1' successfully checked.

Troubleshooting

  • The backup fails because the <sid>adm user is not able to write to the target directory:

    * 447: backup could not be completed: [2001003]
    createDirectory(path= '/tmp/RH1-backup/', access= rwxrwxr--, recursive= true):
    Permission denied (rc= 13, 'Permission denied') SQLSTATE: HY000

    Ensure that the <sid>adm user can create files inside the target directory that you define in the backup command. Fix the permissions, for example as shown in step 2 of the procedure or in the sketch below.
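
    For the example error above, a fix could look like this (assuming rh1adm is the <sid>adm user):

    [root]# mkdir -p /tmp/RH1-backup
    [root]# chown rh1adm:sapsys /tmp/RH1-backup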

  • The backup fails because the target filesystem runs out of space:

    * 447: backup could not be completed: [2110001]
    Generic stream error: $msg$ - , rc=$sysrc$: $sysmsg$.
    Failed to process item 0x00007fc5796e0000 - '<root>/.bkp-RH1_databackup_3_1'
    ((open, mode= W, file_access= rw-r-----, flags= ASYNC|DIRECT|TRUNCATE|UNALIGNED_SIZE,
    size= 4096), factory= (root= '/tmp/RH1-backup/' (root_access= rwxr-x---,
    flags= AUTOCREATE_PATH|DISKFULL_ERROR, usage= DATA_BACKUP, fs= xfs, config=
    (async_write_submit_active=on,async_write_submit_blocks=all,async_read_submit=on,num_submit_queues=1,num_completion_queues=1,size_kernel_io_queue=512,max_parallel_io_requests=64,min_submit_batch_size=16,max_submit_batch_size=64))
    SQLSTATE: HY000

    Check the free space of the filesystem on which the target directory is located, for example as shown below. Increase the filesystem size, or choose a different path with enough free space available for the backup files.
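
    For example, to check the free space of the filesystem that contains the target directory:

    [root]# df -h /hana/log/RH1-backup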

4.3. Enabling the HANA system replication on the primary node

Enable the HANA system replication on the system that you plan to be the initial primary instance of your system replication setup.

Prerequisites

  • An initial backup of the HANA instance exists on the node that will become the primary instance.

Procedure

  1. Enable the system replication on the HANA instance that becomes the initial primary. Run the command as <sid>adm on the first, or primary, node:

    rh1adm $ hdbnsutil -sr_enable --name=<DC1>
    nameserver is active, proceeding ...
    successfully enabled system as system replication source site
    done.
    • Replace <DC1> with your primary HANA site name.

Verification

  • Check the system replication status as <sid>adm, and verify that it shows the current node as mode: primary, and that site id and site name are populated with the primary instance site information:

    rh1adm $ hdbnsutil -sr_stateConfiguration
    System Replication State
    ~~~~~~~~
    
    mode: primary
    site id: 1
    site name: DC1
    done.

4.4. Registering the secondary HANA instance

You must register the secondary HANA instance to the primary instance to complete the setup of the HANA system replication.

Prerequisites

  • You have installed SAP HANA on the secondary node using the same SID and instance number as the primary instance.
  • You have configured SSH keys for the root user between the cluster nodes.
  • You have opened two terminals on the secondary node, one for the root user and one for the <sid>adm user.

Procedure

  1. Stop the secondary HANA instance. Run the command as the <sid>adm user on node2:

    rh1adm $ HDB stop
    …
    Stop
    OK
    Waiting for stopped instance using: /usr/sap/<SID>/SYS/exe/hdb/sapcontrol -prot NI_HTTP -nr <instance> -function WaitforStopped 600 2
    …
    
    WaitforStopped
    OK
    hdbdaemon is stopped.
  2. In the root user terminal, change to the directory where HANA stores the keys for the system replication encryption:

    [root]# cd /usr/sap/<SID>/SYS/global/security/rsecssfs
  3. Copy the HANA system PKI file SSFS_<SID>.KEY from the primary HANA instance to the secondary instance:

    [root]# rsync -av node1:$PWD/key/SSFS_<SID>.KEY key/
  4. Copy the PKI file SSFS_<SID>.DAT from the primary HANA instance to the secondary instance:

    [root]# rsync -av node1:$PWD/data/SSFS_<SID>.DAT data/
  5. Register the secondary HANA instance to the primary instance, in the <sid>adm user terminal:

    rh1adm $ hdbnsutil -sr_register --remoteHost=node1 \
    --remoteInstance=${TINSTANCE} --replicationMode=sync \
    --operationMode=logreplay --name=<DC2>
    adding site ...
    collecting information ...
    updating local ini files ...
    done.
    • Replace <DC2> with your secondary HANA site name.
    • Choose the values for replicationMode and operationMode according to your requirements for the system replication. For example, valid values for replicationMode are sync, syncmem, and async, and valid values for operationMode are delta_datashipping, logreplay, and logreplay_readaccess.
    • TINSTANCE is an environment variable that is set automatically for user <sid>adm by reading the HANA instance profile. The variable value is the HANA instance number.
  6. Start the secondary HANA instance. Run as <sid>adm on node2:

    rh1adm $ HDB start
    
    ...
    Starting instance using: /usr/sap/<SID>/SYS/exe/hdb/sapcontrol -prot NI_HTTP -nr <instance> -function StartWait 2700 2
    
    ...

Verification

  1. Check that the system replication is running on the secondary instance and that the mode matches the value you used for the replicationMode parameter of the hdbnsutil -sr_register command in step 5. Run as <sid>adm on node2:

    rh1adm $ hdbnsutil -sr_stateConfiguration
    
    System Replication State
    ~~~~~~~~
    
    mode: sync
    site id: 2
    site name: DC2
    active primary site: 1
    
    primary masters: node1
    done.
  2. Change to the Python script directory of the HANA instance as user <sid>adm, on both instances. The easiest way to do this is to use cdpy, an alias that SAP HANA adds to the <sid>adm user shell during the instance installation:

    rh1adm $ cdpy

    In our example this command changes the current directory to /usr/sap/RH1/HDB02/exe/python_support/.

  3. Show the current status of the established HANA system replication, on both instances.

    1. On the primary instance the system replication status is always displayed with all details:

      rh1adm $ python systemReplicationStatus.py
      |Database |Host     |Port  |Service Name |Volume ID |Site ID |Site Name |Secondary |Secondary |Secondary |Secondary |Secondary     |Replication |Replication |Replication    |Secondary    |
      |         |         |      |             |          |        |          |Host      |Port      |Site ID   |Site Name |Active Status |Mode        |Status      |Status Details |Fully Synced |
      |-------- |-------- |----- |------------ |--------- |------- |--------- |----------|--------- |--------- |--------- |------------- |----------- |----------- |-------------- |------------ |
      |SYSTEMDB |node1    |30001 |nameserver   |        1 |      1 |DC1       |node2     |    30001 |        2 |DC2       |YES           |SYNC        |ACTIVE      |               |        True |
      |RH1      |node1    |30007 |xsengine     |        2 |      1 |DC1       |node2     |    30007 |        2 |DC2       |YES           |SYNC        |ACTIVE      |               |        True |
      |RH1      |node1    |30003 |indexserver  |        3 |      1 |DC1       |node2     |    30003 |        2 |DC2       |YES           |SYNC        |ACTIVE      |               |        True |
      
      status system replication site "2": ACTIVE
      overall system replication status: ACTIVE
      
      Local System Replication State
      ~~~~~~~~~~
      
      mode: PRIMARY
      site id: 1
      site name: DC1
      • ACTIVE means that the HANA system replication is in a healthy state and fully synced.
      • SYNCING is displayed while the system replication is being updated on the secondary site, for example after a takeover of the secondary instance to become the new primary.
      • INITIALIZING is shown after the system replication has first been enabled or a full sync has been triggered.
    2. On the secondary instance, the output of systemReplicationStatus.py is less detailed:

      rh1adm $ python systemReplicationStatus.py
      this system is either not running or not primary system replication site
      
      Local System Replication State
      ~~~~~~~~~~
      
      mode: SYNC
      site id: 2
      site name: DC2
      active primary site: 1
      primary masters: node1

4.5. Testing the HANA system replication

We recommend that you test the HANA system replication thoroughly before you proceed with the cluster setup. Verifying the correct system replication behavior helps to prevent unexpected results later, when the HA cluster manages the system replication.

Use timeout values in the cluster resource configuration that cover the measured times of the different tests to ensure that cluster resource operations do not time out prematurely.

You can also test different parameter values in the HANA configuration to optimize the performance by measuring the time that certain activities take when performed manually outside of cluster control.

Perform the tests with realistic data loads and sizes.
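
For example, to measure how long a manual takeover takes, you can wrap the takeover command in time. Run it as <sid>adm on the secondary instance, and only in a test environment outside of cluster control:

rh1adm $ time hdbnsutil -sr_takeover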

Full replication

  • How long does the synchronization take after the newly registered secondary is started until it is in sync with the primary?
  • Are there parameters which can improve the synchronization time?

Lost connection

  • How long does it take, after the connection between the primary and the secondary instance was lost, until they are in sync again? You can monitor the progress as shown in the sketch after this list.
  • Are there parameters which can improve the reconnection and sync times?
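
To observe how long the resynchronization takes, you can poll the replication status on the primary instance, for example as <sid>adm from the python_support directory:

rh1adm $ cdpy
rh1adm $ watch -n 10 python systemReplicationStatus.py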

Takeover

  • How long does the secondary instance take to be fully available after a takeover from the primary?
  • What is the time difference between a normal takeover and a "takeover with handshake"?
  • Are there parameters which can improve the takeover time?

Data consistency

  • Is the data you create available and correct after you perform a takeover?
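
A simple way to test this is to write a marker row before the takeover and read it back on the new primary afterwards. A minimal sketch, where the table name SRTEST is a hypothetical example:

rh1adm $ hdbsql -i ${TINSTANCE} -u system -p '<HANA_SYSTEM_PASSWORD>' -d ${SAPSYSTEMNAME} \
"CREATE COLUMN TABLE SRTEST (ID INT, NOTE NVARCHAR(32))"
rh1adm $ hdbsql -i ${TINSTANCE} -u system -p '<HANA_SYSTEM_PASSWORD>' -d ${SAPSYSTEMNAME} \
"INSERT INTO SRTEST VALUES (1, 'before takeover')"

After the takeover, run a SELECT on the new primary and verify that the row is present:

rh1adm $ hdbsql -i ${TINSTANCE} -u system -p '<HANA_SYSTEM_PASSWORD>' -d ${SAPSYSTEMNAME} \
"SELECT * FROM SRTEST"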

Client reconnect

  • Can the client reconnect to the new primary instance after a takeover?
  • How long does it take for the client to access the new primary after a takeover?
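
For example, you can check from a client system whether the new primary accepts SQL connections; replace <new_primary_hostname> with the hostname of the instance that became primary after the takeover:

rh1adm $ hdbsql -n <new_primary_hostname> -i ${TINSTANCE} -u system \
-p '<HANA_SYSTEM_PASSWORD>' -d ${SAPSYSTEMNAME} "SELECT * FROM DUMMY"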

Primary becomes secondary

  • How long does it take until a former primary is in sync again with the new primary after it is registered as a new secondary? See the sketch after this list.
  • If configured, how long does it take until a client can access the newly registered secondary for read operations?
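
Re-registering a former primary uses the same hdbnsutil -sr_register call as in the registration procedure, now pointing at the new primary. For example, assuming node2 has become the primary, stop the HANA instance on node1 and run as <sid>adm on node1:

rh1adm $ hdbnsutil -sr_register --remoteHost=node2 \
--remoteInstance=${TINSTANCE} --replicationMode=sync \
--operationMode=logreplay --name=<DC1>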