Chapter 4. Configuring the SAP HANA system replication


You must configure and test the SAP HANA system replication before you can configure the HANA instance in a cluster. Follow the SAP guidelines for the HANA system replication setup: SAP HANA System Replication: Configuration.

4.1. SAP HANA configuration

SAP HANA must be installed and configured identically on the system replication sites.

Host name resolution

All hosts must be able to resolve the host names and fully qualified domain names (FQDN) of all HANA systems. To ensure that all host names can be resolved even without DNS, you can place them in /etc/hosts. This is also recommended in general for hosts that are configured in HA clusters.

In addition, you can manage host name resolution internally in SAP HANA. For more details, see Internal Host Name Resolution and Host Name Resolution for System Replication. Ensure that the HANA internal host names and the /etc/hosts entries are consistent.
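For example, /etc/hosts entries for the four example hosts used in this chapter might look as follows. The IP addresses and the example.com domain are illustrative; use the addresses of your own landscape:

```
192.168.0.11  dc1hana1.example.com  dc1hana1
192.168.0.12  dc1hana2.example.com  dc1hana2
192.168.1.11  dc2hana1.example.com  dc2hana1
192.168.1.12  dc2hana2.example.com  dc2hana2
```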

Note

As documented at hostname | SAP Help Portal, SAP HANA only supports hostnames with lowercase characters.

SAP HANA log_mode

For the system replication to work, you must set the SAP HANA log_mode variable to normal, which is the default value.

Verify the current log_mode as the HANA administrative user <sid>adm on both nodes:

rh1adm $ hdbsql -u system -i ${TINSTANCE} \
"select value from "SYS"."M_INIFILE_CONTENTS" where key='log_mode'"
Password: <HANA_SYSTEM_PASSWORD>
VALUE "normal"
1 row selected
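If the query returns a value other than normal, you can switch the parameter back. The following statement is a sketch that targets the persistence section of global.ini at the SYSTEM layer; note that a log_mode change only becomes effective after the affected services are restarted, and check the SAP documentation for your HANA version before applying it:

```sql
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('persistence', 'log_mode') = 'normal' WITH RECONFIGURE;
```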

4.2. Performing an initial HANA database backup

You can enable the HANA system replication only after an initial backup of the SAP HANA database exists on the site that will become the primary site of the system replication setup.

Instead of the manual procedure below, you can also use other SAP HANA tools to create the backup. See SAP HANA Administration Guide - SAP HANA Database Backup and Recovery for more information.

Prerequisites

  • You have a writable directory to which the backup files are saved for the SAP HANA administrative user <sid>adm.
  • You have sufficient free space available in the filesystem on which the backup files are stored.
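You can check the available space with standard tools before you start the backup. The following sketch is illustrative: the BACKUP_PATH and REQUIRED_KB defaults are placeholders, and you should size the estimate from your actual data volume (for example, from a previous backup):

```shell
#!/bin/sh
# Sketch: verify that the backup target has enough free space.
# BACKUP_PATH and REQUIRED_KB are illustrative placeholder values.
BACKUP_PATH="${BACKUP_PATH:-/tmp}"
REQUIRED_KB="${REQUIRED_KB:-1024}"    # adjust to the expected backup size

# df -P gives POSIX single-line output; column 4 is the available space in KiB
AVAIL_KB=$(df -Pk "$BACKUP_PATH" | awk 'NR==2 {print $4}')

if [ "$AVAIL_KB" -ge "$REQUIRED_KB" ]; then
    echo "OK: ${AVAIL_KB} KiB available in ${BACKUP_PATH}"
else
    echo "ERROR: only ${AVAIL_KB} KiB available, ${REQUIRED_KB} KiB required" >&2
    exit 1
fi
```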

Procedure

  1. Optional: Create a dedicated directory for the backup in a suitable path, for example:

    [root]# mkdir <path>/<SID>-backup

    Replace <path> with a path on your system that has enough free space for the initial backup files.

  2. Change the owner of the backup path to user <sid>adm if the target directory is not already owned or writable by the HANA user, for example:

    [root]# chown <sid>adm:sapsys <path>/<SID>-backup
  3. Change to the <sid>adm user for the remaining steps:

    [root]# su - <sid>adm
  4. Create a backup of the SYSTEMDB as the <sid>adm user. Specify the path where the backup files will be stored, ensure that the target filesystem has enough free space, then create the backup:

    rh1adm $ hdbsql -i ${TINSTANCE} -u system -d SYSTEMDB \
    "BACKUP DATA USING FILE ('<path>/${SAPSYSTEMNAME}-backup/bkp-SYS')"
    Password: <HANA_SYSTEM_PASSWORD>
    0 rows affected (overall time xx.xxx sec; server time xx.xxx sec)
    • $TINSTANCE and $SAPSYSTEMNAME are environment variables that are part of the <sid>adm user shell environment. $TINSTANCE is the instance number and $SAPSYSTEMNAME is the SID. Both are automatically set to the instance values related to the <sid>adm user.
    • Replace <path> with the path where the <sid>adm user has write access and where there is enough free space left.
  5. Create a backup of all tenant databases as the <sid>adm user. Specify the path where the backup files will be stored and ensure that the target filesystem has enough free space. Create the tenant DB backup:

    rh1adm $ hdbsql -i ${TINSTANCE} -u system -d SYSTEMDB \
    "BACKUP DATA FOR ${SAPSYSTEMNAME} USING FILE ('<path>/${SAPSYSTEMNAME}-backup/bkp-${SAPSYSTEMNAME}')"
    Password: <HANA_SYSTEM_PASSWORD>
    0 rows affected (overall time xx.xxx sec; server time xx.xxx sec)

    Replace <path> with the path where the <sid>adm user has write access and where there is enough free space left.
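Steps 4 and 5 differ only in the FOR clause of the BACKUP DATA statement. The following dry-run sketch prints both commands to make the difference visible; the fallback values for SAPSYSTEMNAME and TINSTANCE are placeholders, since in a real <sid>adm shell both are already set by the HANA profile:

```shell
#!/bin/sh
# Dry-run sketch: print the backup statements for SYSTEMDB and the tenant DB.
# The fallback values are placeholders for illustration only.
SAPSYSTEMNAME="${SAPSYSTEMNAME:-RH1}"
TINSTANCE="${TINSTANCE:-02}"
BACKUP_DIR="/hana/log/${SAPSYSTEMNAME}-backup"

# SYSTEMDB backup: no FOR clause
SYS_CMD="BACKUP DATA USING FILE ('${BACKUP_DIR}/bkp-SYS')"
# Tenant backup: also runs against SYSTEMDB, but names the tenant with FOR
TENANT_CMD="BACKUP DATA FOR ${SAPSYSTEMNAME} USING FILE ('${BACKUP_DIR}/bkp-${SAPSYSTEMNAME}')"

echo "hdbsql -i ${TINSTANCE} -u system -d SYSTEMDB \"${SYS_CMD}\""
echo "hdbsql -i ${TINSTANCE} -u system -d SYSTEMDB \"${TENANT_CMD}\""
```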

Verification

  1. List the resulting backup files. Example when using /hana/log/RH1-backup as the directory to store the initial backup:

    rh1adm $ ls -lh /hana/log/RH1-backup/
    total 7.3G
    -rw-r-----. 1 rh1adm sapsys 156K Sep 15 10:58 bkp-RH1_databackup_0_1
    -rw-r-----. 1 rh1adm sapsys  81M Sep 15 10:58 bkp-RH1_databackup_2_1
    -rw-r-----. 1 rh1adm sapsys 3.6G Sep 15 10:58 bkp-RH1_databackup_3_1
    -rw-r-----. 1 rh1adm sapsys  81M Sep 15 10:58 bkp-RH1_databackup_4_1
    -rw-r-----. 1 rh1adm sapsys 164K Sep 15 10:56 bkp-SYS_databackup_0_1
    -rw-r-----. 1 rh1adm sapsys 3.6G Sep 15 10:57 bkp-SYS_databackup_1_1
  2. Use the HANA command hdbbackupcheck to confirm the sanity of each backup file you created:

    rh1adm $ for i in /hana/log/RH1-backup/*; do hdbbackupcheck "$i"; done
    ...
    Loaded library 'libhdblivecache'
    Backup '/hana/log/RH1-backup/bkp-RH1_databackup_0_1' successfully checked.
    Loaded library 'libhdbcsaccessor'
    ...
    Loaded library 'libhdblivecache'
    Backup '/hana/log/RH1-backup/bkp-SYS_databackup_0_1' successfully checked.
    Loaded library 'libhdbcsaccessor'
    Loaded library 'libhdblivecache'
    Backup '/hana/log/RH1-backup/bkp-SYS_databackup_1_1' successfully checked.

Troubleshooting

  • The backup fails when the <sid>adm user is not able to write to the target directory:

    * 447: backup could not be completed: [2001003]
    createDirectory(path= '/tmp/RH1-backup/', access= rwxrwxr--, recursive= true):
    Permission denied (rc= 13, 'Permission denied') SQLSTATE: HY000

    Ensure that the <sid>adm user can create files inside of the target directory you define in the backup command. Fix the permissions, for example using step 2 of the procedure.

  • The backup fails because the target filesystem runs out of space:

    * 447: backup could not be completed: [2110001]
    Generic stream error: $msg$ - , rc=$sysrc$: $sysmsg$.
    Failed to process item 0x00007fc5796e0000 - '<root>/.bkp-RH1_databackup_3_1'
    ((open, mode= W, file_access= rw-r-----, flags= ASYNC|DIRECT|TRUNCATE|UNALIGNED_SIZE,
    size= 4096), factory= (root= '/tmp/RH1-backup/' (root_access= rwxr-x---,
    flags= AUTOCREATE_PATH|DISKFULL_ERROR, usage= DATA_BACKUP, fs= xfs, config=
    (async_write_submit_active=on,async_write_submit_blocks=all,async_read_submit=on,num_submit_queues=1,num_completion_queues=1,size_kernel_io_queue=512,max_parallel_io_requests=64,min_submit_batch_size=16,max_submit_batch_size=64))
    SQLSTATE: HY000

    Check the free space of the filesystem on which the target directory is located. Increase the filesystem size or choose a different path with enough free space available for the backup files.

4.3. Configuring the primary HANA replication site

Enable the HANA system replication on the system that you plan to use as the initial primary site of the system replication setup.

Prerequisites

  • You have created an initial backup of the SAP HANA database on the site that becomes the primary site.

Procedure

  1. Enable the system replication on the HANA site that becomes the initial primary. Run the command as <sid>adm on the first, or primary, node:

    rh1adm $ hdbnsutil -sr_enable --name=<site1>
    nameserver is active, proceeding ...
    successfully enabled system as system replication source site
    done.
    • Replace <site1> with your primary HANA site name, for example, DC1.

Verification

  • Check the system replication configuration as <sid>adm, and verify that it shows the current node as mode: primary, and that site id and site name are populated with the primary site information:

    rh1adm $ hdbnsutil -sr_stateConfiguration
    System Replication State
    ~~~~~~~~
    
    mode: primary
    site id: 1
    site name: DC1
    done.
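In a validation script, you can check the mode non-interactively by parsing the command output. In the following sketch the output is stubbed with sample text; in practice, pipe `hdbnsutil -sr_stateConfiguration` into the same awk call:

```shell
#!/bin/sh
# Sketch: extract the "mode" field from sr_stateConfiguration output.
# The sample text stands in for the real command output.
sample_output='System Replication State
~~~~~~~~

mode: primary
site id: 1
site name: DC1
done.'

mode=$(printf '%s\n' "$sample_output" | awk -F': ' '$1 == "mode" {print $2}')

if [ "$mode" = "primary" ]; then
    echo "site is configured as system replication primary"
else
    echo "unexpected mode: ${mode}" >&2
fi
```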

4.4. Configuring the secondary HANA replication site

You must register the secondary HANA site to the primary site to complete the setup of the HANA system replication.

Prerequisites

  • You have installed SAP HANA on the secondary nodes using the same SID and instance number as the primary instances.
  • You have configured SSH public-key access for the root user between the cluster nodes.
  • You have configured the firewall rules in your network infrastructure or on each host operating system to allow the connections that your HANA landscape requires for the system replication connection.
  • You have opened two terminals on a secondary node, for example, on dc2hana1: one for the root user and one for the <sid>adm user.

Procedure

  1. Stop the secondary HANA instances. Run as the <sid>adm user on one secondary instance, for example, on dc2hana1:

    rh1adm $ sapcontrol -nr ${TINSTANCE} -function StopSystem HDB
  2. Run this and the following steps only on one node on the secondary site, for example, on dc2hana1. Change to the directory on the shared filesystem where HANA stores the keys of the system replication encryption on one node of the secondary site:

    [root]# cd /hana/shared/<SID>/global/security/rsecssfs
  3. Copy the HANA system PKI file SSFS_<SID>.KEY from the primary HANA site to the secondary site on one secondary node:

    [root]# rsync -av <node1>:$PWD/key/SSFS_<SID>.KEY key/
    • Replace <node1> with a primary node, for example, dc1hana1.
    • Replace <SID> with your HANA SID, for example, RH1.
  4. Copy the PKI file SSFS_<SID>.DAT from the primary HANA site to the secondary site on one secondary node in the same way as the previous step:

    [root]# rsync -av <node1>:$PWD/data/SSFS_<SID>.DAT data/
  5. Register the secondary HANA site to the primary site. Run this in the <sid>adm user terminal on a secondary node:

    rh1adm $ hdbnsutil -sr_register --remoteHost=<node1> \
    --remoteInstance=${TINSTANCE} --replicationMode=sync \
    --operationMode=logreplay --name=<site2>
    adding site ...
    collecting information ...
    updating local ini files ...
    done.
    • Replace <node1> with a primary node, for example, dc1hana1.
    • Replace <site2> with your secondary HANA site name, for example, DC2.
    • Choose the values for replicationMode and operationMode according to your requirements for the system replication.
    • $TINSTANCE is an environment variable that is set automatically for user <sid>adm by reading the HANA instance profile. The variable value is the HANA instance number.
  6. Start the secondary HANA instances. Run as <sid>adm on one HANA instance on the secondary site:

    rh1adm $ sapcontrol -nr ${TINSTANCE} -function StartSystem HDB
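The registration command in step 5 takes one value each for the two mode parameters. The following dry-run sketch assembles the command and lists the commonly documented choices as comments; verify the accepted values against the documentation of your HANA version. The node and site names are the examples used in this chapter:

```shell
#!/bin/sh
# Dry-run sketch of the hdbnsutil -sr_register call from step 5.
#   --replicationMode: sync, syncmem, or async
#   --operationMode:   delta_datashipping, logreplay, or logreplay_readaccess
# The fallback instance number is a placeholder for illustration.
TINSTANCE="${TINSTANCE:-02}"
REMOTE_HOST="dc1hana1"
SITE_NAME="DC2"
REPL_MODE="sync"
OP_MODE="logreplay"

REGISTER_CMD="hdbnsutil -sr_register --remoteHost=${REMOTE_HOST} --remoteInstance=${TINSTANCE} --replicationMode=${REPL_MODE} --operationMode=${OP_MODE} --name=${SITE_NAME}"

echo "${REGISTER_CMD}"
```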

Verification

  1. Check that the system replication is running on the secondary site and the mode matches the value you used for the replicationMode parameter in the hdbnsutil -sr_register command. Run as <sid>adm on one node on the secondary site, for example, dc2hana1:

    rh1adm $ hdbnsutil -sr_stateConfiguration
    
    System Replication State
    ~~~~~~~~
    
    mode: sync
    site id: 2
    site name: DC2
    active primary site: 1
    
    primary masters: dc1hana1
  2. Change to the Python script directory of the HANA instance as user <sid>adm, on both sites. The easiest way to do this is to use cdpy, which is an alias built into the <sid>adm user shell that SAP HANA populates during the instance installation:

    rh1adm $ cdpy

    In our example this command changes the current directory to /usr/sap/RH1/HDB02/exe/python_support/.

  3. Show the current status of the established HANA system replication, on both HANA sites.

    1. On the primary site the system replication status is always displayed with all details:

      rh1adm $ python systemReplicationStatus.py
      |Database |Host     |Port  |Service Name |Volume ID |Site ID |Site Name |Secondary |Secondary |Secondary |Secondary |Secondary     |Replication |Replication |Replication    |Secondary    |
      |         |         |      |             |          |        |          |Host      |Port      |Site ID   |Site Name |Active Status |Mode        |Status      |Status Details |Fully Synced |
      |-------- |-------- |----- |------------ |--------- |------- |--------- |--------- |--------- |--------- |--------- |------------- |----------- |----------- |-------------- |------------ |
      |RH1      |dc1hana2 |30003 |indexserver  |        4 |      1 |DC1       |dc2hana2  |    30003 |        2 |DC2       |YES           |SYNC        |ACTIVE      |               |        True |
      |SYSTEMDB |dc1hana1 |30001 |nameserver   |        1 |      1 |DC1       |dc2hana1  |    30001 |        2 |DC2       |YES           |SYNC        |ACTIVE      |               |        True |
      |RH1      |dc1hana1 |30007 |xsengine     |        2 |      1 |DC1       |dc2hana1  |    30007 |        2 |DC2       |YES           |SYNC        |ACTIVE      |               |        True |
      |RH1      |dc1hana1 |30003 |indexserver  |        3 |      1 |DC1       |dc2hana1  |    30003 |        2 |DC2       |YES           |SYNC        |ACTIVE      |               |        True |
      
      status system replication site "2": ACTIVE
      overall system replication status: ACTIVE
      
      Local System Replication State
      ~~~~~~~~~~
      
      mode: PRIMARY
      site id: 1
      site name: DC1
      • ACTIVE means that the HANA system replication is in a healthy state and fully synced.
      • SYNCING is displayed while the system replication is being updated on the secondary site, for example after a takeover of the secondary site to become the new primary.
      • INITIALIZING is shown after the system replication has first been enabled or a full sync has been triggered.
    2. On the secondary site, the output of systemReplicationStatus.py is less detailed. Check the status on one secondary node:

      rh1adm $ python systemReplicationStatus.py
      this system is either not running or not primary system replication site
      
      Local System Replication State
      ~~~~~~~~~~
      
      mode: SYNC
      site id: 2
      site name: DC2
      active primary site: 1
      primary masters: dc1hana1
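For automation, the exit code of systemReplicationStatus.py can be evaluated instead of its text output. The SAPHana resource agents interpret exit code 15 as "overall status ACTIVE"; confirm the meaning of the codes for your HANA version. The call is stubbed in this sketch so the logic is self-contained:

```shell
#!/bin/sh
# Sketch: evaluate the systemReplicationStatus.py exit code in a script.
# The real call is stubbed with a fixed return code for illustration.
check_replication_status() {
    # in practice: python systemReplicationStatus.py >/dev/null 2>&1; return $?
    return 15
}

check_replication_status
rc=$?
if [ "$rc" -eq 15 ]; then
    echo "system replication is ACTIVE"
else
    echo "system replication is not fully active (rc=${rc})"
fi
```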

4.5. Testing the HANA system replication

We recommend that you test the HANA system replication thoroughly before you proceed with the cluster setup. Verifying correct system replication behavior helps prevent unexpected results later, when the HA cluster manages the system replication.

Use timeout values in the cluster resource configuration that cover the measured times of the different tests to ensure that cluster resource operations do not time out prematurely.

You can also test different parameter values in the HANA configuration to optimize the performance by measuring the time that certain activities take when performed manually outside of cluster control.

Perform the tests with realistic data loads and sizes.
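To derive cluster timeout values from these tests, record how long each operation takes. The following sketch shows a minimal timing wrapper; the sleep is a placeholder for the operation under test, for example a takeover or an instance start:

```shell
#!/bin/sh
# Sketch: measure the duration of a test step in whole seconds.
start=$(date +%s)
sleep 1                     # placeholder for the tested operation
end=$(date +%s)
elapsed=$((end - start))
echo "elapsed: ${elapsed}s"
```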

Full replication

  • How long does the synchronization take after the newly registered secondary is started until it is in sync with the primary?
  • Are there parameters which can improve the synchronization time?

Lost connection

  • After the connection between the primary and the secondary site is lost, how long does it take until both sites are in sync again?
  • Are there parameters which can improve the reconnection and sync times?

Takeover

  • How long does the secondary site take to be fully available after a takeover from the primary?
  • What is the time difference between a normal takeover and a "takeover with handshake"?
  • Are there parameters which can improve the takeover time?

Data consistency

  • Is the data you create available and correct after you perform a takeover?

Client reconnect

  • Can the client reconnect to the new primary site after a takeover?
  • How long does it take for the client to access the new primary after a takeover?

Primary becomes secondary

  • After a former primary is registered as the new secondary, how long does it take until it is in sync with the new primary?
  • If configured, how long does it take until a client can access the newly registered secondary for read operations?