
Chapter 2. Requirements


2.1. Subscription

It’s important to keep the subscription, kernel, and patch level identical on all cluster nodes.

To be able to use this HA solution, either the RHEL for SAP Solutions subscription (for on-premises or BYOS setups in public cloud environments) or the RHEL for SAP with High Availability and Update Services subscription (when using PAYG in public cloud environments) is required for all cluster nodes. In addition, the SAP NetWeaver, SAP Solutions, and High Availability repos must be enabled on each cluster node.

Follow this kbase article to enable the repos required for this environment on both nodes.
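
As a minimal sketch, on RHEL 8 on x86_64 with Update Services, the repos could be enabled as shown below. The repository IDs vary by RHEL version, architecture, and subscription type, so verify the correct IDs for your environment:

    # enable the SAP and High Availability repos on each cluster node
    subscription-manager repos \
        --enable=rhel-8-for-x86_64-sap-solutions-e4s-rpms \
        --enable=rhel-8-for-x86_64-sap-netweaver-e4s-rpms \
        --enable=rhel-8-for-x86_64-highavailability-e4s-rpms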

2.2. Pacemaker Resource Agents

For a Pacemaker-based HA cluster to manage both SAP HANA System Replication and ENSA2, the following resource agents are required.

2.2.1. SAPInstance

The SAPInstance resource agent will be used for managing the ASCS and ERS resources in this example. All operations of the SAPInstance resource agent are done by using the SAP start-up service framework sapstartsrv.
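
For illustration, an ASCS resource could be created as sketched below; the SID (S4H), instance number (20), virtual hostname, and resource and group names are hypothetical placeholders, and the exact values depend on your installation:

    # ASCS instance, grouped with its filesystem and VIP resources
    pcs resource create s4h_ascs20 SAPInstance \
        InstanceName="S4H_ASCS20_s4ascs" \
        START_PROFILE=/sapmnt/S4H/profile/S4H_ASCS20_s4ascs \
        AUTOMATIC_RECOVER=false \
        meta resource-stickiness=5000 \
        --group s4h_ASCS20_group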

2.2.2. SAPHanaTopology (Cloned Resource)

This resource agent gathers the status and configuration of SAP HANA System Replication on each cluster node. The data this agent stores in the cluster node attributes is essential for the SAPHana resource agent to work properly.
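
A minimal sketch of creating this cloned resource (the SID, instance number, and resource name are hypothetical placeholders):

    # gather HANA System Replication status on every cluster node
    pcs resource create SAPHanaTopology_S4H_00 SAPHanaTopology \
        SID=S4H InstanceNumber=00 \
        clone clone-max=2 clone-node-max=1 interleave=true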

2.2.3. SAPHana (Promotable Cloned Resource)

This resource agent is responsible for starting, stopping, and relocating (failing over) the SAP HANA database. Based on the information gathered by SAPHanaTopology, it interacts with the SAP HANA database to carry out these operations, and it adds further information about the SAP HANA status on each cluster node to the cluster node attributes.
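
For example, a promotable SAPHana clone could be created as follows; the SID, instance number, and parameter values are illustrative, and AUTOMATED_REGISTER in particular must be chosen to match your replication policy:

    # promotable clone managing the HANA primary and secondary
    pcs resource create SAPHana_S4H_00 SAPHana \
        SID=S4H InstanceNumber=00 \
        PREFER_SITE_TAKEOVER=true DUPLICATE_PRIMARY_TIMEOUT=7200 \
        AUTOMATED_REGISTER=true \
        promotable notify=true clone-max=2 clone-node-max=1 interleave=true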

2.2.4. Filesystem

The Pacemaker cluster resource agent for filesystems. It manages a filesystem on a shared storage medium, for example one exported via NFS or iSCSI.
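
As a sketch, an NFS-backed instance filesystem could be managed like this (server name, export path, directory, and resource and group names are hypothetical):

    # mount the ASCS instance directory from an NFS export
    pcs resource create s4h_fs_ascs20 Filesystem \
        device='<nfs_server>:/export/S4H/ASCS' \
        directory='/usr/sap/S4H/ASCS20' fstype='nfs' \
        --group s4h_ASCS20_group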

2.2.5. IPaddr2 (or other RAs for managing VIPs on CCSPs)

Manages virtual IPv4 and IPv6 addresses and aliases.
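
For example (the IP address and resource and group names are placeholders):

    # virtual service IP for the ASCS instance
    pcs resource create s4h_vip_ascs20 IPaddr2 \
        ip=192.168.200.101 \
        --group s4h_ASCS20_group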

2.3. Two-node cluster environment

Since this is a Cost-Optimized scenario, we will focus only on a 2-node cluster environment. ENSA1 can only be configured in a 2-node cluster, where the ASCS can fail over to the node where the ERS is running. ENSA2, on the other hand, supports clusters with more than 2 nodes; however, SAP HANA Scale-Up instances are limited to 2-node clusters, so this Cost-Optimized document keeps everything simple by using only 2 nodes in the cluster.
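
As a minimal sketch, assuming hypothetical hostnames node1 and node2 and a hypothetical cluster name, a 2-node cluster could be set up as follows:

    # authenticate the nodes and create the 2-node cluster
    pcs host auth node1 node2
    pcs cluster setup cluster_s4h node1 node2
    pcs cluster start --all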

2.4. Storage requirements

Directories created for the S/4HANA installation should be placed on shared storage, following the rules described below:

2.4.1. Instance Specific Directory

There must be a separate SAN LUN or NFS export for the ASCS and ERS instances that can be mounted by the cluster on each node.

For example, for the ASCS and ERS instances, the instance-specific directory must be present on the corresponding node, as shown below:

  • ASCS node: /usr/sap/<SID>/ASCS<Ins#>
  • ERS node: /usr/sap/<SID>/ERS<Ins#>
  • Both nodes: /hana/

    • Note: As System Replication is used, the /hana/ directory is local (non-shared) on each node.

Note: For the Application Servers, the following directory must be made available on the nodes where the Application Server instances will run:

  • App Server Node(s) (D<Ins#>): /usr/sap/<SID>/D<Ins#>

When using SAN LUNs for the instance directories, customers must use HA-LVM to ensure that the instance directories can only be mounted on one node at a time.
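
For instance, with HA-LVM the volume group holding an instance directory could be activated exclusively via the LVM-activate resource agent (the volume group and resource and group names are hypothetical):

    # exclusive VG activation so the LUN is only mounted on one node
    pcs resource create s4h_lvm_ascs20 LVM-activate \
        vgname=vg_ascs vg_access_mode=system_id \
        --group s4h_ASCS20_group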

When using NFS exports, if the directories are created on the same directory tree on an NFS file server, such as Azure NetApp Files or Amazon EFS, the option force_unmount=safe must be used when configuring the Filesystem resource. This option will ensure that the cluster only stops the processes running on the specific NFS export instead of stopping all processes running on the directory tree where the exports are created.
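
Assuming a Filesystem resource like the one sketched in Section 2.2.4 (resource name hypothetical), the option could be set as follows:

    # stop only the processes using this export when unmounting
    pcs resource update s4h_fs_ascs20 force_unmount=safe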

2.4.2. Shared Directories

The following mount points must be available on the ASCS, ERS, HANA, and Application Server nodes:

  • /sapmnt
  • /usr/sap/trans
  • /usr/sap/<SID>/SYS

Shared storage for these mount points can be provided, for example, by an NFS file server such as Azure NetApp Files or Amazon EFS, as mentioned above.

These mount points must either be managed by the cluster or mounted before the cluster is started.
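
If the mount points are not managed by the cluster, they could, for example, be mounted at boot from an NFS server via /etc/fstab (the server name, export paths, and SID are hypothetical):

    # /etc/fstab entries on all nodes
    nfs_server:/export/sapmnt    /sapmnt           nfs  defaults  0 0
    nfs_server:/export/saptrans  /usr/sap/trans    nfs  defaults  0 0
    nfs_server:/export/S4H/SYS   /usr/sap/S4H/SYS  nfs  defaults  0 0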
