Chapter 2. Requirements

2.1. Subscriptions and repositories

It is important to keep the subscription, kernel, and patch level identical on all cluster nodes and to ensure that the correct repositories are enabled.

For guidelines on how to enable the required subscriptions and repositories for running SAP NetWeaver or SAP S/4HANA application servers on RHEL 8 and having them managed by the RHEL HA Add-On, see RHEL for SAP Subscriptions and Repositories.
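
As an illustration, on an x86_64 system using the Update Services for SAP Solutions (E4S) repositories, the required repositories could be enabled with subscription-manager roughly as follows. This is a sketch only; the exact repository names depend on the RHEL minor release, architecture, and subscription type, so verify them against the documentation referenced above.

    # Example only: the repository names shown are the E4S variants for RHEL 8 on x86_64;
    # adjust them to match your subscription and architecture.
    subscription-manager repos \
        --enable="rhel-8-for-x86_64-baseos-e4s-rpms" \
        --enable="rhel-8-for-x86_64-appstream-e4s-rpms" \
        --enable="rhel-8-for-x86_64-sap-netweaver-e4s-rpms" \
        --enable="rhel-8-for-x86_64-highavailability-e4s-rpms"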

2.2. Storage requirements

The directories used by a SAP S/4HANA installation that is managed by the cluster must be set up according to the guidelines provided by SAP. See SAP Directories for more information.

2.2.1. Local directories

As per SAP’s guidance, the /usr/sap/, /usr/sap/SYS/, and /usr/sap/<SAPSID>/ directories should be created locally on each node. While /usr/sap/ will contain some additional node-specific files and directories after the installation of the SAP system (for example, /usr/sap/sapservices and /usr/sap/hostctrl), /usr/sap/SYS/ only contains symlinks to other files and directories, and /usr/sap/<SAPSID>/ is primarily used as a mountpoint for the instance-specific directories.

2.2.2. Instance-Specific Directories

For the (A)SCS, ERS, and any other application server instance that is managed by the cluster, the instance-specific directory must be created on a separate SAN LUN or NFS export that can be mounted by the cluster as a local directory on the node where an instance is supposed to be running. For example:

  • (A)SCS: /usr/sap/<SAPSID>/ASCS<Ins#>/
  • ERS: /usr/sap/<SAPSID>/ERS<Ins#>/
  • App Server: /usr/sap/<SAPSID>/D<Ins#>/

The cluster configuration must include resources for managing the filesystems of the instance directories as part of the resource group that is used to manage the instance and its virtual IP, so that the cluster can automatically mount the filesystem on the node where the instance is supposed to be running.
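
For example, a Filesystem resource for the (A)SCS instance directory could be added to the same resource group as the virtual IP with commands along the following lines. This is a minimal sketch; the SID S4H, instance number 20, resource and group names, device path, and IP address are hypothetical placeholders.

    # Virtual IP for the (A)SCS instance (address is a placeholder)
    pcs resource create s4h_vip_ascs20 IPaddr2 ip=192.168.200.101 \
        --group s4h_ASCS20_group

    # Filesystem holding /usr/sap/S4H/ASCS20 (device path is a placeholder)
    pcs resource create s4h_fs_ascs20 Filesystem \
        device="/dev/vg_s4h_ascs/lv_ascs" directory="/usr/sap/S4H/ASCS20" \
        fstype="xfs" --group s4h_ASCS20_group

The resource that manages the SAP instance itself (typically a SAPInstance resource) would then be added to the same group after the Filesystem resource.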

When using SAN LUNs for instance-specific directories, customers must use HA-LVM to ensure that the instance directories can only be mounted on one node at a time.
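
On RHEL 8, HA-LVM is typically implemented with the LVM-activate resource agent using system ID based volume group access control. A minimal sketch, assuming a hypothetical volume group vg_s4h_ascs and the resource and group names from the example above, might look like this:

    # Assumes system_id_source = "uname" (or similar) is set in /etc/lvm/lvm.conf on all
    # nodes so that the volume group can only be activated on one node at a time.
    pcs resource create s4h_lvm_ascs20 LVM-activate \
        vgname="vg_s4h_ascs" vg_access_mode="system_id" \
        --group s4h_ASCS20_group --before s4h_fs_ascs20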

The resources for managing the logical volumes (if SAN LUNs are used) and the filesystems must always be configured before the resource that is used for managing the SAP instance, to ensure that the filesystem is mounted when the cluster attempts to start the instance itself.
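
Within a resource group, members are started in the order in which they are listed, so placing the LVM and Filesystem resources before the SAP instance resource in the group (as in the sketches above) already enforces this ordering. If the resources are not kept in a single group, an explicit order constraint can be used instead, for example (resource names are hypothetical):

    pcs constraint order start s4h_fs_ascs20 then start s4h_ascs20_instance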

With the exception of NFS, using a shared file system (for example, GFS2) to host all the instance-specific directories and make them available on all cluster nodes at the same time is not supported for the solution described in this document.

When using NFS exports for the instance-specific directories, if the directories are created in the same directory tree on an NFS file server, such as Azure NetApp Files (ANF) or Amazon EFS, the option force_unmount=safe must be used when configuring the Filesystem resource. This option ensures that the cluster only stops the processes running on the specific NFS export instead of stopping all processes running on the directory tree where the exports have been created (see During failover of a pacemaker resource, a Filesystem resource kills processes not using the filesystem for more information).
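
For example, a Filesystem resource for an ERS instance directory on an NFS export could be configured roughly as follows. This is a sketch only; the NFS server name, export path, mount options, and resource/group names are hypothetical and should follow the recommendations of the NFS provider.

    pcs resource create s4h_fs_ers29 Filesystem \
        device="nfs-server.example.com:/export/S4H/ERS29" \
        directory="/usr/sap/S4H/ERS29" fstype="nfs" \
        options="vers=4.1,hard" force_unmount="safe" \
        --group s4h_ERS29_group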

2.2.3. Shared Directories

The following directories must be available on all servers running SAP instances of an SAP system:

  • /sapmnt/
  • /usr/sap/trans/

The /sapmnt/ directory must also be accessible on all other servers that are running services that are part of the SAP system (for example, the servers hosting the HANA DB instances or servers hosting additional application servers not managed by the cluster).

To share the /sapmnt/ and /usr/sap/trans/ directories between all the servers hosting services of the same SAP system, one of the following methods can be used:

  • The shared directories can be statically mounted via /etc/fstab (see the example entries below).
  • The mounts can be managed by the cluster (in this case, it must be ensured that the cluster mounts the /sapmnt/ directory on the cluster nodes before attempting to start any SAP instances, by setting up appropriate constraints).
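
For instance, if the static /etc/fstab approach is used, entries like the following could be added on every server. The NFS server name, export paths, and mount options are hypothetical examples and should be adapted to the storage environment.

    # Example /etc/fstab entries (server name, export paths, and options are placeholders)
    nfs-server.example.com:/export/sapmnt        /sapmnt         nfs   defaults,vers=4.1,hard   0 0
    nfs-server.example.com:/export/usrsaptrans   /usr/sap/trans  nfs   defaults,vers=4.1,hard   0 0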

2.3. Fencing/STONITH

As documented in Support Policies for RHEL High Availability Clusters - General Requirements for Fencing/STONITH, a working fencing/STONITH device must be enabled on each cluster node in order for an HA cluster setup using the RHEL HA Add-On to be fully supported.

Which fencing/STONITH device to use depends on the platform the cluster is running on. See the Fencing/STONITH section of the Support Policies for RHEL High Availability Clusters for recommendations on fencing agents, or consult your hardware or cloud provider to determine which fence device to use on their platform.
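
As an illustration, on physical servers with IPMI-capable management boards, a fencing device for each node could be configured roughly as follows. The node name, BMC address, and credentials are placeholders; cloud platforms use their own platform-specific fence agents instead.

    # Repeat for each cluster node, pointing at that node's management board
    pcs stonith create fence_node1 fence_ipmilan \
        pcmk_host_list="node1" ip="192.168.100.11" \
        username="fenceuser" password="fencepass" lanplus="1"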

Note

Using fence_scsi/fence_mpath as the fencing device for HA cluster setups that manage SAP NetWeaver/S/4HANA application server instances is not a supported option. As documented in Support Policies for RHEL High Availability Clusters - fence_scsi and fence_mpath, these fence devices can only be used for cluster setups that manage shared storage which is simultaneously accessed by multiple clients for reading and writing. Since the main purpose of an HA cluster for managing SAP NetWeaver/S/4HANA is to manage the SAP application server instances and not the shared directories that are needed in such environments, using fence_scsi/fence_mpath could result in the SAP instances not being stopped when a node needs to be fenced, because fence_scsi/fence_mpath normally only block access to the storage devices managed by the cluster.

2.4. Quorum

While pacemaker provides some built-in mechanisms to determine whether a cluster is quorate, in some situations it might be desirable to add a separate “quorum device” to the cluster setup to help the cluster determine which side should stay up and running if a “split-brain” situation occurs.

For HA cluster setups that are used for managing SAP application server instances, a quorum device is not required by default, but quorum devices can be added to such setups if needed.

The options for setting up quorum devices vary depending on the configuration. Review the Red Hat documentation on configuring quorum devices for more information.
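
As an illustration, a corosync quorum device using the “net” model could be added to an existing cluster roughly as follows. The quorum device host name is a placeholder; that host must run corosync-qnetd and must not be part of the cluster, and the corosync-qdevice package must be installed on all cluster nodes.

    # On the separate quorum device host:
    pcs qdevice setup model net --enable --start

    # On one of the cluster nodes:
    pcs quorum device add model net host=qdevice-host.example.com algorithm=ffsplit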
