Appendix C. HA Resource Parameters
This appendix provides descriptions of HA resource parameters. You can configure the parameters with Luci, with system-config-cluster, or by editing /etc/cluster/cluster.conf. Table C.1, “HA Resource Summary” lists the resources, their corresponding resource agents, and references to other tables containing parameter descriptions. To understand resource agents in more detail, you can view them in /usr/share/cluster of any cluster node.
For a comprehensive list and description of cluster.conf elements and attributes, refer to the cluster schema at /usr/share/system-config-cluster/misc/cluster.ng, and the annotated schema at /usr/share/doc/system-config-cluster-X.Y.ZZ/cluster_conf.html (for example, /usr/share/doc/system-config-cluster-1.0.57/cluster_conf.html).
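As a point of reference for the parameter tables that follow, the sketch below shows how HA resources are typically declared within the rm section of /etc/cluster/cluster.conf, either globally under resources (and then referenced by services) or inline within a service. The resource and attribute names shown are illustrative assumptions; verify the exact names against the resource agents in /usr/share/cluster and the cluster schema noted above.

```
<rm>
  <failoverdomains>
    <!-- failover domains referenced by services are defined here -->
  </failoverdomains>
  <resources>
    <!-- globally defined resources that services can reference -->
    <ip address="10.0.0.50" monitor_link="1"/>
    <fs name="webdata" device="/dev/sdb1" mountpoint="/var/www" fstype="ext3"/>
  </resources>
  <service name="web_svc" autostart="1" recovery="relocate">
    <!-- a service (resource group) referencing the resources defined above -->
    <ip ref="10.0.0.50"/>
    <fs ref="webdata"/>
  </service>
</rm>
```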
Resource | Resource Agent | Reference to Parameter Description |
---|---|---|
Apache | apache.sh | Table C.2, “Apache Server” |
File System | fs.sh | Table C.3, “File System” |
GFS File System | clusterfs.sh | Table C.4, “GFS” |
IP Address | ip.sh | Table C.5, “IP Address” |
LVM | lvm.sh | Table C.6, “LVM” |
MySQL | mysql.sh | Table C.7, “MySQL” |
NFS Client | nfsclient.sh | Table C.8, “NFS Client” |
NFS Export | nfsexport.sh | Table C.9, “NFS Export” |
NFS Mount | netfs.sh | Table C.10, “NFS Mount” |
Open LDAP | openldap.sh | Table C.11, “Open LDAP” |
Oracle 10g Failover Instance | oracledb.sh | Table C.12, “Oracle 10g Failover Instance” |
Oracle DB Agent | orainstance.sh | Table C.13, “Oracle DB” |
Oracle Listener Agent | oralistener.sh | Table C.14, “Oracle Listener Agent” |
PostgreSQL 8 | postgres-8.sh | Table C.15, “PostgreSQL 8” |
SAP Database | SAPDatabase | Table C.16, “SAP Database” |
SAP Instance | SAPInstance | Table C.17, “SAP Instance” |
Samba | smb.sh | Table C.18, “Samba Service” |
Script | script.sh | Table C.19, “Script” |
Service | service.sh | Table C.20, “Service” |
Sybase ASE | ASEHAagent.sh | Table C.21, “Sybase ASE Failover Instance” |
Tomcat 5 | tomcat-5.sh | Table C.22, “Tomcat 5” |
Virtual Machine | vm.sh | Table C.23, “Virtual Machine” (Note: Luci displays this as a virtual service if the host cluster can support virtual machines.) |
Field | Description |
---|---|
Name | The name of the Apache Service. |
Server Root | The default value is /etc/httpd . |
Config File | Specifies the Apache configuration file. The default value is /etc/httpd/conf. |
httpd Options | Other command line options for httpd . |
Shutdown Wait (seconds) | Specifies the number of seconds to wait for correct end of service shutdown. |
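The following is a minimal sketch of an Apache resource as it might appear in cluster.conf. The attribute names (server_root, config_file, httpd_options, shutdown_wait) and the example values are assumptions based on the apache.sh agent; verify them against /usr/share/cluster/apache.sh.

```
<!-- Apache server resource; attribute names assumed from apache.sh -->
<apache name="example_httpd"
        server_root="/etc/httpd"
        config_file="conf/httpd.conf"
        httpd_options="-DSSL"
        shutdown_wait="30"/>
```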
Field | Description |
---|---|
Name | Specifies a name for the file system resource. |
File system type | If not specified, mount tries to determine the file system type. |
Mount point | Path in file system hierarchy to mount this file system. |
Device | Specifies the device associated with the file system resource. This can be a block device, file system label, or UUID of a file system. |
Options | Mount options; that is, options used when the file system is mounted. These may be file-system specific. Refer to the mount (8) man page for supported mount options. |
File system ID | Note: File System ID is used only by NFS services. When creating a new file system resource, you can leave this field blank. Leaving the field blank causes a file system ID to be assigned automatically after you commit the parameter during configuration. If you need to assign a file system ID explicitly, specify it in this field. |
Force unmount | If enabled, forces the file system to unmount. The default setting is disabled . Force Unmount kills all processes using the mount point to free up the mount when it tries to unmount. |
Reboot host node if unmount fails | If enabled, reboots the node if unmounting this file system fails. The default setting is disabled . |
Check file system before mounting | If enabled, causes fsck to be run on the file system before mounting it. The default setting is disabled . |
Enable NFS daemon and lockd workaround | If your file system is exported via NFS and occasionally fails to unmount (either during shutdown or service relocation), setting this option drops all file system references prior to the unmount operation. Setting this option requires that you also enable the Force unmount option. You should set this option as a last resort only, as this is a hard attempt to unmount a file system. You can enable NFS lock workarounds in a soft attempt to unmount a file system at the level of cluster service configuration, as described in Section 3.9, “Adding a Cluster Service to the Cluster”. |
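A hypothetical file system resource declaration corresponding to the fields above is sketched below. Attribute names such as mountpoint, fstype, force_unmount, and self_fence are assumptions based on the fs.sh agent; confirm them in /usr/share/cluster/fs.sh.

```
<!-- ext3 file system resource; force_unmount maps to "Force unmount",
     self_fence to "Reboot host node if unmount fails" (names assumed) -->
<fs name="appdata"
    device="/dev/mapper/vg_app-lv_data"
    mountpoint="/mnt/appdata"
    fstype="ext3"
    options="noatime"
    force_unmount="1"
    self_fence="0"/>
```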
Field | Description |
---|---|
Name | The name of the file system resource. |
Mount point | The path to which the file system resource is mounted. |
Device | The device file associated with the file system resource. |
File system type | Specify GFS or GFS2. |
Options | Mount options. |
File system ID | Note: File System ID is used only by NFS services. When creating a new GFS resource, you can leave this field blank. Leaving the field blank causes a file system ID to be assigned automatically after you commit the parameter during configuration. If you need to assign a file system ID explicitly, specify it in this field. |
Force unmount | If enabled, forces the file system to unmount. The default setting is disabled . Force Unmount kills all processes using the mount point to free up the mount when it tries to unmount. With GFS resources, the mount point is not unmounted at service tear-down unless Force Unmount is enabled. |
Reboot host node if unmount fails | If enabled and unmounting the file system fails, the node will immediately reboot. Generally, this is used in conjunction with force-unmount support, but it is not required. |
Enable NFS daemon and lockd workaround | If your file system is exported via NFS and occasionally fails to unmount (either during shutdown or service relocation), setting this option drops all file system references prior to the unmount operation. Setting this option requires that you also enable the Force unmount option. You should set this option as a last resort only, as this is a hard attempt to unmount a file system. You can enable NFS lock workarounds in a soft attempt to unmount a file system at the level of cluster service configuration, as described in Section 3.9, “Adding a Cluster Service to the Cluster”. |
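A comparable sketch for a GFS file system resource, using assumed attribute names from the clusterfs.sh agent (verify against /usr/share/cluster/clusterfs.sh):

```
<!-- GFS file system resource; attribute names assumed from clusterfs.sh -->
<clusterfs name="gfsdata"
           device="/dev/vg_cluster/lv_gfs"
           mountpoint="/mnt/gfsdata"
           fstype="gfs"
           force_unmount="1"/>
```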
Field | Description |
---|---|
IP address | The IP address for the resource. This is a virtual IP address. IPv4 and IPv6 addresses are supported, as is NIC link monitoring for each IP address. |
Monitor link | Enabling this causes the status check to fail if the link on the NIC to which this IP address is bound is not present. |
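In cluster.conf, the IP address itself serves as the resource identifier. A minimal sketch, with the attribute name for link monitoring assumed from ip.sh:

```
<!-- virtual IP address resource; monitor_link maps to "Monitor link" (name assumed) -->
<ip address="10.0.0.50" monitor_link="1"/>
```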
Field | Description |
---|---|
Name | A unique name for this LVM resource. |
Volume Group Name | A descriptive name of the volume group being managed. |
Logical Volume Name | Name of the logical volume being managed. This parameter is optional if there is more than one logical volume in the volume group being managed. |
Fence the node if it is unable to clean up LVM tags | Fence the node if it is unable to clean up LVM tags. |
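A minimal sketch of an HA LVM resource; the vg_name and lv_name attribute names are assumptions based on lvm.sh:

```
<!-- HA LVM resource managing a single logical volume (attribute names assumed) -->
<lvm name="app_lvm" vg_name="vg_app" lv_name="lv_data"/>
```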
Field | Description |
---|---|
Name | Specifies a name of the MySQL server resource. |
Config File | Specifies the configuration file. The default value is /etc/my.cnf . |
Listen Address | Specifies an IP address for MySQL server. If an IP address is not provided, the first IP address from the service is taken. |
mysqld Options | Other command line options for mysqld. |
Shutdown Wait (seconds) | Specifies the number of seconds to wait for correct end of service shutdown. |
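A minimal sketch of a MySQL resource; the config_file, listen_address, and shutdown_wait attribute names are assumptions based on mysql.sh:

```
<!-- MySQL server resource; attribute names assumed from mysql.sh -->
<mysql name="example_mysql"
       config_file="/etc/my.cnf"
       listen_address="10.0.0.51"
       shutdown_wait="30"/>
```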
Field | Description |
---|---|
Name | This is a symbolic name of a client used to reference it in the resource tree. This is not the same thing as the Target option. |
Target | This is the server from which you are mounting. It can be specified using a hostname, a wildcard (IP address or hostname based), or a netgroup defining a host or hosts to export to. |
Options | Defines a list of options for this client — for example, additional client access rights. For more information, refer to the exports (5) man page, General Options. |
Allow Recover | Allow recovery of the NFS client. |
Field | Description |
---|---|
Name | Descriptive name of the resource. The NFS Export resource ensures that NFS daemons are running. It is fully reusable; typically, only one NFS Export resource is needed. Note: Name the NFS Export resource so it is clearly distinguished from other NFS resources. |
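NFS Export and NFS Client resources are typically nested under the file system they export, inside a service. The sketch below illustrates that structure; the element and attribute names are assumptions based on nfsexport.sh and nfsclient.sh and should be verified against the cluster schema.

```
<!-- NFS export tree: the export hangs off the file system, and each
     client hangs off the export (element/attribute names assumed) -->
<service name="nfs_svc" autostart="1">
  <fs name="nfsdata" device="/dev/sdc1" mountpoint="/mnt/nfsdata" fstype="ext3" fsid="123">
    <nfsexport name="nfsdata_export">
      <nfsclient name="lan_clients" target="10.0.0.0/24" options="rw,sync"/>
    </nfsexport>
  </fs>
  <ip address="10.0.0.60" monitor_link="1"/>
</service>
```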
Field | Description |
---|---|
Name | Symbolic name for the NFS mount. Note: This resource is required only when a cluster service is configured to be an NFS client. |
Mount point | Path to which the file system resource is mounted. |
Host | NFS server IP address or hostname. |
Export path | NFS Export directory name. |
NFS version | Specifies the NFS protocol version: NFS3 or NFS4. |
Options | Mount options. Specifies a list of mount options. If none are specified, the NFS file system is mounted -o sync . For more information, refer to the nfs (5) man page. |
Force unmount | If Force unmount is enabled, the cluster kills all processes using this file system when the service is stopped. Killing all processes using the file system frees up the file system. Otherwise, the unmount will fail, and the service will be restarted. |
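A minimal sketch of an NFS mount (netfs) resource for a service that acts as an NFS client; the host, export, mountpoint, and fstype attribute names are assumptions based on netfs.sh:

```
<!-- NFS mount resource; attribute names assumed from netfs.sh -->
<netfs name="remote_data"
       host="nfs01.example.com"
       export="/export/data"
       mountpoint="/mnt/remote"
       fstype="nfs"
       options="rw,sync"
       force_unmount="1"/>
```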
Field | Description |
---|---|
Name | Specifies a service name for logging and other purposes. |
Config File | Specifies an absolute path to a configuration file. The default value is /etc/openldap/slapd.conf . |
URL List | The default value is ldap:/// . |
slapd Options | Other command line options for slapd . |
Shutdown Wait (seconds) | Specifies the number of seconds to wait for correct end of service shutdown. |
Field | Description |
---|---|
Instance name (SID) of Oracle instance | Instance name. |
Oracle user name | This is the user name of the Oracle user that the Oracle AS instance runs as. |
Oracle application home directory | This is the Oracle (application, not user) home directory. It is configured when you install Oracle. |
Virtual hostname (optional) | Virtual Hostname matching the installation hostname of Oracle 10g. Note that during the start/stop of an oracledb resource, your hostname is changed temporarily to this hostname. Therefore, you should configure an oracledb resource as part of an exclusive service only. |
Field | Description |
---|---|
Instance name (SID) of Oracle instance | Instance name. |
Oracle user name | This is the user name of the Oracle user that the Oracle instance runs as. |
Oracle application home directory | This is the Oracle (application, not user) home directory. It is configured when you install Oracle. |
List of Oracle listeners (optional, separated by spaces) | List of Oracle listeners that will be started with the database instance. Listener names are separated by whitespace. Defaults to empty, which disables listeners. |
Path to lock file (optional) | Location of the lock file, which is used to check whether the Oracle instance should be running. Defaults to a location under /tmp. |
Field | Description |
---|---|
Listener Name | Listener name. |
Oracle user name | This is the user name of the Oracle user that the Oracle instance runs as. |
Oracle application home directory | This is the Oracle (application, not user) home directory. It is configured when you install Oracle. |
Field | Description |
---|---|
Name | Specifies a service name for logging and other purposes. |
Config File | Defines the absolute path to the configuration file. The default value is /var/lib/pgsql/data/postgresql.conf. |
Postmaster User | The user who runs the database server, because it cannot be run by root. The default value is postgres. |
Postmaster Options | Other command line options for postmaster. |
Startup Wait (seconds) | Specifies the number of seconds to wait for correct end of service startup. |
Shutdown Wait (seconds) | Specifies the number of seconds to wait for correct end of service shutdown. |
Field | Description |
---|---|
SAP Database Name | Specifies a unique SAP system identifier. For example, P01. |
SAP executable directory | Specifies the fully qualified path to sapstartsrv and sapcontrol . |
Database type | Specifies one of the following database types: Oracle, DB6, or ADA. |
Oracle TNS listener name | Specifies Oracle TNS listener name. |
ABAP stack is not installed, only Java stack is installed | If you do not have an ABAP stack installed in the SAP database, enable this parameter. |
Application Level Monitoring | Activates application level monitoring. |
Automatic Startup Recovery | Enable or disable automatic startup recovery. |
Path to Java SDK | Path to Java SDK. |
File name of the JDBC Driver | File name of the JDBC driver. |
Path to a pre-start script | Path to a pre-start script. |
Path to a post-start script | Path to a post-start script. |
Path to a pre-stop script | Path to a pre-stop script. |
Path to a post-stop script | Path to a post-stop script. |
J2EE instance bootstrap directory | The fully qualified path to the J2EE instance bootstrap directory. For example, /usr/sap/P01/J00/j2ee/cluster/bootstrap. |
J2EE security store path | The fully qualified path to the J2EE security store directory. For example, /usr/sap/P01/SYS/global/security/lib/tools. |
Field | Description |
---|---|
SAP Instance Name | The fully qualified SAP instance name. For example, P01_DVEBMGS00_sapp01ci. |
SAP executable directory | The fully qualified path to sapstartsrv and sapcontrol . |
Directory containing the SAP START profile | The fully qualified path to the SAP START profile. |
Name of the SAP START profile | Specifies name of the SAP START profile. |
Number of seconds to wait before checking startup status | Specifies the number of seconds to wait before checking the startup status (do not wait for J2EE-Addin). |
Enable automatic startup recovery | Enable or disable automatic startup recovery. |
Path to a pre-start script | Path to a pre-start script. |
Path to a post-start script | Path to a post-start script. |
Path to a pre-stop script | Path to a pre-stop script. |
Path to a post-stop script | Path to a post-stop script. |
Note
Regarding Table C.18, “Samba Service”, when creating or editing a cluster service, connect a Samba-service resource directly to the service, not to a resource within a service.
Note
Red Hat Enterprise Linux 5 does not support running Clustered Samba in an active/active configuration, in which Samba serves the same shared file system from multiple nodes. Red Hat Enterprise Linux 5 does support running Samba in a cluster in active/passive mode, with failover from one node to the other nodes in a cluster. Note that if failover occurs, locking states are lost and active connections to Samba are severed so that the clients must reconnect.
Field | Description |
---|---|
Name | Specifies the name of the Samba server. |
Workgroup | Specifies a Windows workgroup name or Windows NT domain of the Samba service. |
Field | Description |
---|---|
Name | Specifies a name for the custom user script. The script resource allows a standard LSB-compliant init script to be used to start a clustered service. |
Full path to script file | Enter the path where this custom script is located (for example, /etc/init.d/userscript ). |
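A minimal sketch of a script resource wrapping an LSB-compliant init script, reusing the example path above; the file attribute name is an assumption based on script.sh:

```
<!-- script resource wrapping an LSB init script (attribute name assumed) -->
<script name="userscript" file="/etc/init.d/userscript"/>
```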
Field | Description |
---|---|
Service name | Name of service. This defines a collection of resources, known as a resource group or cluster service. |
Automatically start this service | If enabled, this service (or resource group) is started automatically after the cluster forms a quorum. If this parameter is disabled, this service is not started automatically after the cluster forms a quorum; the service is put into the disabled state. |
Run exclusive | If enabled, this service (resource group) can only be relocated to run on another node exclusively; that is, to run on a node that has no other services running on it. If no nodes are available for a service to run exclusively, the service is not restarted after a failure. Additionally, other services do not automatically relocate to a node running this service as Run exclusive . You can override this option by manual start or relocate operations. |
Failover Domain | Defines lists of cluster members to try in the event that a service fails. For information on configuring a failover domain with Conga, refer to Section 3.7, “Configuring a Failover Domain”. For information on configuring a failover domain with system-config-cluster , refer to Section 5.6, “Configuring a Failover Domain”. |
Recovery policy | Specifies how the cluster recovers the service if it fails. The available policies are Restart (try to restart the service on the current node before relocating it), Relocate (relocate the service to another node), and Disable (leave the service disabled after a failure). |
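The service parameters above correspond to attributes on the service element in cluster.conf. A minimal sketch, with attribute names (autostart, exclusive, domain, recovery) assumed from the cluster schema:

```
<!-- service (resource group) definition; attribute names assumed -->
<service name="web_svc"
         autostart="1"
         exclusive="0"
         domain="prefer_node1"
         recovery="relocate">
  <!-- member resources (ip, fs, script, and so on) are nested here -->
</service>
```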
Field | Description |
---|---|
Instance Name | Specifies the instance name of the Sybase ASE resource. |
ASE server name | The ASE server name that is configured for the HA service. |
SYBASE home directory | The home directory of Sybase products. |
Login file | The full path of the login file that contains the login-password pair. |
Interfaces file | The full path of the interfaces file that is used to start/access the ASE server. |
SYBASE_ASE directory name | The directory name under sybase_home where ASE products are installed. |
SYBASE_OCS directory name | The directory name under sybase_home where OCS products are installed. For example, ASE-15_0. |
Sybase user | The user who can run ASE server. |
Deep probe timeout | The maximum number of seconds to wait for a response from the ASE server before determining that the server is unresponsive during a deep probe. |
Field | Description |
---|---|
Name | Specifies a service name for logging and other purposes. |
Config File | Specifies the absolute path to the configuration file. The default value is /etc/tomcat5/tomcat5.conf . |
Tomcat User | User who runs the Tomcat server. The default value is tomcat. |
Catalina Options | Other command line options for Catalina. |
Catalina Base | Catalina base directory (this differs for each service). The default value is /usr/share/tomcat5. |
Shutdown Wait (seconds) | Specifies the number of seconds to wait for correct end of service shutdown. The default value is 30. |
Important
Regarding Table C.23, “Virtual Machine”, when you configure your cluster with virtual machine resources, you should use the
rgmanager
tools to start and stop the virtual machines. Using xm
or virsh
to start the machine can result in the virtual machine running in more than one place, which can cause data corruption in the virtual machine. For information on configuring your system to reduce the chances of administrators accidentally "double-starting" virtual machines by using both cluster and non-cluster tools, refer to Section 2.12, “Configuring Virtual Machines in a Clustered Environment”.
Note
Virtual machine resources are configured differently than other cluster resources; they are configured as services. To configure a virtual machine resource with luci, at the detailed menu for the cluster (below the menu), click , then click . You can then enter the virtual machine resource parameters. For information on configuring cluster services, refer to Section 3.9, “Adding a Cluster Service to the Cluster”.
Field | Description |
---|---|
Virtual machine name | Specifies the name of the virtual machine. |
Path to VM configuration files | A colon-delimited path specification that xm create searches for the virtual machine configuration file. For example: /etc/xen:/guests/config_files:/var/xen/configs. Important: The path should never directly point to a virtual machine configuration file. |
VM Migration Mapping | Specifies an alternate interface for migration. You can specify this when, for example, the network address used for virtual machine migration on a node differs from the address of the node used for cluster communication. Specifying member:target,member2:target2 indicates that when you migrate a virtual machine from member to member2, you actually migrate to target2; similarly, when you migrate from member2 to member, you migrate using target. |
Migration type | Specifies a migration type of live or pause . The default setting is live . |
Hypervisor | Hypervisor URI (automatic, KVM, or Xen) |
Automatically start this service | If enabled, this virtual machine is started automatically after the cluster forms a quorum. If this parameter is disabled, this virtual machine service is not started automatically after the cluster forms a quorum; the virtual machine is put into the disabled state. |
Run exclusive | If enabled, this virtual machine can only be relocated to run on another node exclusively; that is, to run on a node that has no other virtual machines running on it. If no nodes are available for a virtual machine to run exclusively, the virtual machine is not restarted after a failure. Additionally, other virtual machines do not automatically relocate to a node running this virtual machine as Run exclusive . You can override this option by manual start or relocate operations. |
Failover Domain | Defines lists of cluster members to try in the event that a virtual machine fails. |
Recovery policy | Specifies how the cluster recovers the virtual machine if it fails. The available policies are Restart (try to restart the virtual machine on the current node before relocating it), Relocate (relocate the virtual machine to another node), and Disable (leave the virtual machine disabled after a failure). |
Maximum number of restart failures before relocating | Maximum number of restarts for an independent subtree before giving up. |
Length of time in seconds after which to forget a restart | Amount of time before a failure is forgotten for an independent subtree. |
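A minimal sketch of a virtual machine resource, reusing the example path and migration mapping above; the attribute names (path, migration_mapping, migrate, max_restarts, restart_expire_time) are assumptions based on vm.sh and should be verified against /usr/share/cluster/vm.sh:

```
<!-- virtual machine resource, configured as a service (attribute names assumed) -->
<vm name="guest1"
    path="/etc/xen:/guests/config_files"
    migration_mapping="member:target,member2:target2"
    migrate="live"
    autostart="1"
    exclusive="0"
    recovery="restart"
    max_restarts="2"
    restart_expire_time="600"/>
```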