6.2. Pacemaker commands


6.2.1. Starting and stopping the cluster

To start the cluster on all nodes, execute the following command:

# pcs cluster start --all

After a reboot, the cluster starts automatically only if the service is enabled. The following command shows whether the cluster is running and whether automatic startup of the daemons is enabled:

# pcs cluster status

Automatic cluster startup can be enabled with:

# pcs cluster enable --all

Other options are:

  • Stopping the cluster.
  • Putting a node into standby mode (see the sketch below).
  • Putting the cluster into maintenance-mode.

For more details, check pcs cluster help:

# pcs cluster stop --all
# pcs cluster help
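
For example, a single node can be taken out of operation and brought back with the standby commands (a minimal sketch; az1n1 is one of the node names used in the examples in this section):

# pcs node standby az1n1
# pcs node unstandby az1n1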

6.2.2. Putting the cluster into maintenance-mode

If you want to make changes and avoid interference from the pacemaker cluster, you can "freeze" the whole cluster by putting it into maintenance-mode, or you can put only the SAPHana resources into maintenance mode:

# pcs property set maintenance-mode=true
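
Alternatively, you can freeze only the SAPHana resource instead of the whole cluster (a sketch using the clone resource name from the examples in this section; see also Section 6.2.14):

# pcs resource meta SAPHana_RH2_02-clone maintenance=true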

A simple way to verify maintenance-mode is to check whether the resources are unmanaged. While the cluster is in maintenance-mode, refreshing the cluster resources detects the resource status but does not update it:

# pcs resource refresh

This shows whether anything is incorrect and would cause remediation actions by the cluster as soon as maintenance-mode is removed.

Remove maintenance-mode by running:

# pcs property set maintenance-mode=false

The cluster now continues working. If anything was misconfigured, it reacts immediately.
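
It is also advisable to refresh the resources after leaving maintenance-mode, as noted in Section 6.2.14:

# pcs resource refresh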

6.2.3. Checking the cluster status

Here are several ways to check the cluster status:

  • Check if the cluster is running:

    # pcs cluster status
  • Check the cluster and all resources:

    # pcs status
  • Check the cluster, all resources, and all node attributes:

    # pcs status --full
  • Check the resources only:

    # pcs resource status --full
  • Check the stonith history:

    # pcs stonith history
  • Check the location constraints:

    # pcs constraint location
Note

Fencing must be configured and tested. To get a solution that is as automated as possible, the cluster must be constantly enabled, which then makes the cluster start automatically after a reboot. In a production environment, disabling the restart allows manual intervention, for instance after a crash. Please also check the daemon status.
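
A quick way to check the daemon status and the autostart setting outside of pcs is to query systemd directly (a sketch; these are the unit names shipped with the RHEL HA Add-On):

# systemctl is-active corosync pacemaker pcsd
# systemctl is-enabled corosync pacemaker pcsd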

Example:

# pcs status --full
Cluster name: cluster1
Status of pacemakerd: 'Pacemaker is running' (last updated 2023-06-22 17:56:01 +02:00)
Cluster Summary:
  * Stack: corosync
  * Current DC: az2n1 (2) (version 2.1.5-7.el9-a3f44794f94) - partition with quorum
  * Last updated: Thu Jun 22 17:56:01 2023
  * Last change:  Thu Jun 22 17:53:34 2023 by root via crm_attribute on az1n1
  * 2 nodes configured
  * 6 resource instances configured
Node List:
  * Node az1n1 (1): online, feature set 3.16.2
  * Node az2n1 (2): online, feature set 3.16.2
Full List of Resources:
  * h7fence	(stonith:fence_rhevm):	 Started az2n1
  * Clone Set: SAPHanaTopology_RH2_02-clone [SAPHanaTopology_RH2_02]:
    * SAPHanaTopology_RH2_02	(ocf:heartbeat:SAPHanaTopology):	 Started az1n1
    * SAPHanaTopology_RH2_02	(ocf:heartbeat:SAPHanaTopology):	 Started az2n1
  * Clone Set: SAPHana_RH2_02-clone [SAPHana_RH2_02] (promotable):
    * SAPHana_RH2_02	(ocf:heartbeat:SAPHana):	 Promoted az1n1
    * SAPHana_RH2_02	(ocf:heartbeat:SAPHana):	 Unpromoted az2n1
  * vip_RH2_02_MASTER	(ocf:heartbeat:IPaddr2):	 Started az1n1
Node Attributes:
  * Node: az1n1 (1):
    * hana_rh2_clone_state            	: PROMOTED
    * hana_rh2_op_mode                	: logreplay
    * hana_rh2_remoteHost             	: az2n1
    * hana_rh2_roles                  	: 4:P:master1:master:worker:master
    * hana_rh2_site                   	: DC1
    * hana_rh2_sra                    	: -
    * hana_rh2_srah                   	: -
    * hana_rh2_srmode                 	: syncmem
    * hana_rh2_sync_state             	: PRIM
    * hana_rh2_version                	: 2.00.059.02
    * hana_rh2_vhost                  	: az1n1
    * lpa_rh2_lpt                     	: 1687449214
    * master-SAPHana_RH2_02           	: 150
  * Node: az2n1 (2):
    * hana_rh2_clone_state            	: DEMOTED
    * hana_rh2_op_mode                	: logreplay
    * hana_rh2_remoteHost             	: az1n1
    * hana_rh2_roles                  	: 4:S:master1:master:worker:master
    * hana_rh2_site                   	: DC2
    * hana_rh2_sra                    	: -
    * hana_rh2_srah                   	: -
    * hana_rh2_srmode                 	: syncmem
    * hana_rh2_sync_state             	: SOK
    * hana_rh2_version                	: 2.00.059.02
    * hana_rh2_vhost                  	: az2n1
    * lpa_rh2_lpt                     	: 30
    * master-SAPHana_RH2_02           	: 100
Migration Summary:
Tickets:
PCSD Status:
  az1n1: Online
  az2n1: Online
Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

6.2.4. Checking resource states

Use pcs resource to check the status of all resources. This prints the list of resources together with their current states.

Example:

# pcs resource
  * rsc_ip_MASTER1	(ocf:heartbeat:IPaddr2):	 Started az3n1
  * rsc_ip_SLAVE1	(ocf:heartbeat:IPaddr2):	 Started az3n1
  * Clone Set: rsc_SAPHanaTopology_RH1_10-clone [rsc_SAPHanaTopology_RH1_10]:
    * rsc_SAPHanaTopology_RH1_10	(ocf:heartbeat:SAPHanaTopology):	 Started az2n1 (Monitoring)
    * rsc_SAPHanaTopology_RH1_10	(ocf:heartbeat:SAPHanaTopology):	 Started az1n1 (Monitoring)
    * rsc_SAPHanaTopology_RH1_10	(ocf:heartbeat:SAPHanaTopology):	 Started az1n2 (Monitoring)
    * rsc_SAPHanaTopology_RH1_10	(ocf:heartbeat:SAPHanaTopology):	 Started az2n2 (Monitoring)
    * Stopped: [ az3n1 ]
…

6.2.5. Checking resource configuration

The following displays the current resource configuration:

# pcs resource config
Resource: rsc_ip_MASTER1 (class=ocf provider=heartbeat type=IPaddr2)
  Attributes: rsc_ip_MASTER1-instance_attributes
    ip=192.168.10.120
  Operations:
    monitor: rsc_ip_MASTER1-monitor-interval-10s
      interval=10s timeout=20s
    start: rsc_ip_MASTER1-start-interval-0s
      interval=0s timeout=20s
    stop: rsc_ip_MASTER1-stop-interval-0s
      interval=0s timeout=20s
Resource: rsc_ip_SLAVE1 (class=ocf provider=heartbeat type=IPaddr2)
  Attributes: rsc_ip_SLAVE1-instance_attributes
    ip=192.168.10.130
  Meta Attributes: rsc_ip_SLAVE1-meta_attributes
    is-enabled=false
  Operations:
    monitor: rsc_ip_SLAVE1-monitor-interval-10s
      interval=10s timeout=20s
    start: rsc_ip_SLAVE1-start-interval-0s
      interval=0s timeout=20s
    stop: rsc_ip_SLAVE1-stop-interval-0s
      interval=0s timeout=20s
Clone: rsc_SAPHanaTopology_RH1_10-clone
  Meta Attributes: rsc_SAPHanaTopology_RH1_10-clone-meta_attributes
    clone-max=4
    clone-node-max=1
    interleave=true
  Resource: rsc_SAPHanaTopology_RH1_10 (class=ocf provider=heartbeat type=SAPHanaTopology)
    Attributes: rsc_SAPHanaTopology_RH1_10-instance_attributes
      InstanceNumber=10
      SID=RH1
    Operations:
      methods: rsc_SAPHanaTopology_RH1_10-methods-interval-0s
        interval=0s timeout=5
      monitor: rsc_SAPHanaTopology_RH1_10-monitor-interval-10
        interval=10 timeout=600
      reload: rsc_SAPHanaTopology_RH1_10-reload-interval-0s
        interval=0s timeout=5
      start: rsc_SAPHanaTopology_RH1_10-start-interval-0s
        interval=0s timeout=600
      stop: rsc_SAPHanaTopology_RH1_10-stop-interval-0s
        interval=0s timeout=300
Clone: rsc_SAPHanaController_RH1_10-clone
  Meta Attributes: rsc_SAPHanaController_RH1_10-clone-meta_attributes
    clone-max=4
    clone-node-max=1
    interleave=true
    promotable=true
  Resource: rsc_SAPHanaController_RH1_10 (class=ocf provider=heartbeat type=SAPHanaController)
    Attributes: rsc_SAPHanaController_RH1_10-instance_attributes
      AUTOMATED_REGISTER=true
      DUPLICATE_PRIMARY_TIMEOUT=7200
      InstanceNumber=10
      PREFER_SITE_TAKEOVER=true
      SID=RH1
    Meta Attributes: rsc_SAPHanaController_RH1_10-meta_attributes
      priority=100
    Operations:
      demote: rsc_SAPHanaController_RH1_10-demote-interval-0s
        interval=0s timeout=320
      methods: rsc_SAPHanaController_RH1_10-methods-interval-0s
        interval=0s timeout=5
      monitor: rsc_SAPHanaController_RH1_10-monitor-interval-59
        interval=59 timeout=700 role=Promoted
      monitor: rsc_SAPHanaController_RH1_10-monitor-interval-61
        interval=61 timeout=700 role=Unpromoted
      promote: rsc_SAPHanaController_RH1_10-promote-interval-0s
        interval=0s timeout=900
      reload: rsc_SAPHanaController_RH1_10-reload-interval-0s
        interval=0s timeout=5
      start: rsc_SAPHanaController_RH1_10-start-interval-0s
        interval=0s timeout=3600
      stop: rsc_SAPHanaController_RH1_10-stop-interval-0s
        interval=0s timeout=3600

This lists all parameters that were used to configure the installed resource agents.
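
The same command also accepts a resource name if you want the configuration of a single resource only (a sketch using a resource name from the example above):

# pcs resource config rsc_SAPHanaController_RH1_10-clone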

6.2.6. SAPHana resource option AUTOMATED_REGISTER=true

If this option is used in the SAPHana resource, pacemaker will automatically re-register the secondary database.

It is recommended to use this option for the first tests. If you use AUTOMATED_REGISTER=false, the administrator needs to re-register the secondary node manually.
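
To check the current setting, or to change it later, you can query and update the resource (a sketch; the resource names are taken from the examples in this section):

# pcs resource config SAPHana_RH2_02-clone | grep AUTOMATED_REGISTER
# pcs resource update SAPHana_RH2_02 AUTOMATED_REGISTER=true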

6.2.7. Handling resources

There are several options for managing resources. For more information, check the available help:

# pcs resource help

List the resource agents that are used:

# pcs resource config | grep "type=" | awk -F"type=" '{ print $2 }' | sed -e "s/)//g"

Example output:

IPaddr2
SAPHanaTopology
SAPHanaController

Display the description and configuration parameters of a specific resource agent:

# pcs resource describe <resource agent>

Example (without output):

# pcs resource describe IPaddr2

Example of the resource agent IPaddr2 (with output):

Assumed agent name 'ocf:heartbeat:IPaddr2' (deduced from 'IPaddr2')
ocf:heartbeat:IPaddr2 - Manages virtual IPv4 and IPv6 addresses (Linux specific version)

This Linux-specific resource manages IP alias IP addresses. It can add an IP alias, or remove one. In
addition, it can implement Cluster Alias IP functionality if invoked as a clone resource.  If used as a
clone, "shared address with a trivial, stateless (autonomous) load-balancing/mutual exclusion on
ingress" mode gets applied (as opposed to "assume resource uniqueness" mode otherwise). For that, Linux
firewall (kernel and userspace) is assumed, and since recent distributions are ambivalent in plain
"iptables" command to particular back-end resolution, "iptables-legacy" (when present) gets prioritized
so as to avoid incompatibilities (note that respective ipt_CLUSTERIP firewall extension in use here is,
at the same time, marked deprecated, yet said "legacy" layer can make it workable, literally, to this
day) with "netfilter" one (as in "iptables-nft"). In that case, you should explicitly set clone-node-max
>= 2, and/or clone-max < number of nodes. In case of node failure, clone instances need to be re-
allocated on surviving nodes. This would not be possible if there is already an instance on those nodes,
and clone-node-max=1 (which is the default).  When the specified IP address gets assigned to a
respective interface, the resource agent sends unsolicited ARP (Address Resolution Protocol, IPv4) or NA
(Neighbor Advertisement, IPv6) packets to inform neighboring machines about the change. This
functionality is controlled for both IPv4 and IPv6 by shared 'arp_*' parameters.

Resource options:
  ip (required) (unique): The IPv4 (dotted quad notation) or IPv6 address (colon hexadecimal notation)
      example IPv4 "192.168.1.1". example IPv6 "2001:db8:DC28:0:0:FC57:D4C8:1FFF".
  nic: The base network interface on which the IP address will be brought online.  If left empty, the
      script will try and determine this from the routing table.  Do NOT specify an alias interface in
      the form eth0:1 or anything here; rather, specify the base interface only. If you want a label,
      see the iflabel parameter.  Prerequisite:  There must be at least one static IP address, which is
      not managed by the cluster, assigned to the network interface. If you can not assign any static IP
      address on the interface, modify this kernel parameter:  sysctl -w
      net.ipv4.conf.all.promote_secondaries=1 # (or per device)
  cidr_netmask: The netmask for the interface in CIDR format (e.g., 24 and not 255.255.255.0)  If
      unspecified, the script will also try to determine this from the routing table.
  broadcast: Broadcast address associated with the IP. It is possible to use the special symbols '+' and
      '-' instead of the broadcast address. In this case, the broadcast address is derived by
      setting/resetting the host bits of the interface prefix.
  iflabel: You can specify an additional label for your IP address here. This label is appended to your
      interface name.  The kernel allows alphanumeric labels up to a maximum length of 15 characters
      including the interface name and colon (e.g. eth0:foobar1234)  A label can be specified in nic
      parameter but it is deprecated. If a label is specified in nic name, this parameter has no effect.
  lvs_support: Enable support for LVS Direct Routing configurations. In case a IP address is stopped,
      only move it to the loopback device to allow the local node to continue to service requests, but
      no longer advertise it on the network.  Notes for IPv6: It is not necessary to enable this option
      on IPv6. Instead, enable 'lvs_ipv6_addrlabel' option for LVS-DR usage on IPv6.
  lvs_ipv6_addrlabel: Enable adding IPv6 address label so IPv6 traffic originating from the address's
      interface does not use this address as the source. This is necessary for LVS-DR health checks to
      realservers to work. Without it, the most recently added IPv6 address (probably the address added
      by IPaddr2) will be used as the source address for IPv6 traffic from that interface and since that
      address exists on loopback on the realservers, the realserver response to pings/connections will
      never leave its loopback. See RFC3484 for the detail of the source address selection.  See also
      'lvs_ipv6_addrlabel_value' parameter.
  lvs_ipv6_addrlabel_value: Specify IPv6 address label value used when 'lvs_ipv6_addrlabel' is enabled.
      The value should be an unused label in the policy table which is shown by 'ip addrlabel list'
      command. You would rarely need to change this parameter.
  mac: Set the interface MAC address explicitly. Currently only used in case of the Cluster IP Alias.
      Leave empty to chose automatically.
  clusterip_hash: Specify the hashing algorithm used for the Cluster IP functionality.
  unique_clone_address: If true, add the clone ID to the supplied value of IP to create a unique address
      to manage
  arp_interval: Specify the interval between unsolicited ARP (IPv4) or NA (IPv6) packets in
      milliseconds.  This parameter is deprecated and used for the backward compatibility only. It is
      effective only for the send_arp binary which is built with libnet, and send_ua for IPv6. It has no
      effect for other arp_sender.
  arp_count: Number of unsolicited ARP (IPv4) or NA (IPv6) packets to send at resource initialization.
  arp_count_refresh: For IPv4, number of unsolicited ARP packets to send during resource monitoring.
      Doing so helps mitigate issues of stuck ARP caches resulting from split-brain situations.
  arp_bg: Whether or not to send the ARP (IPv4) or NA (IPv6) packets in the background. The default is
      true for IPv4 and false for IPv6.
  arp_sender: For IPv4, the program to send ARP packets with on start. Available options are:  -
      send_arp: default  - ipoibarping: default for infiniband interfaces if ipoibarping is available  -
      iputils_arping: use arping in iputils package  - libnet_arping: use another variant of arping
      based on libnet
  send_arp_opts: For IPv4, extra options to pass to the arp_sender program. Available options are vary
      depending on which arp_sender is used.  A typical use case is specifying '-A' for iputils_arping
      to use ARP REPLY instead of ARP REQUEST as Gratuitous ARPs.
  flush_routes: Flush the routing table on stop. This is for applications which use the cluster IP
      address and which run on the same physical host that the IP address lives on. The Linux kernel may
      force that application to take a shortcut to the local loopback interface, instead of the
      interface the address is really bound to. Under those circumstances, an application may, somewhat
      unexpectedly, continue to use connections for some time even after the IP address is deconfigured.
      Set this parameter in order to immediately disable said shortcut when the IP address goes away.
  run_arping: For IPv4, whether or not to run arping for collision detection check.
  nodad: For IPv6, do not perform Duplicate Address Detection when adding the address.
  noprefixroute: Use noprefixroute flag (see 'man ip-address').
  preferred_lft: For IPv6, set the preferred lifetime of the IP address. This can be used to ensure that
      the created IP address will not be used as a source address for routing. Expects a value as
      specified in section 5.5.4 of RFC 4862.
  network_namespace: Specifies the network namespace to operate within. The namespace must already
      exist, and the interface to be used must be within the namespace.

Default operations:
  start:
    interval=0s
    timeout=20s
  stop:
    interval=0s
    timeout=20s
  monitor:
    interval=10s
    timeout=20s

If the cluster is stopped, all resources are stopped as well. If the cluster is put into maintenance-mode, all resources keep their current state but are no longer monitored or managed.

6.2.8. Handling the cluster property maintenance-mode

List all defined properties:

[root@az1n1]# pcs property
Cluster Properties: cib-bootstrap-options
  cluster-infrastructure=corosync
  cluster-name=cluster1
  dc-version=2.1.7-5.2.el9_4-0f7f88312
  have-watchdog=false
  last-lrm-refresh=1747914571
  maintenance-mode=true
  stonith-enabled=false
  stonith-timeout=900

To reconfigure the database, the cluster must be instructed to ignore any changes until the configuration is finished. You can put the cluster into maintenance-mode with:

# pcs property set maintenance-mode=true

Check the maintenance-mode:

  * rsc_ip_MASTER1	(ocf:heartbeat:IPaddr2):	 Started az1n3 (maintenance)
  * rsc_ip_SLAVE1	(ocf:heartbeat:IPaddr2):	 Started az1n3 (maintenance)
  * Clone Set: rsc_SAPHanaTopology_RH1_10-clone [rsc_SAPHanaTopology_RH1_10] (maintenance):
    * rsc_SAPHanaTopology_RH1_10	(ocf:heartbeat:SAPHanaTopology):	 Started az2n1 (maintenance)
    * rsc_SAPHanaTopology_RH1_10	(ocf:heartbeat:SAPHanaTopology):	 Started az1n1 (maintenance)
    * rsc_SAPHanaTopology_RH1_10	(ocf:heartbeat:SAPHanaTopology):	 Started az1n2 (maintenance)
    * rsc_SAPHanaTopology_RH1_10	(ocf:heartbeat:SAPHanaTopology):	 Started az2n2 (maintenance)
    * Stopped: [ az1n3 ]
  * Clone Set: rsc_SAPHanaFilesystem_RH1_10-clone [rsc_SAPHanaFilesystem_RH1_10] (maintenance):
    * rsc_SAPHanaFilesystem_RH1_10	(ocf:heartbeat:SAPHanaFilesystem):	 Started az2n1 (maintenance)
    * rsc_SAPHanaFilesystem_RH1_10	(ocf:heartbeat:SAPHanaFilesystem):	 Started az1n1 (maintenance)
    * rsc_SAPHanaFilesystem_RH1_10	(ocf:heartbeat:SAPHanaFilesystem):	 Started az1n2 (maintenance)
    * rsc_SAPHanaFilesystem_RH1_10	(ocf:heartbeat:SAPHanaFilesystem):	 Started az2n2 (maintenance)
    * Stopped: [ az1n3 ]
  * Clone Set: rsc_SAPHanaController_RH1_10-clone [rsc_SAPHanaController_RH1_10] (promotable, maintenance):
    * rsc_SAPHanaController_RH1_10	(ocf:heartbeat:SAPHanaController):	 Unpromoted az2n1 (maintenance)
    * rsc_SAPHanaController_RH1_10	(ocf:heartbeat:SAPHanaController):	 Unpromoted az1n2 (maintenance)
    * rsc_SAPHanaController_RH1_10	(ocf:heartbeat:SAPHanaController):	 Unpromoted az2n2 (maintenance)
    * Stopped: [ az1n1 az1n3 ]

Verify that all resources are "unmanaged":

[root@az1n1]# pcs status
Cluster name: cluster1
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: az2n1 (version 2.1.7-5.2.el9_4-0f7f88312) - partition with quorum
  * Last updated: Fri May 30 18:28:48 2025 on az2n1
  * Last change:  Fri May 30 18:20:26 2025 by root via root on az2n1
  * 5 nodes configured
  * 16 resource instances configured

              *** Resource management is DISABLED ***
  The cluster will not attempt to start, stop or recover services

Node List:
  * Online: [ az1n1 az1n2 az3n1 az2n1 az2n2 ]

Full List of Resources:
  * R9_fence_out	(stonith:fence_rhevm):	 Started az1n1 (maintenance)
  * rsc_ip_MASTER1	(ocf:heartbeat:IPaddr2):	 Started az3n1 (maintenance)
  * rsc_ip_SLAVE1	(ocf:heartbeat:IPaddr2):	 Started az3n1 (maintenance)
  * Clone Set: rsc_SAPHanaTopology_RH1_10-clone [rsc_SAPHanaTopology_RH1_10] (maintenance):
    * rsc_SAPHanaTopology_RH1_10	(ocf:heartbeat:SAPHanaTopology):	 Started az2n1 (maintenance)
    * rsc_SAPHanaTopology_RH1_10	(ocf:heartbeat:SAPHanaTopology):	 Started az1n1 (maintenance)
    * rsc_SAPHanaTopology_RH1_10	(ocf:heartbeat:SAPHanaTopology):	 Started az1n2 (maintenance)
    * rsc_SAPHanaTopology_RH1_10	(ocf:heartbeat:SAPHanaTopology):	 Started az2n2 (maintenance)
    * Stopped: [ az3n1 ]
  * Clone Set: rsc_SAPHanaFilesystem_RH1_10-clone [rsc_SAPHanaFilesystem_RH1_10] (maintenance):
    * rsc_SAPHanaFilesystem_RH1_10	(ocf:heartbeat:SAPHanaFilesystem):	 Started az2n1 (maintenance)
    * rsc_SAPHanaFilesystem_RH1_10	(ocf:heartbeat:SAPHanaFilesystem):	 Started az1n1 (maintenance)
    * rsc_SAPHanaFilesystem_RH1_10	(ocf:heartbeat:SAPHanaFilesystem):	 Started az1n2 (maintenance)
    * rsc_SAPHanaFilesystem_RH1_10	(ocf:heartbeat:SAPHanaFilesystem):	 Started az2n2 (maintenance)
    * Stopped: [ az3n1 ]
  * Clone Set: rsc_SAPHanaController_RH1_10-clone [rsc_SAPHanaController_RH1_10] (promotable, maintenance):
    * rsc_SAPHanaController_RH1_10	(ocf:heartbeat:SAPHanaController):	 Unpromoted az2n1 (maintenance)
    * rsc_SAPHanaController_RH1_10	(ocf:heartbeat:SAPHanaController):	 Unpromoted az1n2 (maintenance)
    * rsc_SAPHanaController_RH1_10	(ocf:heartbeat:SAPHanaController):	 Unpromoted az2n2 (maintenance)
    * Stopped: [ az1n1 az3n1 ]

Failed Resource Actions:
  * rsc_SAPHanaController_RH1_10 start on az1n1 returned 'error' at Fri May 30 17:48:27 2025 after 19.098s

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

The resources will switch back to managed if you unset the maintenance-mode:

# pcs property set maintenance-mode=false

6.2.9. Failover of the SAPHana resource using move

A simple example of how to fail over the SAP HANA database is to use the pcs resource move command. You need to use the clone resource name and move the resource as shown below:

# pcs resource move <SAPHana-clone-resource>

In this example, the clone resource is SAPHana_RH2_02-clone:

[root@az1n1]# pcs resource
  * rsc_ip_MASTER1	(ocf:heartbeat:IPaddr2):	 Started az3n1 (maintenance)
  * rsc_ip_SLAVE1	(ocf:heartbeat:IPaddr2):	 Started az3n1 (maintenance)
  * Clone Set: rsc_SAPHanaTopology_RH1_10-clone [rsc_SAPHanaTopology_RH1_10] (maintenance):
    * rsc_SAPHanaTopology_RH1_10	(ocf:heartbeat:SAPHanaTopology):	 Started az2n1 (maintenance)
    * rsc_SAPHanaTopology_RH1_10	(ocf:heartbeat:SAPHanaTopology):	 Started az1n1 (maintenance)
    * rsc_SAPHanaTopology_RH1_10	(ocf:heartbeat:SAPHanaTopology):	 Started az1n2 (maintenance)
    * rsc_SAPHanaTopology_RH1_10	(ocf:heartbeat:SAPHanaTopology):	 Started az2n2 (maintenance)
    * Stopped: [ az3n1 ]
  * Clone Set: rsc_SAPHanaFilesystem_RH1_10-clone [rsc_SAPHanaFilesystem_RH1_10] (maintenance):
    * rsc_SAPHanaFilesystem_RH1_10	(ocf:heartbeat:SAPHanaFilesystem):	 Started az2n1 (maintenance)
    * rsc_SAPHanaFilesystem_RH1_10	(ocf:heartbeat:SAPHanaFilesystem):	 Started az1n1 (maintenance)
    * rsc_SAPHanaFilesystem_RH1_10	(ocf:heartbeat:SAPHanaFilesystem):	 Started az1n2 (maintenance)
    * rsc_SAPHanaFilesystem_RH1_10	(ocf:heartbeat:SAPHanaFilesystem):	 Started az2n2 (maintenance)
    * Stopped: [ az3n1 ]
  * Clone Set: rsc_SAPHanaController_RH1_10-clone [rsc_SAPHanaController_RH1_10] (promotable, maintenance):
    * rsc_SAPHanaController_RH1_10	(ocf:heartbeat:SAPHanaController):	 Unpromoted az2n1 (maintenance)
    * rsc_SAPHanaController_RH1_10	(ocf:heartbeat:SAPHanaController):	 Unpromoted az1n2 (maintenance)
    * rsc_SAPHanaController_RH1_10	(ocf:heartbeat:SAPHanaController):	 Unpromoted az2n2 (maintenance)
    * Stopped: [ az1n1 az3n1 ]

Move the resource:

# pcs resource move SAPHana_RH2_02-clone
Location constraint to move resource 'SAPHana_RH2_02-clone' has been created
Waiting for the cluster to apply configuration changes...
Location constraint created to move resource 'SAPHana_RH2_02-clone' has been removed
Waiting for the cluster to apply configuration changes...
resource 'SAPHana_RH2_02-clone' is promoted on node 'az2n1'; unpromoted on node 'az1n1'

Check if there are remaining constraints:

# pcs constraint location

Clear the resource to remove the location constraints created during the failover. Example:

[root@az1n1]# pcs resource clear SAPHana_RH2_02-clone

检查 "Migration Summary" 中是否存在剩余的警告或条目:

# pcs status --full

Check the stonith history:

# pcs stonith history

Clean up the stonith history, if desired:

# pcs stonith history cleanup

If you are using a pacemaker version older than 2.1.5, refer to Is there a way to manage constraints when running pcs resource move? and check for remaining constraints.

6.2.10. Monitoring failover and sync state

All pacemaker activity is logged in the /var/log/messages file on the cluster nodes. Since there are many other messages, it is sometimes hard to read the messages related to the SAP resource agent. You can configure a command alias that filters out only the messages related to the SAP resource agent.

Example alias tmsl:

# alias tmsl='tail -1000f /var/log/messages | egrep -s "Setting master-rsc_SAPHana_${SAPSYSTEMNAME}_HDB${TINSTANCE}|sr_register|WAITING4LPA|PROMOTED|DEMOTED|UNDEFINED|master_walk|SWAIT|WaitforStopped|FAILED|LPT"'
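
The alias expands ${SAPSYSTEMNAME} and ${TINSTANCE} when it is executed, so both variables must be set in the shell, for example (values matching the RH2/02 system used in this section):

# export SAPSYSTEMNAME=RH2
# export TINSTANCE=02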

Example output of tmsl:

[root@az1n1]# tmsl
Jun 22 13:59:54 az1n1 SAPHana(SAPHana_RH2_02)[907482]: INFO: DEC: Finally get_SRHOOK()=SOK
Jun 22 13:59:55 az1n1 SAPHana(SAPHana_RH2_02)[907482]: INFO: DEC: secondary with sync status SOK ==> possible takeover node
Jun 22 13:59:55 az1n1 SAPHana(SAPHana_RH2_02)[907482]: INFO: DEC: hana_rh2_site_srHook_DC1=SWAIT
Jun 22 13:59:55 az1n1 SAPHana(SAPHana_RH2_02)[907482]: INFO: DEC: hana_rh2_site_srHook_DC1 is empty or SWAIT. Take polling attribute: hana_rh2_sync_state=SOK
Jun 22 13:59:55 az1n1 SAPHana(SAPHana_RH2_02)[907482]: INFO: DEC: Finally get_SRHOOK()=SOK
Jun 22 13:59:55 az1n1 SAPHana(SAPHana_RH2_02)[907482]: INFO: DEC: saphana_monitor_secondary: scoring_crm_master(4:S:master1:master:worker:master,SOK)
Jun 22 13:59:55 az1n1 SAPHana(SAPHana_RH2_02)[907482]: INFO: DEC: scoring_crm_master: sync(SOK) is matching syncPattern (SOK)
Jun 22 14:04:06 az1n1 SAPHana(SAPHana_RH2_02)[914625]: INFO: DEC: hana_rh2_site_srHook_DC1=SWAIT
Jun 22 14:04:06 az1n1 SAPHana(SAPHana_RH2_02)[914625]: INFO: DEC: hana_rh2_site_srHook_DC1 is empty or SWAIT. Take polling attribute: hana_rh2_sync_state=SOK
Jun 22 14:04:06 az1n1 SAPHana(SAPHana_RH2_02)[914625]: INFO: DEC: Finally get_SRHOOK()=SOK
Jun 22 14:04:09 az1n1 SAPHana(SAPHana_RH2_02)[914625]: INFO: DEC: hana_rh2_site_srHook_DC1=SWAIT
Jun 22 14:04:09 az1n1 SAPHana(SAPHana_RH2_02)[914625]: INFO: DEC: hana_rh2_site_srHook_DC1 is empty or SWAIT. Take polling attribute: hana_rh2_sync_state=SOK
Jun 22 14:04:09 az1n1 SAPHana(SAPHana_RH2_02)[914625]: INFO: DEC: Finally get_SRHOOK()=SOK
Jun 22 14:04:09 az1n1 SAPHana(SAPHana_RH2_02)[914625]: INFO: DEC: secondary with sync status SOK ==> possible takeover node
Jun 22 14:04:09 az1n1 SAPHana(SAPHana_RH2_02)[914625]: INFO: DEC: hana_rh2_site_srHook_DC1=SWAIT
Jun 22 14:04:09 az1n1 SAPHana(SAPHana_RH2_02)[914625]: INFO: DEC: hana_rh2_site_srHook_DC1 is empty or SWAIT. Take polling attribute: hana_rh2_sync_state=SOK
Jun 22 14:04:09 az1n1 SAPHana(SAPHana_RH2_02)[914625]: INFO: DEC: Finally get_SRHOOK()=SOK
Jun 22 14:04:09 az1n1 SAPHana(SAPHana_RH2_02)[914625]: INFO: DEC: saphana_monitor_secondary: scoring_crm_master(4:S:master1:master:worker:master,SOK)
Jun 22 14:04:09 az1n1 SAPHana(SAPHana_RH2_02)[914625]: INFO: DEC: scoring_crm_master: sync(SOK) is matching syncPattern (SOK)
Jun 22 14:08:21 az1n1 SAPHana(SAPHana_RH2_02)[922136]: INFO: DEC: hana_rh2_site_srHook_DC1=SWAIT
Jun 22 14:08:21 az1n1 SAPHana(SAPHana_RH2_02)[922136]: INFO: DEC: hana_rh2_site_srHook_DC1 is empty or SWAIT. Take polling attribute: hana_rh2_sync_state=SOK
Jun 22 14:08:21 az1n1 SAPHana(SAPHana_RH2_02)[922136]: INFO: DEC: Finally get_SRHOOK()=SOK
Jun 22 14:08:23 az1n1 SAPHana(SAPHana_RH2_02)[922136]: INFO: DEC: hana_rh2_site_srHook_DC1=SWAIT
Jun 22 14:08:23 az1n1 SAPHana(SAPHana_RH2_02)[922136]: INFO: DEC: hana_rh2_site_srHook_DC1 is empty or SWAIT. Take polling attribute: hana_rh2_sync_state=SOK
Jun 22 14:08:23 az1n1 SAPHana(SAPHana_RH2_02)[922136]: INFO: DEC: Finally get_SRHOOK()=SOK
Jun 22 14:08:24 az1n1 SAPHana(SAPHana_RH2_02)[922136]: INFO: DEC: secondary with sync status SOK ==> possible takeover node
Jun 22 14:08:24 az1n1 SAPHana(SAPHana_RH2_02)[922136]: INFO: DEC: hana_rh2_site_srHook_DC1=SWAIT
Jun 22 14:08:24 az1n1 SAPHana(SAPHana_RH2_02)[922136]: INFO: DEC: hana_rh2_site_srHook_DC1 is empty or SWAIT. Take polling attribute: hana_rh2_sync_state=SOK
Jun 22 14:08:24 az1n1 SAPHana(SAPHana_RH2_02)[922136]: INFO: DEC: Finally get_SRHOOK()=SOK
Jun 22 14:08:24 az1n1 SAPHana(SAPHana_RH2_02)[922136]: INFO: DEC: saphana_monitor_secondary: scoring_crm_master(4:S:master1:master:worker:master,SOK)
Jun 22 14:08:24 az1n1 SAPHana(SAPHana_RH2_02)[922136]: INFO: DEC: scoring_crm_master: sync(SOK) is matching syncPattern (SOK)
Jun 22 14:12:35 az1n1 SAPHana(SAPHana_RH2_02)[929408]: INFO: DEC: hana_rh2_site_srHook_DC1=SWAIT
Jun 22 14:12:35 az1n1 SAPHana(SAPHana_RH2_02)[929408]: INFO: DEC: hana_rh2_site_srHook_DC1 is empty or SWAIT. Take polling attribute: hana_rh2_sync_state=SOK
Jun 22 14:12:36 az1n1 SAPHana(SAPHana_RH2_02)[929408]: INFO: DEC: Finally get_SRHOOK()=SOK
Jun 22 14:12:38 az1n1 SAPHana(SAPHana_RH2_02)[929408]: INFO: DEC: hana_rh2_site_srHook_DC1=SWAIT
Jun 22 14:12:38 az1n1 SAPHana(SAPHana_RH2_02)[929408]: INFO: DEC: hana_rh2_site_srHook_DC1 is empty or SWAIT. Take polling attribute: hana_rh2_sync_state=SOK
Jun 22 14:12:38 az1n1 SAPHana(SAPHana_RH2_02)[929408]: INFO: DEC: Finally get_SRHOOK()=SOK
Jun 22 14:12:38 az1n1 SAPHana(SAPHana_RH2_02)[929408]: INFO: DEC: secondary with sync status SOK ==> possible takeover node
Jun 22 14:12:39 az1n1 SAPHana(SAPHana_RH2_02)[929408]: INFO: DEC: hana_rh2_site_srHook_DC1=SWAIT
Jun 22 14:12:39 az1n1 SAPHana(SAPHana_RH2_02)[929408]: INFO: DEC: hana_rh2_site_srHook_DC1 is empty or SWAIT. Take polling attribute: hana_rh2_sync_state=SOK
Jun 22 14:12:39 az1n1 SAPHana(SAPHana_RH2_02)[929408]: INFO: DEC: Finally get_SRHOOK()=SOK
Jun 22 14:12:39 az1n1 SAPHana(SAPHana_RH2_02)[929408]: INFO: DEC: saphana_monitor_secondary: scoring_crm_master(4:S:master1:master:worker:master,SOK)
Jun 22 14:12:39 az1n1 SAPHana(SAPHana_RH2_02)[929408]: INFO: DEC: scoring_crm_master: sync(SOK) is matching syncPattern (SOK)
Jun 22 14:14:01 az1n1 pacemaker-attrd[10150]: notice: Setting hana_rh2_clone_state[az2n1]: PROMOTED -> DEMOTED
Jun 22 14:14:02 az1n1 pacemaker-attrd[10150]: notice: Setting hana_rh2_clone_state[az2n1]: DEMOTED -> UNDEFINED
Jun 22 14:14:19 az1n1 pacemaker-attrd[10150]: notice: Setting hana_rh2_clone_state[az1n1]: DEMOTED -> PROMOTED
Jun 22 14:14:21 az1n1 SAPHana(SAPHana_RH2_02)[932762]: INFO: DEC: hana_rh2_site_srHook_DC1=SWAIT
Jun 22 14:14:21 az1n1 SAPHana(SAPHana_RH2_02)[932762]: INFO: DEC: hana_rh2_site_srHook_DC1 is empty or SWAIT. Take polling attribute: hana_rh2_sync_state=SOK
Jun 22 14:14:21 az1n1 SAPHana(SAPHana_RH2_02)[932762]: INFO: DEC: Finally get_SRHOOK()=SOK
Jun 22 14:15:14 az1n1 SAPHana(SAPHana_RH2_02)[932762]: INFO: DEC: hana_rh2_site_srHook_DC1=SWAIT
Jun 22 14:15:22 az1n1 pacemaker-attrd[10150]: notice: Setting hana_rh2_sync_state[az1n1]: SOK -> PRIM
Jun 22 14:15:23 az1n1 pacemaker-attrd[10150]: notice: Setting hana_rh2_sync_state[az2n1]: PRIM -> SOK
Jun 22 14:15:23 az1n1 SAPHana(SAPHana_RH2_02)[934810]: INFO: ACT site=DC1, setting SOK for secondary (1)
Jun 22 14:15:25 az1n1 pacemaker-attrd[10150]: notice: Setting hana_rh2_clone_state[az2n1]: UNDEFINED -> DEMOTED
Jun 22 14:15:32 az1n1 pacemaker-attrd[10150]: notice: Setting hana_rh2_sync_state[az2n1]: SOK -> SFAIL
Jun 22 14:19:36 az1n1 pacemaker-attrd[10150]: notice: Setting hana_rh2_sync_state[az2n1]: SFAIL -> SOK
Jun 22 14:19:36 az1n1 SAPHana(SAPHana_RH2_02)[942693]: INFO: ACT site=DC1, setting SOK for secondary (1)
Jun 22 14:23:49 az1n1 SAPHana(SAPHana_RH2_02)[950623]: INFO: ACT site=DC1, setting SOK for secondary (1)
Jun 22 14:28:02 az1n1 SAPHana(SAPHana_RH2_02)[958633]: INFO: ACT site=DC1, setting SOK for secondary (1)
Jun 22 14:32:15 az1n1 SAPHana(SAPHana_RH2_02)[966683]: INFO: ACT site=DC1, setting SOK for secondary (1)
Jun 22 14:36:27 az1n1 SAPHana(SAPHana_RH2_02)[974736]: INFO: ACT site=DC1, setting SOK for secondary (1)
Jun 22 14:40:40 az1n1 SAPHana(SAPHana_RH2_02)[982934]: INFO: ACT site=DC1, setting SOK for secondary (1)

This filter makes it much easier to understand the state changes that occur. If details are missing, you can open the whole messages file to read all the information.

After a failover, you can clear the resource and check that no location constraints are left over.
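
For example, using the clone resource name from this section:

# pcs resource clear SAPHana_RH2_02-clone
# pcs constraint location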

6.2.11. Checking cluster consistency

During the installation, resources are sometimes started before the configuration is finally completed. This can lead to entries in the Cluster Information Base (CIB) that can cause incorrect behavior. This can easily be checked, and also manually corrected, after the configuration is completed.

If you start the SAPHana resources, missing entries will be recreated. Wrong entries cannot be fixed with a pcs command and have to be removed manually.

Check the CIB entries:

# cibadmin --query

DC3 and SFAIL are entries that should not be present in the Cluster Information Base when the cluster members are DC1 and DC2 and the sync state between the nodes is reported as SOK.

Example of checking for the corresponding entries:

# cibadmin --query |grep '"DC3"'
# cibadmin --query |grep '"SFAIL"'

The commands can be executed as the root user on any node in the cluster. Usually the output of the commands is empty. If there is still an error in the configuration, the output could look like this:

        <nvpair id="SAPHanaSR-hana_rh1_glob_sec" name="hana_rh1_glob_sec" value="DC3"/>

These entries can be removed with the following command:

# cibadmin --delete --xml-text '<...>'

To remove the entries from the example above, you have to enter the following. Note that the output contains double quotes, so the text must be embedded in single quotes:

# cibadmin --delete --xml-text '        <nvpair id="SAPHanaSR-hana_rh1_glob_sec" name="hana_rh1_glob_sec" value="DC3"/>'

Verify the absence of the removed CIB entries. The returned output should be empty:

# cibadmin --query |grep 'DC3"'

6.2.12. Cluster cleanup

During failover tests, there might be leftover constraints and other remains from previous tests. The cluster needs to be cleared of these before starting the next test.

Check the cluster status for failure events:

# pcs status --full

如果您在"Migration Summary"中看到集群警告或条目,您应该清除并清理资源:

# pcs resource clear SAPHana_RH2_02-clone
# pcs resource cleanup SAPHana_RH2_02-clone

Output:

Cleaned up SAPHana_RH2_02:0 on az1n1
Cleaned up SAPHana_RH2_02:1 on az2n1

Check whether there are unwanted location constraints, for example from a previous failover:

# pcs constraint location

Check the existing constraints in more detail:

# pcs constraint --full

Example of a location constraint after a resource move:

      Node: hana08 (score:-INFINITY) (role:Started) (id:cli-ban-SAPHana_RH2_02-clone-on-hana08)

Clear this location constraint:

# pcs resource clear SAPHana_RH2_02-clone

Verify that the constraint is gone from the constraint list. If it persists, explicitly delete it using its constraint id:

# pcs constraint delete cli-ban-SAPHana_RH2_02-clone-on-hana08

If you run several tests with fencing, you might also want to clear the stonith history:

# pcs stonith history cleanup

All pcs commands are executed as the root user. For more details, see Discover leftovers.

6.2.13. Other cluster commands

Examples of various cluster commands:

# pcs status --full
# crm_mon -1Arf # Provides an overview
# pcs resource # Lists all resources and shows if they are running
# pcs constraint --full # Lists all constraint ids which should be removed
# pcs cluster start --all # This will start the cluster on all nodes
# pcs cluster stop --all # This will stop the cluster on all nodes
# pcs node attribute # Lists node attributes

6.2.14. Other maintenance procedures

Instead of putting the whole cluster into maintenance mode, you can also set a single resource to maintenance mode:

# pcs resource meta rsc_SAPHanaController_RH1_10-clone maintenance=true
# pcs resource # will show the maintenance mode
…
* Clone Set: rsc_SAPHanaController_RH1_10-clone [rsc_SAPHanaController_RH1_10] (promotable, maintenance):
    * rsc_SAPHanaController_RH1_10	(ocf:heartbeat:SAPHanaController):	 Unpromoted ndc1hana02 (maintenance)
    * rsc_SAPHanaController_RH1_10	(ocf:heartbeat:SAPHanaController):	 Unpromoted ndc2hana01 (maintenance)
    * rsc_SAPHanaController_RH1_10	(ocf:heartbeat:SAPHanaController):	 Unpromoted ndc2hana02 (maintenance)
…

To leave the maintenance mode, you can enter:

# pcs resource meta rsc_SAPHanaController_RH1_10-clone maintenance=false

It is also very important to refresh the resources after you leave the maintenance mode:

# pcs resource refresh


