Chapter 12. Pacemaker Cluster Properties
Cluster properties control how the cluster behaves when confronted with situations that may occur during cluster operation.
- Table 12.1, “Cluster Properties” describes the cluster properties options.
- Section 12.2, “Setting and Removing Cluster Properties” describes how to set cluster properties. (A brief pcs sketch of setting and removing a property follows this list.)
- Section 12.3, “Querying Cluster Property Settings” describes how to list the currently set cluster properties.
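The following is a minimal sketch of these operations on a cluster administered with the pcs command-line interface. The property name symmetric-cluster is used only as an illustration; see Section 12.2, “Setting and Removing Cluster Properties” for the full procedure.

```
# Set a cluster property to a non-default value
# (general form: pcs property set property=value)
pcs property set symmetric-cluster=false

# Remove the property again so that it reverts to its default value
pcs property unset symmetric-cluster
```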
12.1. Summary of Cluster Properties and Options
Table 12.1, “Cluster Properties” summarizes the Pacemaker cluster properties, showing the default values of the properties and the possible values you can set for those properties.
Note
In addition to the properties described in this table, there are additional cluster properties that are exposed by the cluster software. For these properties, it is recommended that you not change their values from their defaults.
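Because the table covers only the most commonly adjusted properties, it can be useful to query the cluster itself for the complete set of properties and their defaults. On a cluster administered with pcs, commands along the following lines display that information; see Section 12.3, “Querying Cluster Property Settings” for details.

```
# Display only the cluster properties that have been explicitly set
pcs property list

# Display all cluster properties, including unset properties
# and their default values
pcs property list --all

# Display only the default values of the cluster properties
pcs property list --defaults
```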
Option | Default | Description |
---|---|---|
batch-limit | 0 | |
migration-limit | -1 (unlimited) | |
no-quorum-policy | stop | Allowed values: ignore - continue all resource management; freeze - continue resource management, but do not recover resources from nodes not in the affected partition; stop - stop all resources in the affected cluster partition; suicide - fence all nodes in the affected cluster partition. |
symmetric-cluster | true | |
stonith-enabled | true | Indicates that failed nodes and nodes with resources that cannot be stopped should be fenced. Protecting your data requires that you set this to true. If true, or unset, the cluster will refuse to start resources unless one or more STONITH resources have also been configured. |
stonith-action | reboot | |
cluster-delay | 60s | |
stop-orphan-resources | true | |
stop-orphan-actions | true | |
start-failure-is-fatal | true | Indicates whether a failure to start a resource on a particular node prevents further start attempts on that node. When set to false, the cluster decides whether to try starting on the same node again based on the resource's current failure count and migration threshold. For information on setting the migration-threshold option for a resource, see Section 8.2, “Moving Resources Due to Failure”. Setting start-failure-is-fatal to false incurs the risk that one faulty node that is unable to start a resource will hold up all dependent actions; this is why start-failure-is-fatal defaults to true. The risk of setting start-failure-is-fatal=false can be mitigated by setting a low migration threshold, so that other actions can proceed after that many failures (a brief example of this mitigation follows the table). |
pe-error-series-max | -1 (all) | |
pe-warn-series-max | -1 (all) | |
pe-input-series-max | -1 (all) | |
cluster-infrastructure | | |
dc-version | | |
last-lrm-refresh | | |
cluster-recheck-interval | 15 minutes | Polling interval for time-based changes to options, resource parameters, and constraints. Allowed values: zero disables polling; positive values are an interval in seconds (unless other SI units are specified, such as 5min). Note that this value is the maximum time between checks; if a cluster event occurs sooner than the time specified by this value, the check will be done sooner. |
maintenance-mode | false | |
shutdown-escalation | 20min | |
stonith-timeout | 60s | |
stop-all-resources | false | |
enable-acl | false | |
placement-strategy | default | Indicates whether and how the cluster will take utilization attributes into account when determining resource placement on cluster nodes. For information on utilization attributes and placement strategies, see Section 9.6, “Utilization and Placement Strategy”. |
fence-reaction | stop | (Red Hat Enterprise Linux 7.8 and later) Determines how a cluster node should react if notified of its own fencing. A cluster node may receive notification of its own fencing if fencing is misconfigured, or if fabric fencing is in use that does not cut cluster communication. Allowed values are stop, to attempt to immediately stop Pacemaker and stay stopped, or panic, to attempt to immediately reboot the local node, falling back to stop on failure. |
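As an illustration of the mitigation described for start-failure-is-fatal in the table above, the following sketch sets that property to false and then gives a resource a low migration threshold. The resource name dummy_resource is a placeholder used only for this example; see Section 8.2, “Moving Resources Due to Failure” for details on how migration-threshold behaves.

```
# Allow the cluster to retry a failed resource start on the same node
pcs property set start-failure-is-fatal=false

# Give the resource a low migration threshold so that, after three start
# failures on a node, the resource is moved to another node rather than
# being retried indefinitely (dummy_resource is a placeholder name)
pcs resource meta dummy_resource migration-threshold=3
```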