Chapter 11. Determining which nodes a resource can run on
Location constraints determine which nodes a resource can run on. You can configure location constraints to determine whether a resource will prefer or avoid a specified node.
In addition to location constraints, the node on which a resource runs is influenced by the resource-stickiness value for that resource, which determines to what degree a resource prefers to remain on the node where it is currently running. For information about setting the resource-stickiness value, see Configuring a resource to prefer its current node.
11.1. Configuring location constraints
You can configure a location constraint to control which nodes a cluster resource can run on in a cluster. You can use a location constraint to make a resource prefer a specific node or to prevent it from running on certain nodes.
11.1.1. Configuring a basic location constraint
You can configure a basic location constraint to specify whether a resource prefers or avoids a node, with an optional score value to indicate the relative degree of preference for the constraint.
Procedure
The following command creates a location constraint for a resource to prefer the specified node or nodes. Note that it is possible to create constraints on a particular resource for more than one node with a single command:
# pcs constraint location rsc prefers node[=score] [node[=score]] ...
The following command creates a location constraint for a resource to avoid the specified node or nodes:
# pcs constraint location rsc avoids node[=score] [node[=score]] ...
The following table summarizes the meanings of the basic options for configuring location constraints:
Table 11.1. Location Constraint Options

| Field | Description |
| --- | --- |
| rsc | A resource name |
| node | A node's name |
| score | Positive integer value to indicate the degree of preference for whether the given resource should prefer or avoid the given node. INFINITY is the default score value for a resource location constraint. A value of INFINITY for score in a pcs constraint location rsc prefers command indicates that the resource will prefer that node if the node is available, but does not prevent the resource from running on another node if the specified node is unavailable. A value of INFINITY for score in a pcs constraint location rsc avoids command indicates that the resource will never run on that node, even if no other node is available. This is the equivalent of setting a pcs constraint location add command with a score of -INFINITY. A numeric score (that is, not INFINITY) means the constraint is optional, and will be honored unless some other factor outweighs it. For example, if the resource is already placed on a different node, and its resource-stickiness score is higher than a prefers location constraint's score, then the resource will be left where it is. |
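The interaction between a finite prefers score and resource-stickiness described above can be sketched as a toy comparison. This is not Pacemaker code; the node names and score values are hypothetical:

```shell
# Toy comparison, not Pacemaker code: a finite "prefers" score competes with
# resource-stickiness, and the larger value decides where the resource runs.
prefers_score=200   # e.g. "pcs constraint location Webserver prefers node-a=200"
stickiness=500      # resource-stickiness on the current node, node-b
if [ "$stickiness" -gt "$prefers_score" ]; then
  echo "Webserver stays on node-b (stickiness outweighs the constraint)"
else
  echo "Webserver moves to node-a (constraint outweighs stickiness)"
fi
```

With these hypothetical values the stickiness wins, so the resource stays put; an INFINITY prefers score would always outweigh any finite stickiness.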
11.1.2. Configuring a location constraint with regular expressions
pcs supports regular expressions in location constraints to match resource names. Use this feature to configure multiple location constraints with a single command.
Procedure
The following command creates a location constraint to specify that resources dummy0 to dummy9 prefer node1:
# pcs constraint location 'regexp%dummy[0-9]' prefers node1
11.1.3. Configuring a location constraint with extended regular expressions
Since Pacemaker uses POSIX extended regular expressions as documented in section 9.4, "Extended Regular Expressions", of The Open Group Base Specifications Issue 7, you can specify the same constraint with the following command.
Procedure
To configure a location constraint with extended regular expressions:
# pcs constraint location 'regexp%dummy[[:digit:]]' prefers node1
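Because both commands use POSIX extended regular expressions, you can preview which resource names a pattern matches with grep -E, which accepts the same ERE syntax. The resource names below are hypothetical:

```shell
# grep -E uses POSIX extended regular expressions, the same syntax Pacemaker
# accepts, so it can preview which resource names a pattern would match.
printf '%s\n' dummy0 dummy9 dummyX web1 | grep -E 'dummy[0-9]'        # prints dummy0 and dummy9
printf '%s\n' dummy0 dummy9 dummyX web1 | grep -cE 'dummy[[:digit:]]' # prints 2 (same two matches)
```

Both character classes match the same names, which is why the two pcs commands above are equivalent.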
11.1.4. Displaying location constraints
View location constraints to verify resource placement logic. You can organize the display by resource or node, filter for specific items, and include expired constraints or internal IDs for troubleshooting.
Procedure
To list all current location constraints:
# pcs constraint location [config [resources [resource...]] | [nodes [node...]]] [--full]

- If resources is specified, the command displays constraints per resource. This is the default.
- If nodes is specified, the command displays constraints per node.
- If you specify specific resources or nodes, the command displays information for those items only.
- If you specify the --full option, the command displays the internal constraint IDs.

To display all current location, order, and colocation constraints, use the following command. To show the internal constraint IDs, specify the --full option:
# pcs constraint [config] [--full]
By default, listing resource constraints does not display expired constraints. To include expired constraints in the listing, use the --all option of the pcs constraint command. This will list expired constraints, noting the constraints and their associated rules as (expired) in the display.
To list the constraints that reference specific resources:
# pcs constraint ref resource ...
11.2. Limiting resource discovery to a subset of nodes
Before Pacemaker starts a resource anywhere, it first runs a one-time monitor operation (often referred to as a "probe") on every node, to learn whether the resource is already running. This process of resource discovery can result in errors on nodes that are unable to execute the monitor.
When configuring a location constraint on a node, you can use the resource-discovery option of the pcs constraint location command to indicate a preference for whether Pacemaker should perform resource discovery on this node for the specified resource. Limiting resource discovery to the subset of nodes that the resource is physically capable of running on can significantly boost performance when a large set of nodes is present. Consider this option when pacemaker_remote is in use to expand the node count into the hundreds.
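The scale of the savings can be sketched with some simple arithmetic; the node and resource counts here are hypothetical:

```shell
# Hypothetical numbers: each resource gets one probe on each node at startup,
# so restricting discovery to the few capable nodes shrinks the probe count.
nodes=100
resources=50
discovery_nodes=3   # nodes that can actually run the resource
echo "probes with default discovery:   $((nodes * resources))"            # prints 5000
echo "probes with exclusive discovery: $((discovery_nodes * resources))"  # prints 150
```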
The following command shows the format for specifying the resource-discovery option of the pcs constraint location command. In this command, a positive value for score corresponds to a basic location constraint that configures a resource to prefer a node, while a negative value for score corresponds to a basic location constraint that configures a resource to avoid a node. As with basic location constraints, you can use regular expressions for resources with these constraints as well.
# pcs constraint location add id rsc node score [resource-discovery=option]
The following table summarizes the meanings of the basic parameters for configuring constraints for resource discovery.
| Field | Description |
| --- | --- |
| id | A user-chosen name for the constraint itself. |
| rsc | A resource name |
| node | A node's name |
| score | Integer value to indicate the degree of preference for whether the given resource should prefer or avoid the given node. A positive value for score corresponds to a basic location constraint that configures a resource to prefer a node, while a negative value for score corresponds to a basic location constraint that configures a resource to avoid a node. A value of INFINITY for score indicates that the resource prefers the node but can still run elsewhere if that node is unavailable, while a value of -INFINITY indicates that the resource will never run on that node. A numeric score (that is, not INFINITY or -INFINITY) means the constraint is optional, and will be honored unless some other factor outweighs it. |
| resource-discovery | Value to indicate the preference for whether Pacemaker should perform resource discovery on this node for the specified resource: always - Always perform resource discovery for the specified resource on this node. This is the default value. never - Never perform resource discovery for the specified resource on this node. exclusive - Perform resource discovery for the specified resource only on this node (and on other nodes similarly marked exclusive). |
Setting resource-discovery to never or exclusive removes Pacemaker’s ability to detect and stop unwanted instances of a service running where it is not supposed to be. It is up to the system administrator to make sure that the service can never be active on nodes without resource discovery (such as by leaving the relevant software uninstalled).
11.3. Configuring a location constraint strategy
When using location constraints, you can configure a general strategy for specifying which nodes a resource can run on.
- Opt-in clusters - Configure a cluster in which, by default, no resource can run anywhere and then selectively enable allowed nodes for specific resources.
- Opt-out clusters - Configure a cluster in which, by default, all resources can run anywhere and then create location constraints for resources that are not allowed to run on specific nodes.
Whether you should choose to configure your cluster as an opt-in or opt-out cluster depends on both your personal preference and the make-up of your cluster. If most of your resources can run on most of the nodes, then an opt-out arrangement is likely to result in a simpler configuration. On the other hand, if most resources can only run on a small subset of nodes, an opt-in configuration might be simpler.
Configuring an "Opt-In" cluster
To create an opt-in cluster, set the symmetric-cluster cluster property to false to prevent resources from running anywhere by default.
# pcs property set symmetric-cluster=false
Enable nodes for individual resources. The following commands configure location constraints so that the resource Webserver prefers node example-1, the resource Database prefers node example-2, and both resources can fail over to node example-3 if their preferred node fails. When configuring location constraints for an opt-in cluster, setting a score of zero allows a resource to run on a node without indicating any preference to prefer or avoid the node.
# pcs constraint location Webserver prefers example-1=200
# pcs constraint location Webserver prefers example-3=0
# pcs constraint location Database prefers example-2=200
# pcs constraint location Database prefers example-3=0
Configuring an "Opt-Out" cluster
To create an opt-out cluster, set the symmetric-cluster cluster property to true to allow resources to run everywhere by default. This is the default configuration if symmetric-cluster is not set explicitly.
# pcs property set symmetric-cluster=true
The following commands will then yield a configuration that is equivalent to the example in "Configuring an "Opt-In" cluster". Both resources can fail over to node example-3 if their preferred node fails, since every node has an implicit score of 0.
# pcs constraint location Webserver prefers example-1=200
# pcs constraint location Webserver avoids example-2=INFINITY
# pcs constraint location Database avoids example-1=INFINITY
# pcs constraint location Database prefers example-2=200
Note that it is not necessary to specify a score of INFINITY in these commands, since that is the default value for the score.
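The placement that results from the opt-out example above can be sketched as a toy score comparison. This is not Pacemaker code; it only mirrors the Webserver constraints shown here:

```shell
# Toy model, not Pacemaker code: in an opt-out cluster every node starts at an
# implicit score of 0; constraints then adjust the totals, and an INFINITY
# "avoids" bans the node outright.
webserver_on_example1=$((0 + 200))  # implicit 0 + "prefers example-1=200"
webserver_on_example3=0             # implicit 0 only; example-2 is banned
if [ "$webserver_on_example1" -gt "$webserver_on_example3" ]; then
  echo "Webserver runs on example-1"
fi
echo "If example-1 fails, Webserver can still run on example-3 (score 0)"
```

This is why no explicit constraint for example-3 is needed in the opt-out configuration: its implicit score of 0 is enough to allow failover.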
11.4. Configuring a resource to prefer its current node
Configure resource-stickiness to define a resource’s preference for remaining on its current node. Pacemaker compares this value against other scores, such as location constraints, to prevent unnecessary migrations. For setup details, see Configuring resource meta options.
With a resource-stickiness value of 0, a cluster may move resources as needed to balance resources across nodes. This may result in resources moving when unrelated resources start or stop. With a positive stickiness, resources have a preference to stay where they are, and move only if other circumstances outweigh the stickiness. This may result in newly-added nodes not getting any resources assigned to them without administrator intervention.
Newly-created clusters set the default value for resource-stickiness to 1. This small value can easily be overridden by other constraints that you create, but it is enough to prevent Pacemaker from needlessly moving healthy resources around the cluster. If you prefer cluster behavior that results from a resource-stickiness value of 0, you can change the resource-stickiness default value to 0 with the following command:
Example 11.1. Example command
# pcs resource defaults update resource-stickiness=0
With a positive resource-stickiness value, no resources will move to a newly-added node. If resource balancing is desired at that point, you can temporarily set the resource-stickiness value to 0.
Note that if a location constraint score is higher than the resource-stickiness value, the cluster may still move a healthy resource to the node where the location constraint points.