Chapter 18. Creating cluster resources that are active on multiple nodes (cloned resources)


You can clone a cluster resource so that the resource can be active on multiple nodes. For example, you can use cloned resources to configure multiple instances of an IP resource to distribute throughout a cluster for node balancing. You can clone any resource provided the resource agent supports it. A clone consists of one resource or one resource group.

Note

Only resources that can be active on multiple nodes at the same time are suitable for cloning. For example, a Filesystem resource mounting a non-clustered file system such as ext4 from a shared memory device should not be cloned. Since the ext4 partition is not cluster aware, this file system is not suitable for read/write operations occurring from multiple nodes at the same time.

18.1. Creating and removing a cloned resource

You can create a resource and a clone of that resource at the same time with the following single command.

pcs resource create resource_id [standard:[provider:]]type [resource options] [meta resource meta options] clone [clone_id] [clone options]

You can create a clone of a previously-created resource or resource group with the following command.

pcs resource clone resource_id | group_id [clone_id] [clone options]...

By default, the name of the clone will be resource_id-clone. You can set a custom name for the clone by specifying a value for the clone_id option.

You cannot create a resource group and a clone of that resource group in a single command.
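
For example, assuming a previously-created resource group with the hypothetical name webgroup, the following command creates a clone of that group and gives the clone the custom name webgroup-custom-clone.

# pcs resource clone webgroup webgroup-custom-clone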

Note

When you create a resource or resource group clone that will be ordered after another clone, you should almost always set the interleave=true option. This ensures that copies of the dependent clone can stop or start when the clone it depends on has stopped or started on the same node. If you do not set this option and a cloned resource B depends on a cloned resource A, then when a node leaves and later rejoins the cluster and resource A starts on that node, all of the copies of resource B on all of the nodes will restart. This is because, when a dependent cloned resource does not have the interleave option set, all instances of that resource depend on any running instance of the resource it depends on.
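
As an illustrative sketch, assume two hypothetical resources: database, already cloned as database-clone, and webapp. The following commands clone webapp with interleave=true and then order the webapp clone after the database clone.

# pcs resource clone webapp interleave=true
# pcs constraint order start database-clone then start webapp-clone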

Use the following command to remove a clone of a resource or a resource group. This does not remove the resource or resource group itself.

pcs resource unclone resource_id | clone_id | group_name

Resource Clone Options

The following table describes the options you can specify for a cloned resource.

Field | Description

priority, target-role, is-managed

Options inherited from the resource that is being cloned, as described in the "Resource Meta Options" table in Configuring resource meta options.

clone-max

How many copies of the resource to start. Defaults to the number of nodes in the cluster.

clone-node-max

How many copies of the resource can be started on a single node; the default value is 1.

notify

When stopping or starting a copy of the clone, tell all the other copies beforehand and again when the action was successful. Allowed values: false, true. The default value is false.

globally-unique

Does each copy of the clone perform a different function? Allowed values: false, true

If the value of this option is false, these resources behave identically everywhere they are running and thus there can be only one copy of the clone active per machine.

If the value of this option is true, a copy of the clone running on one machine is not equivalent to another instance, whether that instance is running on another node or on the same node. The default value is true if the value of clone-node-max is greater than one; otherwise the default value is false.

ordered

Should the copies be started in series (instead of in parallel)? Allowed values: false, true. The default value is false.

interleave

Changes the behavior of ordering constraints (between clones) so that copies of the first clone can start or stop as soon as the copy on the same node of the second clone has started or stopped (rather than waiting until every instance of the second clone has started or stopped). Allowed values: false, true. The default value is false.

clone-min

If a value is specified, any clones which are ordered after this clone will not be able to start until the specified number of instances of the original clone are running, even if the interleave option is set to true.

To achieve a stable allocation pattern, clones are slightly sticky by default, which indicates that they have a slight preference for staying on the node where they are running. If no value for resource-stickiness is provided, the clone will use a value of 1. Being a small value, it causes minimal disturbance to the score calculations of other resources but is enough to prevent Pacemaker from needlessly moving copies around the cluster. For information about setting the resource-stickiness resource meta-option, see Configuring resource meta options.
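
For example, the following command is a sketch, using a hypothetical resource named dummy, that applies the clone options described in the preceding table to limit the clone to three copies cluster-wide with at most one copy per node.

# pcs resource clone dummy clone-max=3 clone-node-max=1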

Procedure

The following procedure creates and removes a resource clone.

  1. On one node of the cluster, create the resource clone.

    When you create a clone of a resource, by default the clone takes on the name of the resource with -clone appended to the name. The following command creates a resource of type apache named webfarm and a clone of that resource named webfarm-clone.

    # pcs resource create webfarm apache clone
  2. Remove the clone of a resource or a resource group. This does not remove the resource or resource group itself.

    # pcs resource unclone webfarm

18.2. Configuring clone resource constraints

You can determine the behavior of a clone resource in a cluster by configuring constraints for that resource. These constraints are written no differently than those for regular resources except that you must specify the clone’s ID. For information about resource constraints, see Configuring cluster resources.

Clone resource location constraints

In most cases, a clone will have a single copy on each active cluster node. You can, however, set clone-max for the resource clone to a value that is less than the total number of nodes in the cluster. If this is the case, you can indicate which nodes the cluster should preferentially assign copies to with resource location constraints. For general information about location constraints, see Determining which node a resource can run on.

The following command creates a location constraint for the cluster to preferentially assign resource clone webfarm-clone to node1.

# pcs constraint location webfarm-clone prefers node1

Clone resource ordering constraints

Ordering constraints behave slightly differently for clones. In the example below, because the interleave clone option is left at its default value of false, no instance of webfarm-stats will start until all instances of webfarm-clone that need to be started have done so. Only if no copies of webfarm-clone can be started will webfarm-stats be prevented from being active. Additionally, webfarm-clone will wait for webfarm-stats to be stopped before stopping itself.

# pcs constraint order start webfarm-clone then webfarm-stats

For general information about resource ordering constraints, see Determining the order in which cluster resources are run.

Clone resource colocation constraints

Colocation of a regular (or group) resource with a clone means that the resource can run on any machine with an active copy of the clone. The cluster will choose a copy based on where the clone is running and the resource’s own location preferences. For information about colocation constraints, see Colocating cluster resources.

Colocation between clones is also possible. In such cases, the set of allowed locations for the clone is limited to nodes on which the other clone is (or will be) active. Allocation is then performed as it normally would be.
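
For example, assuming that webfarm-stats has itself been cloned as webfarm-stats-clone (a hypothetical clone name), the following command colocates one clone with another.

# pcs constraint colocation add webfarm-stats-clone with webfarm-clone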

The following command creates a colocation constraint to ensure that the resource webfarm-stats runs on the same node as an active copy of webfarm-clone.

# pcs constraint colocation add webfarm-stats with webfarm-clone

18.3. Promotable clone resources

Promotable clone resources are clone resources with the promotable meta attribute set to true. They allow the instances to be in one of two operating modes; these are called promoted and unpromoted. The names of the modes do not have specific meanings, except for the limitation that when an instance is started, it must come up in the Unpromoted state.

Note

The Promoted and Unpromoted role names are the functional equivalent of the Master and Slave Pacemaker roles in previous RHEL releases.

18.3.1. Creating a promotable clone resource

You can create a resource as a promotable clone with the following single command.

pcs resource create resource_id [standard:[provider:]]type [resource options] promotable [clone_id] [clone options]

By default, the name of the promotable clone is resource_id-clone. You can set a custom name for the clone by specifying a value for the clone_id option.

Alternatively, you can create a promotable resource from a previously-created resource or resource group with the following command.

pcs resource promotable resource_id [clone_id] [clone options]

By default, the name of the promotable clone is resource_id-clone or group_name-clone. You can set a custom name for the clone by specifying a value for the clone_id option.
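
For example, the following command is a sketch that creates a promotable clone of a hypothetical resource named my-stateful using the ocf:pacemaker:Stateful resource agent; substitute the resource agent appropriate for your environment. By default, the resulting clone is named my-stateful-clone.

# pcs resource create my-stateful ocf:pacemaker:Stateful promotable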

The following table describes the extra clone options you can specify for a promotable resource.

Table 18.1. Extra Clone Options Available for Promotable Clones
Field | Description

promoted-max

How many copies of the resource can be promoted; default 1.

promoted-node-max

How many copies of the resource can be promoted on a single node; default 1.
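
For example, assuming a previously-created resource with the hypothetical name my-app, the following command creates a promotable clone of that resource that allows up to two promoted copies in the cluster.

# pcs resource promotable my-app promoted-max=2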

18.3.2. Configuring promotable resource constraints

You can determine the behavior of a promotable resource in a cluster by configuring constraints for that resource. For general information about resource constraints, see Configuring cluster resources.

Promotable resource location constraints

In most cases, a promotable resource will have a single copy on each active cluster node. If this is not the case, you can indicate which nodes the cluster should preferentially assign copies to with resource location constraints. These constraints are written no differently than those for regular resources. For information about location constraints, see Determining which node a resource can run on.

Promotable resource colocation constraints

You can create a colocation constraint which specifies whether the resources are operating in a promoted or unpromoted role. The following command creates a resource colocation constraint.

pcs constraint colocation add [promoted|unpromoted] source_resource with [promoted|unpromoted] target_resource [score] [options]
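
For example, following the syntax above and using the hypothetical resources webapp and my-stateful-clone, the following command ensures that webapp runs on the node where an instance of my-stateful-clone is in the promoted role.

# pcs constraint colocation add webapp with promoted my-stateful-clone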

For information about colocation constraints, see Colocating cluster resources.

Promotable resource ordering constraints

When configuring an ordering constraint that includes promotable resources, one of the actions that you can specify for the resources is promote, indicating that the resource be promoted from the unpromoted role to the promoted role. Additionally, you can specify an action of demote, indicating that the resource be demoted from the promoted role to the unpromoted role.

The command for configuring an order constraint is as follows.

pcs constraint order [action] resource_id then [action] resource_id [options]
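
For example, using the hypothetical resources my-stateful-clone and webapp, the following command starts webapp only after my-stateful-clone has been promoted.

# pcs constraint order promote my-stateful-clone then start webapp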

For information about resource order constraints, see Determining the order in which cluster resources are run.

18.3.3. Demoting a promoted resource on failure

You can configure a promotable resource so that when a promote or monitor action fails for that resource, or the partition in which the resource is running loses quorum, the resource will be demoted but will not be fully stopped. This can prevent the need for manual intervention in situations where fully stopping the resource would require it.

  • To configure a promotable resource to be demoted when a promote action fails, set the on-fail operation meta option to demote, as in the following example.

    # pcs resource op add my-rsc promote on-fail="demote"
  • To configure a promotable resource to be demoted when a monitor action fails, set interval to a nonzero value, set the on-fail operation meta option to demote, and set role to Promoted, as in the following example.

    # pcs resource op add my-rsc monitor interval="10s" on-fail="demote" role="Promoted"
  • To configure a cluster so that when a cluster partition loses quorum, any promoted resources will be demoted but left running and all other resources will be stopped, set the no-quorum-policy cluster property to demote.
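
    For example, a command along the following lines sets this cluster property.

    # pcs property set no-quorum-policy=demote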

Setting the on-fail meta-attribute to demote for an operation does not affect how promotion of a resource is determined. If the affected node still has the highest promotion score, it will be selected to be promoted again.
