Configuring a Red Hat High Availability cluster on Red Hat OpenStack Platform
Installing and configuring HA clusters and cluster resources on RHOSP instances
Abstract
Providing feedback on Red Hat documentation
We appreciate your feedback on our documentation. Let us know how we can improve it.
Submitting feedback through Jira (account required)
- Log in to the Jira website.
- Click Create in the top navigation bar.
- Enter a descriptive title in the Summary field.
- Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation.
- Click Create at the bottom of the dialogue.
Chapter 1. Preface
You can use the Red Hat High Availability Add-On to configure a Red Hat High Availability (HA) cluster on Red Hat OpenStack Platform (RHOSP) instances. This requires that you install the required packages and agents, configure a basic cluster, configure fencing resources, and configure HA cluster resources.
For RHOSP documentation, see Product Documentation for Red Hat OpenStack Platform.
For Red Hat’s policies, requirements, and limitations applicable to the use of RHOSP instances in a RHEL High Availability cluster, see Support Policies for RHEL High Availability Clusters - OpenStack Virtual Machines as Cluster Members - Red Hat Customer Portal.
Chapter 2. RHOSP server group configuration for HA instances
Create an instance server group before you create the RHOSP HA cluster node instances. Group the instances by affinity policy. If you configure multiple clusters, ensure that you have only one server group per cluster.
The affinity policy you set for the server group can determine whether the cluster remains operational if the hypervisor fails.
The default affinity policy is affinity. With this affinity policy, all of the cluster nodes could be created on the same RHOSP hypervisor. In this case, if the hypervisor fails, the entire cluster fails. For this reason, set an affinity policy for the server group of anti-affinity or soft-anti-affinity.
- With an affinity policy of anti-affinity, the server group allows only one cluster node per Compute node. Attempting to create more cluster nodes than Compute nodes generates an error. While this configuration provides the highest protection against RHOSP hypervisor failures, it may require more resources to deploy large clusters than you have available.
- With an affinity policy of soft-anti-affinity, the server group distributes cluster nodes as evenly as possible across all Compute nodes. Although this provides less protection against hypervisor failures than a policy of anti-affinity, it provides a greater level of high availability than an affinity policy of affinity.
Determining the server group affinity policy for your deployment requires balancing your cluster needs against the resources you have available by taking the following cluster components into account:
- The number of nodes in the cluster
- The number of RHOSP Compute nodes available
- The number of nodes required for cluster quorum to retain cluster operations
For information about affinity and creating an instance server group, see Compute scheduler filters and the Command Line Interface Reference.
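For example, the following commands sketch how you might create a server group with the anti-affinity policy and launch a cluster node instance into it. The group name, image, flavor, and server group UUID here are placeholders, not values from this document:

$ openstack server group create --policy anti-affinity ha-cluster-group
$ openstack server create --image <image> --flavor <flavor> --hint group=<server_group_UUID> node01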
Chapter 3. Installing the high availability and RHOSP packages and agents
Install the packages required for configuring a Red Hat High Availability cluster on Red Hat OpenStack Platform (RHOSP). You must install the packages on each of the nodes you will use as cluster members.
Prerequisites
- A server group for the RHOSP instances to use as HA cluster nodes, configured as described in RHOSP server group configuration for HA instances
- An RHOSP instance for each HA cluster node
- The instances are members of a server group
- The instances are configured as nodes running RHEL 8.7 or later
Procedure
- Enable the RHEL HA repositories and the RHOSP tools channel.

# subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms
# subscription-manager repos --enable=openstack-16-tools-for-rhel-8-x86_64-rpms

- Install the Red Hat High Availability Add-On software packages and the packages that are required for the RHOSP cluster resource agents and the RHOSP fence agents.

# yum install pcs pacemaker python3-openstackclient python3-novaclient fence-agents-openstack

- Installing the pcs and pacemaker packages on each node creates the user hacluster, which is the pcs administration account. Create a password for the user hacluster on all cluster nodes. Using the same password for all nodes simplifies cluster administration.

# passwd hacluster

- If firewalld.service is installed, add the high-availability service to the RHEL firewall.

# firewall-cmd --permanent --add-service=high-availability
# firewall-cmd --add-service=high-availability

- Start the pcsd service and enable it to start on boot.

# systemctl start pcsd.service
# systemctl enable pcsd.service

- Verify that the pcsd service is running.

# systemctl status pcsd.service

- Edit the /etc/hosts file and add RHEL host names and internal IP addresses. For information about /etc/hosts, see the Red Hat Knowledgebase solution How should the /etc/hosts file be set up on RHEL cluster nodes?.
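For example, the /etc/hosts entries for the three-node cluster used in the examples in this document might look like the following sketch; the internal IP addresses shown here are hypothetical:

172.16.0.11 node01
172.16.0.12 node02
172.16.0.13 node03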
Chapter 4. Setting up an authentication method for RHOSP
The high availability fence agents and resource agents support three authentication methods for communicating with RHOSP:
- Authentication with a clouds.yaml configuration file
- Authentication with an OpenRC environment script
- Authentication with a username and password through Pacemaker
After determining the authentication method to use for the cluster, specify the appropriate authentication parameters when creating a fencing or cluster resource.
4.1. Authenticating with RHOSP by using a clouds.yaml file
The procedures in this document that use a clouds.yaml file for authentication use the clouds.yaml file shown in this procedure. Those procedures specify ha-example for the cloud= parameter, as defined in this file.
Procedure
- On each node that will be part of your cluster, create a clouds.yaml file, as in the following example. For information about creating a clouds.yaml file, see the Users and Identity Management Guide.
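A minimal sketch of such a clouds.yaml file follows. Apart from the cloud name ha-example, which the later procedures reference, every value is a placeholder that your RHOSP administrator provides:

clouds:
  ha-example:
    auth:
      auth_url: https://<auth_url>:13000/
      project_name: <project_name>
      username: <username>
      password: <password>
      user_domain_name: Default
      project_domain_name: Default
    region_name: <region_name>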
- Test whether authentication is successful and you have access to the RHOSP API with the following basic RHOSP command, substituting the name of the cloud you specified in the clouds.yaml file you created for ha-example. If this command does not display a server list, contact your RHOSP administrator.

$ openstack --os-cloud=ha-example server list

- Specify the cloud parameter when creating a cluster resource or a fencing resource.
4.2. Authenticating with RHOSP by using an OpenRC environment script
To use an OpenRC environment script to authenticate with RHOSP, perform the following steps.
Procedure
- On each node that will be part of your cluster, configure an OpenRC environment script (a minimal sketch appears at the end of this section). For information about creating an OpenRC environment script, see Set environment variables using the OpenStack RC file.
- Test whether authentication is successful and you have access to the RHOSP API with the following basic RHOSP command. If this command does not display a server list, contact your RHOSP administrator.

$ openstack server list

- Specify the openrc parameter when creating a cluster resource or a fencing resource.
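A minimal sketch of an OpenRC script follows. The variable names are the standard OpenStack client environment variables; every value is a placeholder that your RHOSP administrator provides:

export OS_AUTH_URL=https://<auth_url>:13000/
export OS_PROJECT_NAME=<project_name>
export OS_USERNAME=<username>
export OS_PASSWORD=<password>
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default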
4.3. Authenticating with RHOSP by means of a username and password
To authenticate with RHOSP by means of a username and password, specify the username, password, and auth_url parameters for a cluster resource or a fencing resource when you create the resource. Additional authentication parameters may be required, depending on the RHOSP configuration. The RHOSP administrator provides the authentication parameters to use.
Chapter 5. Creating a basic cluster on Red Hat OpenStack Platform
This procedure creates a high availability cluster on an RHOSP platform with no fencing or resources configured.
Prerequisites
- An RHOSP instance is configured for each HA cluster node
- Each HA cluster node is running RHEL 8.7 or later
- High Availability and RHOSP packages are installed on each node, as described in Installing the high availability and RHOSP packages and agents.
Procedure
- On one of the cluster nodes, enter the following command to authenticate the pcs user hacluster. Specify the name of each node in the cluster. In this example, the nodes for the cluster are node01, node02, and node03.

# pcs host auth node01 node02 node03

- Create the cluster. In this example, the cluster is named newcluster.

# pcs cluster setup newcluster node01 node02 node03
Verification
- Enable the cluster.

[root@node01 ~]# pcs cluster enable --all
node01: Cluster Enabled
node02: Cluster Enabled
node03: Cluster Enabled

- Start the cluster. The command’s output indicates whether the cluster has started on each node.

[root@node01 ~]# pcs cluster start --all
node02: Starting Cluster...
node03: Starting Cluster...
node01: Starting Cluster...
Chapter 6. Configuring fencing for an HA cluster on Red Hat OpenStack Platform
Fencing configuration ensures that a malfunctioning node on your HA cluster is automatically isolated. This prevents the node from consuming the cluster’s resources or compromising the cluster’s functionality.
Use the fence_openstack fence agent to configure a fence device for an HA cluster on RHOSP. You can view the options for the RHOSP fence agent with the following command.

# pcs stonith describe fence_openstack
Prerequisites
- A configured HA cluster running on RHOSP
- Access to the RHOSP APIs, using the RHOSP authentication method you will use for cluster configuration, as described in Setting up an authentication method for RHOSP
- The cluster property stonith-enabled set to true, which is the default value. Red Hat does not support clusters when fencing is disabled, as it is not suitable for a production environment. Run the following command to ensure that fencing is enabled.

# pcs property config --all
Cluster Properties:
...
stonith-enabled: true
Procedure
Complete the following steps from any node in the cluster.
- Determine the UUID for each node in your cluster.
The following command displays the full list of all of the RHOSP instance names within the ha-example project along with the UUID for the cluster node associated with that RHOSP instance, under the heading ID. The node host name might not match the RHOSP instance name.

# openstack --os-cloud=ha-example server list
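For illustration, the output might look like the following sketch. The instance names are hypothetical; the UUIDs shown are the ones used in the pcmk_host_map examples later in this procedure:

| ID                                   | Name           |
| 4df08e9d-2fa6-4c04-9e66-36a6f002250e | ha-inst-node01 |
| 43ed5fe8-6cc7-4af0-8acd-a4fea293bc62 | ha-inst-node02 |
| 6d86fa7d-b31f-4f8a-895e-b3558df9decb | ha-inst-node03 |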
pcmk_host_mapparameter to map each node in the cluster to the UUID for that node. Each of the following example fence device creation commands uses a different authentication method.The following command creates a
fence_openstackfencing device for a 3-node cluster, using aclouds.yamlconfiguration file for authentication. For thecloud= parameter, specify the name of the cloud in your clouds.yaml` file.pcs stonith create fenceopenstack fence_openstack pcmk_host_map="node01:4df08e9d-2fa6-4c04-9e66-36a6f002250e;node02:43ed5fe8-6cc7-4af0-8acd-a4fea293bc62;node03:6d86fa7d-b31f-4f8a-895e-b3558df9decb" power_timeout="240" pcmk_reboot_timeout="480" pcmk_reboot_retries="4" cloud="ha-example"
# pcs stonith create fenceopenstack fence_openstack pcmk_host_map="node01:4df08e9d-2fa6-4c04-9e66-36a6f002250e;node02:43ed5fe8-6cc7-4af0-8acd-a4fea293bc62;node03:6d86fa7d-b31f-4f8a-895e-b3558df9decb" power_timeout="240" pcmk_reboot_timeout="480" pcmk_reboot_retries="4" cloud="ha-example"Copy to Clipboard Copied! Toggle word wrap Toggle overflow The following command creates a
fence_openstackfencing device, using an OpenRC environment script for authentication.pcs stonith create fenceopenstack fence_openstack pcmk_host_map="node01:4df08e9d-2fa6-4c04-9e66-36a6f002250e;node02:43ed5fe8-6cc7-4af0-8acd-a4fea293bc62;node03:6d86fa7d-b31f-4f8a-895e-b3558df9decb" power_timeout="240" pcmk_reboot_timeout="480" pcmk_reboot_retries="4" openrc="/root/openrc"
# pcs stonith create fenceopenstack fence_openstack pcmk_host_map="node01:4df08e9d-2fa6-4c04-9e66-36a6f002250e;node02:43ed5fe8-6cc7-4af0-8acd-a4fea293bc62;node03:6d86fa7d-b31f-4f8a-895e-b3558df9decb" power_timeout="240" pcmk_reboot_timeout="480" pcmk_reboot_retries="4" openrc="/root/openrc"Copy to Clipboard Copied! Toggle word wrap Toggle overflow The following command creates a
fence_openstackfencing device, using a user name and password for authentication. The authentication parameters, includingusername,password,project_name, andauth_url, are provided by the RHOSP administrator.pcs stonith create fenceopenstack fence_openstack pcmk_host_map="node01:4df08e9d-2fa6-4c04-9e66-36a6f002250e;node02:43ed5fe8-6cc7-4af0-8acd-a4fea293bc62;node03:6d86fa7d-b31f-4f8a-895e-b3558df9decb" power_timeout="240" pcmk_reboot_timeout="480" pcmk_reboot_retries="4" username="XXX" password="XXX" project_name="rhelha" auth_url="XXX" user_domain_name="Default"
# pcs stonith create fenceopenstack fence_openstack pcmk_host_map="node01:4df08e9d-2fa6-4c04-9e66-36a6f002250e;node02:43ed5fe8-6cc7-4af0-8acd-a4fea293bc62;node03:6d86fa7d-b31f-4f8a-895e-b3558df9decb" power_timeout="240" pcmk_reboot_timeout="480" pcmk_reboot_retries="4" username="XXX" password="XXX" project_name="rhelha" auth_url="XXX" user_domain_name="Default"Copy to Clipboard Copied! Toggle word wrap Toggle overflow
- To ensure immediate and complete fencing, disable ACPI Soft-Off on all cluster nodes. For information about disabling ACPI Soft-Off, see Configuring ACPI For use with integrated fence devices.
Verification
- From one node in the cluster, fence a different node in the cluster and check the cluster status. If the fenced node is offline, the fencing operation was successful.
[root@node01 ~]# pcs stonith fence node02
[root@node01 ~]# pcs status

- Restart the node that you fenced and check the status to verify that the node started.

[root@node01 ~]# pcs cluster start node02
[root@node01 ~]# pcs status
Chapter 7. Configuring HA cluster resources on Red Hat OpenStack Platform
The following table lists the RHOSP-specific resource agents you use to configure resources for an HA cluster on RHOSP.
| Resource agent | Description |
| openstack-info (configured as a clone resource) | Provides support for RHOSP-specific resource agents. You must configure an openstack-info resource to run any other RHOSP-specific resource agent, except for the fence_openstack fence agent. For information about configuring an openstack-info resource, see Configuring an openstack-info resource in an HA cluster on Red Hat OpenStack Platform (required). |
| openstack-virtual-ip | Configures a virtual IP address resource. For information about configuring an openstack-virtual-ip resource, see Configuring a virtual IP address in an HA cluster on Red Hat OpenStack Platform. |
| openstack-floating-ip | Configures a floating IP address resource. For information about configuring an openstack-floating-ip resource, see Configuring a floating IP address in an HA cluster on Red Hat OpenStack Platform. |
| openstack-cinder-volume | Configures a block storage resource. For information about configuring an openstack-cinder-volume resource, see Configuring a block storage resource in an HA cluster on Red Hat OpenStack Platform. |
When configuring other cluster resources, use the standard Pacemaker resource agents.
7.1. Configuring an openstack-info resource in an HA cluster on Red Hat OpenStack Platform (required)
You must configure an openstack-info resource in order to run any other RHOSP-specific resource agent except for the fence_openstack fence agent.
This procedure to create an openstack-info resource uses a clouds.yaml file for RHOSP authentication.
Prerequisites
- A configured HA cluster running on RHOSP
- Access to the RHOSP APIs, using the RHOSP authentication method you will use for cluster configuration, as described in Setting up an authentication method for RHOSP
Procedure
Complete the following steps from any node in the cluster.
- To view the options for the openstack-info resource agent, run the following command.

# pcs resource describe openstack-info

- Create the openstack-info resource as a clone resource. In this example, the resource is also named openstack-info. This example uses a clouds.yaml configuration file, and the cloud= parameter is set to the name of the cloud in your clouds.yaml file.

# pcs resource create openstack-info openstack-info cloud="ha-example" clone

- Check the cluster status to verify that the resource is running.

# pcs status
7.2. Configuring a virtual IP address in an HA cluster on Red Hat OpenStack Platform
This procedure to create an RHOSP virtual IP address resource for an HA cluster on an RHOSP platform uses a clouds.yaml file for RHOSP authentication.
The RHOSP virtual IP resource operates in conjunction with an IPaddr2 cluster resource. When you configure an RHOSP virtual IP address resource, the resource agent ensures that the RHOSP infrastructure associates the virtual IP address with a cluster node on the network. This allows an IPaddr2 resource to function on that node.
Prerequisites
- A configured HA cluster running on RHOSP
- An assigned IP address to use as the virtual IP address
- Access to the RHOSP APIs, using the RHOSP authentication method you will use for cluster configuration, as described in Setting up an authentication method for RHOSP
Procedure
Complete the following steps from any node in the cluster.
- To view the options for the openstack-virtual-ip resource agent, run the following command.

# pcs resource describe openstack-virtual-ip

- Run the following command to determine the subnet ID for the virtual IP address you are using. In this example, the virtual IP address is 172.16.0.119.
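One way to do this, as a sketch that assumes the ha-example cloud from your clouds.yaml file, is to list the subnets and choose the one whose range contains 172.16.0.119; the subnet ID used in the next step appears in the ID column of the output:

$ openstack --os-cloud=ha-example subnet list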
- Create the RHOSP virtual IP address resource.
The following command creates an RHOSP virtual IP address resource for an IP address of 172.16.0.119, specifying the subnet ID you determined in the previous step.
# pcs resource create ClusterIP-osp ocf:heartbeat:openstack-virtual-ip cloud=ha-example ip=172.16.0.119 subnet_id=723c5a77-156d-4c3b-b53c-ee73a4f75185

- Configure ordering and location constraints:
- Ensure that the openstack-info resource starts before the virtual IP address resource.
- Ensure that the virtual IP address resource runs on the same node as the openstack-info resource.

# pcs constraint order start openstack-info-clone then ClusterIP-osp
Adding openstack-info-clone ClusterIP-osp (kind: Mandatory) (Options: first-action=start then-action=start)
# pcs constraint colocation add ClusterIP-osp with openstack-info-clone score=INFINITY

- Create an IPaddr2 resource for the virtual IP address.

# pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=172.16.0.119

- Configure ordering and location constraints to ensure that the openstack-virtual-ip resource starts before the IPaddr2 resource and that the IPaddr2 resource runs on the same node as the openstack-virtual-ip resource.

# pcs constraint order start ClusterIP-osp then ClusterIP
Adding ClusterIP-osp ClusterIP (kind: Mandatory) (Options: first-action=start then-action=start)
# pcs constraint colocation add ClusterIP with ClusterIP-osp
Verification
- Verify the resource constraint configuration.

# pcs constraint config

- Check the cluster status to verify that the resources are running.

# pcs status
7.3. Configuring a floating IP address in an HA cluster on Red Hat OpenStack Platform
The following procedure creates a floating IP address resource for an HA cluster on RHOSP. This procedure uses a clouds.yaml file for RHOSP authentication.
Prerequisites
- A configured HA cluster running on RHOSP
- An IP address on the public network to use as the floating IP address, assigned by the RHOSP administrator
- Access to the RHOSP APIs, using the RHOSP authentication method you will use for cluster configuration, as described in Setting up an authentication method for RHOSP
Procedure
Complete the following steps from any node in the cluster.
- To view the options for the openstack-floating-ip resource agent, run the following command.

# pcs resource describe openstack-floating-ip

- Find the subnet ID for the address on the public network that you will use to create the floating IP address resource.
The public network is usually the network with the default gateway. Run the following command to display the default gateway address.
# route -n | grep ^0.0.0.0 | awk '{print $2}'
172.16.0.1

Run the following command to find the subnet ID for the public network. This command generates a table with ID and Subnet headings.
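A sketch, again assuming the ha-example cloud from your clouds.yaml file; pick the row for the public network's subnet:

$ openstack --os-cloud=ha-example subnet list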
- Create the floating IP address resource, specifying the public IP address for the resource and the subnet ID for that address. When you configure the floating IP address resource, the resource agent configures a virtual IP address on the public network and associates it with a cluster node.
# pcs resource create float-ip openstack-floating-ip cloud="ha-example" ip_id="10.19.227.211" subnet_id="723c5a77-156d-4c3b-b53c-ee73a4f75185"

- Configure an ordering constraint to ensure that the openstack-info resource starts before the floating IP address resource.

# pcs constraint order start openstack-info-clone then float-ip
Adding openstack-info-clone float-ip (kind: Mandatory) (Options: first-action=start then-action=start)

- Configure a location constraint to ensure that the floating IP address resource runs on the same node as the openstack-info resource.

# pcs constraint colocation add float-ip with openstack-info-clone score=INFINITY
Verification
- Verify the resource constraint configuration.

# pcs constraint config

- Check the cluster status to verify that the resources are running.

# pcs status
7.4. Configuring a block storage resource in an HA cluster on Red Hat OpenStack Platform
The following procedure creates a block storage resource for an HA cluster on RHOSP. This procedure uses a clouds.yaml file for RHOSP authentication.
Prerequisites
- A configured HA cluster running on RHOSP
- A block storage volume created by the RHOSP administrator
- Access to the RHOSP APIs, using the RHOSP authentication method you will use for cluster configuration, as described in Setting up an authentication method for RHOSP
Procedure
Complete the following steps from any node in the cluster.
- To view the options for the openstack-cinder-volume resource agent, run the following command.

# pcs resource describe openstack-cinder-volume

- Determine the volume ID of the block storage volume you are configuring as a cluster resource.
Run the following command to display a table of available volumes that includes the UUID and name of each volume.
# openstack --os-cloud=ha-example volume list
| ID                                   | Name                        |
| 23f67c9f-b530-4d44-8ce5-ad5d056ba926 | testvolume-cinder-data-disk |

If you already know the volume name, you can run the following command, specifying the volume you are configuring. This displays a table with an ID field.
# openstack --os-cloud=ha-example volume show testvolume-cinder-data-disk

- Create the block storage resource, specifying the ID for the volume.
# pcs resource create cinder-vol openstack-cinder-volume volume_id="23f67c9f-b530-4d44-8ce5-ad5d056ba926" cloud="ha-example"

- Configure an ordering constraint to ensure that the openstack-info resource starts before the block storage resource.

# pcs constraint order start openstack-info-clone then cinder-vol
Adding openstack-info-clone cinder-vol (kind: Mandatory) (Options: first-action=start then-action=start)

- Configure a location constraint to ensure that the block storage resource runs on the same node as the openstack-info resource.

# pcs constraint colocation add cinder-vol with openstack-info-clone score=INFINITY
Verification
- Verify the resource constraint configuration.

# pcs constraint config

- Check the cluster status to verify that the resource is running.

# pcs status