Appendix B. Manual Procedure Automated by Ansible Playbooks


The Ansible-based solution provided by this document is designed to automate a manual procedure for configuring Instance HA in a supported manner. For reference, this appendix provides the steps automated by the solution.

  1. Stop and disable the Compute service on each Compute node:

    heat-admin@compute-n $ sudo systemctl stop openstack-nova-compute
    heat-admin@compute-n $ sudo systemctl disable openstack-nova-compute
  2. Create an authentication key for use with pacemaker-remote. Perform this step on the director node:

    stack@director # dd if=/dev/urandom of=~/authkey bs=4096 count=1
  3. Copy this key to the Compute and Controller nodes:

    stack@director # scp authkey heat-admin@node-n:~/
    stack@director # ssh heat-admin@node-n
    heat-admin@node-n $ sudo mkdir -p --mode=0750 /etc/pacemaker
    heat-admin@node-n $ sudo chgrp haclient /etc/pacemaker
    heat-admin@node-n $ sudo mv authkey /etc/pacemaker/
    heat-admin@node-n $ sudo chown root:haclient /etc/pacemaker/authkey
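
    To confirm that the key is in place on a given node, you can check the ownership and permissions; from the commands above, the /etc/pacemaker directory should be root:haclient with mode 0750, and the authkey file should be owned by root:haclient:

    heat-admin@node-n $ ls -ld /etc/pacemaker /etc/pacemaker/authkey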
  4. On each Compute node, enable the pacemaker-remote service and configure the firewall accordingly:

    heat-admin@compute-n $ sudo systemctl enable pacemaker_remote
    heat-admin@compute-n $ sudo systemctl start pacemaker_remote
    heat-admin@compute-n $ sudo iptables -I INPUT 11 -p tcp --dport 3121 -j ACCEPT ; sudo /sbin/service iptables save
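
    As an optional check, confirm that pacemaker_remote is listening on TCP port 3121 and that the firewall rule was added:

    heat-admin@compute-n $ sudo ss -tlnp | grep 3121
    heat-admin@compute-n $ sudo iptables -L INPUT -n | grep 3121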
  5. Confirm that the required versions of the pacemaker (1.1.12-22.el7_1.4.x86_64) and resource-agents (3.9.5-40.el7_1.5.x86_64) packages are installed on the Controller and Compute nodes:

    heat-admin@controller-n $ sudo rpm -qa | egrep '(pacemaker|resource-agents)'
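
    Alternatively, querying the packages directly reports an error for any package that is not installed:

    heat-admin@controller-n $ sudo rpm -q pacemaker resource-agents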
  6. Create a NovaEvacuate active/passive resource using the overcloudrc file to provide the auth_url, username, tenant and password values:

    stack@director # scp overcloudrc heat-admin@controller-1:~/
    heat-admin@controller-1 $ . ~/overcloudrc
    heat-admin@controller-1 $ sudo pcs resource create nova-evacuate ocf:openstack:NovaEvacuate auth_url=$OS_AUTH_URL username=$OS_USERNAME password=$OS_PASSWORD tenant_name=$OS_TENANT_NAME
    Note

    If you are not using shared storage, include the no_shared_storage=1 option. See Section 2.1, “Exceptions for Shared Storage” for more information.

    Important

    As mentioned earlier in Chapter 2, Environment Requirements and Assumptions, the $OS_AUTH_URL must be reachable by each Compute node. Set this environment variable to either the overcloud’s authentication service or the internal authentication URL.
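
    After the resource is created, you can optionally confirm that it is running on one of the Controller nodes:

    heat-admin@controller-1 $ sudo pcs status | grep nova-evacuate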

  7. Confirm that nova-evacuate is started after the floating IP resources and the Image Service (glance), OpenStack Networking (neutron), and Compute (nova) services:

    heat-admin@controller-1 $ for i in $(sudo pcs status | grep IP | awk '{ print $1 }'); do sudo pcs constraint order start $i then nova-evacuate ; done
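
    You can optionally review the resulting order constraints:

    heat-admin@controller-1 $ sudo pcs constraint order show | grep nova-evacuate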
  8. Create a list of the current Controllers using cibadmin data:

    heat-admin@controller-1 $ controllers=$(sudo cibadmin -Q -o nodes | grep uname | sed s/.*uname..// | awk -F\" '{print $1}')
    heat-admin@controller-1 $ echo $controllers
  9. Use this list to tag these nodes as Controllers with the osprole=controller property:

    heat-admin@controller-1 $ for controller in ${controllers}; do sudo pcs property set --node ${controller} osprole=controller ; done
    heat-admin@controller-1 $ sudo pcs property

    The newly assigned roles should be visible under the Node attributes section.

  10. Build a list of stonith devices already present in the environment:

    heat-admin@controller-1 $ stonithdevs=$(sudo pcs stonith | awk '{print $1}')
    heat-admin@controller-1 $ echo $stonithdevs
  11. Tag the control plane services to make sure they only run on the Controllers identified above, skipping any stonith devices listed:

    heat-admin@controller-1 $ for i in $(sudo cibadmin -Q --xpath //primitive --node-path | tr ' ' '\n' | awk -F "id='" '{print $2}' | awk -F "'" '{print $1}' | uniq); do
        found=0
        if [ -n "$stonithdevs" ]; then
            for x in $stonithdevs; do
                if [ $x = $i ]; then
                    found=1
                fi
            done
        fi
        if [ $found = 0 ]; then
            sudo pcs constraint location $i rule resource-discovery=exclusive score=0 osprole eq controller
        fi
    done
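
    To confirm the result, you can optionally list the location constraints; each tagged resource should now show a rule matching osprole eq controller:

    heat-admin@controller-1 $ sudo pcs constraint location show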
  12. Populate the nova-compute resources within pacemaker:

    heat-admin@controller-1 $ . /home/heat-admin/overcloudrc
    heat-admin@controller-1 $ sudo pcs resource create nova-compute-checkevacuate ocf:openstack:nova-compute-wait auth_url=$OS_AUTH_URL username=$OS_USERNAME password=$OS_PASSWORD tenant_name=$OS_TENANT_NAME domain=localdomain op start timeout=300 --clone interleave=true --disabled --force
    Note

    This command assumes that you are using the default cloud domain name localdomain. If you are using a custom cloud domain name, set it as the value of the domain= parameter.

    Important

    As mentioned earlier in Chapter 2, Environment Requirements and Assumptions, the $OS_AUTH_URL must be reachable by each Compute node. Set this environment variable to either the overcloud’s authentication service or the internal authentication URL.

    heat-admin@controller-1 $ sudo pcs constraint location nova-compute-checkevacuate-clone rule resource-discovery=exclusive score=0 osprole eq compute
    heat-admin@controller-1 $ sudo pcs resource create nova-compute systemd:openstack-nova-compute op start timeout=60s --clone interleave=true --disabled --force
    heat-admin@controller-1 $ sudo pcs constraint location nova-compute-clone rule resource-discovery=exclusive score=0 osprole eq compute
    heat-admin@controller-1 $ sudo pcs constraint order start nova-compute-checkevacuate-clone then nova-compute-clone require-all=true
    heat-admin@controller-1 $ sudo pcs constraint order start nova-compute-clone then nova-evacuate require-all=false
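
    Both clone resources are created in a disabled state; you can optionally confirm that they exist before continuing:

    heat-admin@controller-1 $ sudo pcs resource show nova-compute-checkevacuate-clone
    heat-admin@controller-1 $ sudo pcs resource show nova-compute-clone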
  13. Add stonith devices for the Compute nodes. Run the following command for each Compute node:

    heat-admin@controller-1 $ sudo pcs stonith create ipmilan-overcloud-compute-N fence_ipmilan pcmk_host_list=overcloud-compute-N ipaddr=10.35.160.78 login=IPMILANUSER passwd=IPMILANPW lanplus=1 cipher=1 op monitor interval=60s

    Where:

    • N is the identifying number of each Compute node (for example, ipmilan-overcloud-compute-1, ipmilan-overcloud-compute-2, and so on).
    • ipaddr is the IP address of the node’s IPMI device (10.35.160.78 above is an example value).
    • IPMILANUSER and IPMILANPW are the username and password of the IPMI device.
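
    If your Compute nodes follow this uniform naming scheme, the per-node command can be wrapped in a loop. The following is a minimal sketch; the IPMI_ADDRS map, its node indices, and the credentials are illustrative placeholders for your own values:

    heat-admin@controller-1 $ declare -A IPMI_ADDRS=( [0]=10.35.160.78 [1]=10.35.160.79 )   # hypothetical node-to-IPMI address map
    heat-admin@controller-1 $ for N in "${!IPMI_ADDRS[@]}"; do
        # Create one fence device per Compute node, matching the naming above
        sudo pcs stonith create ipmilan-overcloud-compute-$N fence_ipmilan \
            pcmk_host_list=overcloud-compute-$N ipaddr=${IPMI_ADDRS[$N]} \
            login=IPMILANUSER passwd=IPMILANPW lanplus=1 cipher=1 \
            op monitor interval=60s
    done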
  14. Create a separate fence-nova stonith device:

    heat-admin@controller-1 $ . overcloudrc
    heat-admin@controller-1 $ sudo pcs stonith create fence-nova fence_compute \
                                    auth-url=$OS_AUTH_URL \
                                    login=$OS_USERNAME \
                                    passwd=$OS_PASSWORD \
                                    tenant-name=$OS_TENANT_NAME \
                                    domain=localdomain \
                                    record-only=1 --force
    Note

    This command assumes that you are using the default cloud domain name localdomain. If you are using a custom cloud domain name, set it as the value of the domain= parameter.

    If you are not using shared storage, include the no_shared_storage=1 option. See Section 2.1, “Exceptions for Shared Storage” for more information.
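
    You can optionally confirm the new device and its settings:

    heat-admin@controller-1 $ sudo pcs stonith show fence-nova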

  15. Configure the required constraints for fence-nova:

    heat-admin@controller-1 $ sudo pcs constraint location fence-nova rule resource-discovery=never score=0 osprole eq controller
    heat-admin@controller-1 $ sudo pcs constraint order promote galera-master then fence-nova require-all=false
    heat-admin@controller-1 $ sudo pcs constraint order start fence-nova then nova-compute-clone
  16. Make certain the Compute nodes are able to recover after fencing:

    heat-admin@controller-1 $ sudo pcs property set cluster-recheck-interval=1min
  17. Create Compute node resources and set the stonith level 1 to include both the node’s physical fence device and fence-nova. To do so, run the following commands for each Compute node:

    heat-admin@controller-1 $ sudo pcs resource create overcloud-compute-N ocf:pacemaker:remote reconnect_interval=60 op monitor interval=20
    heat-admin@controller-1 $ sudo pcs property set --node overcloud-compute-N osprole=compute
    heat-admin@controller-1 $ sudo pcs stonith level add 1 overcloud-compute-N ipmilan-overcloud-compute-N,fence-nova
    heat-admin@controller-1 $ sudo pcs stonith

    Replace N with the identifying number of each Compute node (for example, overcloud-compute-1, overcloud-compute-2, and so on). Use these identifying numbers to match each Compute node with the stonith device created earlier (for example, overcloud-compute-1 and ipmilan-overcloud-compute-1).
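
    As a sketch, assuming two Compute nodes numbered 0 and 1, the three commands above can be wrapped in a single loop; adjust the indices to match your environment:

    heat-admin@controller-1 $ for N in 0 1; do   # illustrative node indices
        sudo pcs resource create overcloud-compute-$N ocf:pacemaker:remote reconnect_interval=60 op monitor interval=20
        sudo pcs property set --node overcloud-compute-$N osprole=compute
        sudo pcs stonith level add 1 overcloud-compute-$N ipmilan-overcloud-compute-$N,fence-nova
    done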

Allow some time for the environment to settle before cleaning up any failed resources:

heat-admin@controller-1 $ sleep 60
heat-admin@controller-1 $ sudo pcs resource cleanup
heat-admin@controller-1 $ sudo pcs status
heat-admin@controller-1 $ sudo pcs property set stonith-enabled=true