Chapter 20. Replacing Networker nodes
In certain circumstances, a Red Hat OpenStack Platform (RHOSP) node with a Networker profile in a high availability cluster might fail. (For more information, see Tagging nodes into profiles in the Director Installation and Usage guide.) In these situations, you must remove the node from the cluster and replace it with a new Networker node that runs the Networking service (neutron) agents.
The topics in this section are:
20.1. Preparing to replace network nodes
Replacing a Networker node on a Red Hat OpenStack Platform (RHOSP) overcloud requires that you perform several preparation steps. Completing all of the required preparation steps helps you to avoid complications during the Networker node replacement process.
Prerequisites
- Your RHOSP deployment is highly available with three or more Networker nodes.
Procedure
- Log in to your undercloud as the stack user.
- Source the undercloud credentials file:

      $ source ~/stackrc

- Check the current status of the overcloud stack on the undercloud:

      $ openstack stack list --nested

  The overcloud stack and its subsequent child stacks should have a status of either CREATE_COMPLETE or UPDATE_COMPLETE.

- Ensure that you have a recent backup image of the undercloud node by running the Relax-and-Recover tool. For more information, see the Backing up and restoring the undercloud and control plane nodes guide.
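The status check above can also be scripted. The following sketch flags any stack that is not in a healthy state; the statuses are inlined for illustration, and on a real undercloud you would substitute the output of `openstack stack list --nested -f value -c "Stack Status"`:

```shell
# Hypothetical health check: every nested stack must be CREATE_COMPLETE
# or UPDATE_COMPLETE. The statuses below are inlined for illustration; on
# a real undercloud, pipe in:
#   openstack stack list --nested -f value -c "Stack Status"
statuses='CREATE_COMPLETE
UPDATE_COMPLETE
UPDATE_COMPLETE'

# Count lines that are neither CREATE_COMPLETE nor UPDATE_COMPLETE.
bad=$(printf '%s\n' "$statuses" | grep -cvE '^(CREATE|UPDATE)_COMPLETE$' || true)
if [ "$bad" -eq 0 ]; then
    echo "overcloud stacks healthy"
else
    echo "found $bad stack(s) in an unexpected state"
fi
```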
- Log in to a Controller node as root.
- Open an interactive bash shell on the container and check the status of the Galera cluster:

      # pcs status

  Ensure that the Controller nodes are in Master mode.

  Sample output

      * Container bundle set: galera-bundle [cluster.common.tag/rhosp16-openstack-mariadb:pcmklatest]:
        * galera-bundle-0 (ocf::heartbeat:galera): Master controller-0
        * galera-bundle-1 (ocf::heartbeat:galera): Master controller-1
        * galera-bundle-2 (ocf::heartbeat:galera): Master controller-2
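As a quick sanity check, the pcs status excerpt above can be parsed to confirm that all three Galera replicas report Master. A minimal sketch, with the sample output inlined:

```shell
# Sample excerpt from `pcs status` (inlined for illustration; on a real
# Controller node, parse the live command output instead).
pcs_excerpt='* galera-bundle-0 (ocf::heartbeat:galera): Master controller-0
* galera-bundle-1 (ocf::heartbeat:galera): Master controller-1
* galera-bundle-2 (ocf::heartbeat:galera): Master controller-2'

# Count replicas in Master mode; a healthy three-node cluster reports 3.
masters=$(printf '%s\n' "$pcs_excerpt" | grep -c 'galera): Master')
echo "Galera replicas in Master mode: $masters"
```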
- Log on to the RHOSP director node and check the nova-compute service:

      $ sudo systemctl status tripleo_nova_compute
      $ openstack baremetal node list

  The output should show all non-maintenance mode nodes as up.

- Make sure all undercloud services are running:

      $ sudo systemctl -t service
20.2. Replacing a Networker node
In certain circumstances, a Red Hat OpenStack Platform (RHOSP) node with a Networker profile in a high availability cluster might fail. Replacing a Networker node requires running the openstack overcloud deploy command to update the overcloud with the new node.
Prerequisites
- Your RHOSP deployment is highly available with three or more Networker nodes.
- The node that you add must be able to connect to the other nodes in the cluster over the network.
- You have performed the steps described in Section 20.1, “Preparing to replace network nodes”.
Procedure
- Log in to your undercloud as the stack user, and source the undercloud credentials file:

  Example

      $ source ~/stackrc

- Identify the index of the node to remove:

      $ openstack baremetal node list -c UUID -c Name -c "Instance UUID"
- Set the node into maintenance mode by using the baremetal node maintenance set command.

  Example

      $ openstack baremetal node maintenance set e6499ef7-3db2-4ab4-bfa7-ef59539bf972
- Create a JSON file to add the new node to the node pool that contains RHOSP director. For more information, see Adding nodes to the overcloud in the Director Installation and Usage guide.
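A minimal sketch of what the JSON file might contain, following the node-registration format described in the Director Installation and Usage guide. Every value below (node name, MAC address, IPMI credentials and address, hardware sizes) is a placeholder assumption, not output from this procedure:

```shell
# Write an illustrative newnode.json; every value below is a placeholder
# and must be replaced with the real details of the new node.
cat > newnode.json <<'EOF'
{
    "nodes": [
        {
            "name": "overcloud-networker-3",
            "ports": [
                {"address": "dd:dd:dd:dd:dd:dd"}
            ],
            "cpu": "4",
            "memory": "6144",
            "disk": "40",
            "arch": "x86_64",
            "pm_type": "ipmi",
            "pm_user": "admin",
            "pm_password": "p@55w0rd!",
            "pm_addr": "192.168.24.207"
        }
    ]
}
EOF
```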
- Run the openstack overcloud node import command to register the new node.

  Example

      $ openstack overcloud node import newnode.json

- After registering the new node, launch the introspection process:

      $ openstack baremetal node manage <node>
      $ openstack overcloud node introspect <node> --provide
- Tag the new node with the Networker profile by using the openstack baremetal node set command.

  Example

      $ openstack baremetal node set --property \
        capabilities='profile:networker,boot_option:local' \
        91eb9ac5-7d52-453c-a017-c0e3d823efd0
- Create a ~/templates/remove-networker.yaml environment file that defines the index of the node that you intend to remove:

  Example

      parameters:
        NetworkerRemovalPolicies:
          [{'resource_list': ['1']}]
- Create a ~/templates/node-count-networker.yaml environment file and set the total count of Networker nodes in the file.

  Example

      parameter_defaults:
        OvercloudNetworkerFlavor: networker
        NetworkerCount: 3
- Run the openstack overcloud deploy command and include the core heat templates, environment files, and the environment files that you modified.

  Important: The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence.

      $ openstack overcloud deploy --templates \
        -e <your_environment_files> \
        -e /home/stack/templates/node-count-networker.yaml \
        -e /home/stack/templates/remove-networker.yaml

  RHOSP director removes the old Networker node, creates a new one, and updates the overcloud stack.
Verification
- Check the status of the overcloud stack:

      $ openstack stack list --nested

- Verify that the new Networker node is listed, and that the old one is removed:

      $ openstack server list -c ID -c Name -c Status
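This verification can also be scripted. A sketch with an illustrative node list inlined; the server names below are assumptions, and on a real overcloud you would substitute the output of `openstack server list -c Name -f value`:

```shell
# Illustrative server names; on a real overcloud substitute:
#   openstack server list -c Name -f value
# (the names below are assumptions, not output from this procedure)
names='overcloud-controller-0
overcloud-networker-0
overcloud-networker-3
overcloud-compute-0'

# The replaced node (overcloud-networker-1 in this chapter) must be absent.
old_present=$(printf '%s\n' "$names" | grep -c '^overcloud-networker-1$' || true)
if [ "$old_present" -eq 0 ]; then
    echo "old Networker node removed"
else
    echo "old Networker node still present"
fi
```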
20.3. Rescheduling nodes and cleaning up the Networking service
As part of replacing a Red Hat OpenStack Platform (RHOSP) Networker node, remove all Networking service agents on the removed node from the database. Doing so ensures that the Networking service does not identify the agents as out-of-service ("dead"). For ML2/OVS deployments, removing the agents from the removed node enables the DHCP resources to be automatically rescheduled to other Networker nodes.
Prerequisites
- Your RHOSP deployment is highly available with three or more Networker nodes.
Procedure
- Log in to your undercloud as the stack user.
- Source the overcloud credentials file:

  Example

      $ source ~/overcloudrc

- Verify that the RHOSP Networking service processes exist, and are marked out-of-service (xxx) for overcloud-networker-1:

      $ openstack network agent list -c ID -c Binary -c Host -c Alive | grep overcloud-networker-1
  Sample output for ML2/OVN

      +--------------------------------------+-----------------------+-------+----------------+
      | ID                                   | Host                  | Alive | Binary         |
      +--------------------------------------+-----------------------+-------+----------------+
      | 26316f47-4a30-4baf-ba00-d33c9a9e0844 | overcloud-networker-1 | xxx   | ovn-controller |
      +--------------------------------------+-----------------------+-------+----------------+
- Capture the UUIDs of the agents registered for overcloud-networker-1:

      $ AGENT_UUIDS=$(openstack network agent list -c ID -c Host -c Alive -c Binary -f value | grep overcloud-networker-1 | cut -d' ' -f1)

- Delete any remaining overcloud-networker-1 agents from the database:

      $ for agent in $AGENT_UUIDS; do neutron agent-delete $agent ; done

  Sample output

      Deleted agent(s): 26316f47-4a30-4baf-ba00-d33c9a9e0844
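The AGENT_UUIDS capture above relies on the `-f value` output being space-separated with the agent ID in the first column. A self-contained illustration of the same extraction, using the ML2/OVN sample row from this section:

```shell
# One line of `openstack network agent list ... -f value` output,
# taken from the ML2/OVN sample in this section.
sample='26316f47-4a30-4baf-ba00-d33c9a9e0844 overcloud-networker-1 xxx ovn-controller'

# Same extraction as in the procedure: keep rows for the removed node,
# then take the first space-separated field (the agent UUID).
AGENT_UUIDS=$(printf '%s\n' "$sample" | grep overcloud-networker-1 | cut -d' ' -f1)
echo "$AGENT_UUIDS"
```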