Chapter 10. Replacing the primary Gluster storage node
When self-signed encryption is enabled, replacing a node is a disruptive process that requires virtual machines and the Hosted Engine to be shut down.
- (Optional) If encryption using a Certificate Authority is enabled, follow the steps at the following link before continuing: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html/administration_guide/ch22s04.
Move the node to be replaced into Maintenance mode
- In Red Hat Virtualization Manager, click the Hosts tab and select the Red Hat Gluster Storage node in the results list.
- Click Maintenance to open the Maintenance Host(s) confirmation window.
- Click OK to move the host to Maintenance mode.
Install the replacement node
Follow the instructions in the following sections of Deploying Red Hat Enterprise Linux based RHHI to install the physical machine and configure storage on the new node.
- Installing host physical machines
- Configuring Public Key based SSH Authentication without a password
- Configuring RHGS for Hosted Engine using the Cockpit UI
Prepare the replacement node
- Create a file called replace_node_prep.conf based on the template provided in Section B.2, “Example gdeploy configuration file for preparing a replacement host”.
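The authoritative template is in Section B.2; the following is only a rough, hypothetical illustration of the general shape of such a file. The host address, device name, and brick paths are invented placeholders for illustration, not values from this guide:
[hosts]
<new_node_IP>

[backend-setup]
devices=sdb
vgs=gluster_vg_sdb
mountpoints=/gluster_bricks/engine,/gluster_bricks/data,/gluster_bricks/vmstore
brick_dirs=/gluster_bricks/engine/engine,/gluster_bricks/data/data,/gluster_bricks/vmstore/vmstore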
- From a node with gdeploy installed (usually the node that hosts the Hosted Engine), run gdeploy using the new configuration file:
# gdeploy -c replace_node_prep.conf
(Optional) If encryption with self-signed certificates is enabled
- Generate the private key and self-signed certificate on the replacement node. See the Red Hat Gluster Storage Administration Guide for details: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html/administration_guide/chap-network_encryption#chap-Network_Encryption-Prereqs.
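The linked guide is authoritative; as a rough sketch only, generating the key and certificate on the replacement node usually looks like the following, where the Common Name is an assumption and must match the naming scheme (IP address or FQDN) used by the existing certificates:
# openssl genrsa -out /etc/ssl/glusterfs.key 2048
# openssl req -new -x509 -key /etc/ssl/glusterfs.key -subj "/CN=<new_node>" -days 365 -out /etc/ssl/glusterfs.pem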
- On a healthy node, make a backup copy of the /etc/ssl/glusterfs.ca file:
# cp /etc/ssl/glusterfs.ca /etc/ssl/glusterfs.ca.bk
- Append the new node’s certificate to the content of the /etc/ssl/glusterfs.ca file.
- Distribute the /etc/ssl/glusterfs.ca file to all nodes in the cluster, including the new node.
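How you copy the file is up to you; for example, assuming passwordless SSH is already configured between the nodes, you could run the following from the node holding the updated file, once per node:
# scp /etc/ssl/glusterfs.ca <node>:/etc/ssl/glusterfs.ca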
- Run the following command on the replacement node to enable management encryption:
# touch /var/lib/glusterd/secure-access
- Include the new server in the value of the auth.ssl-allow volume option by running the following command for each volume:
# gluster volume set <volname> auth.ssl-allow "<old_node1>,<old_node2>,<new_node>"
- Restart the glusterd service on all nodes:
# systemctl restart glusterd
- Follow the steps in Section 4.1, “Configuring TLS/SSL using self-signed certificates” to remount all gluster processes.
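Optionally, after the restart, you can confirm that the option now includes the new node on each volume; gluster volume get prints the current value of a single volume option:
# gluster volume get <volname> auth.ssl-allow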
Add the replacement node to the cluster
Run the following command from any node already in the cluster.
# gluster peer probe <new_node>
Move the Hosted Engine into Maintenance mode:
# hosted-engine --set-maintenance --mode=global
Stop the ovirt-engine service
# systemctl stop ovirt-engine
Update the database
# sudo -u postgres psql
\c engine
UPDATE storage_server_connections SET connection ='<replacement_node_IP>:/engine' WHERE connection = '<old_server_IP>:/engine';
UPDATE storage_server_connections SET connection ='<replacement_node_IP>:/vmstore' WHERE connection = '<old_server_IP>:/vmstore';
UPDATE storage_server_connections SET connection ='<replacement_node_IP>:/data' WHERE connection = '<old_server_IP>:/data';
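Before quitting psql, you can optionally confirm that the connection entries now point at the replacement node:
SELECT connection FROM storage_server_connections;
\q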
Start the ovirt-engine service
# systemctl start ovirt-engine
- Stop all virtual machines except the Hosted Engine.
- Move all storage domains except the Hosted Engine domain into Maintenance mode.
Stop the Hosted Engine virtual machine
Run the following command on the existing node that hosts the Hosted Engine.
# hosted-engine --vm-shutdown
Stop high availability services on all nodes
# systemctl stop ovirt-ha-agent
# systemctl stop ovirt-ha-broker
Disconnect Hosted Engine storage from the virtualization host
Run the following command on the existing node that hosts the Hosted Engine.
# hosted-engine --disconnect-storage
Update the Hosted Engine configuration file
Edit the storage parameter in the /etc/ovirt-hosted-engine/hosted-engine.conf file to use the replacement server:
storage=<replacement_server_IP>:/engine
Reboot the existing and replacement nodes
Wait until both nodes are available before continuing.
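There is no single required check here; one hypothetical way to confirm from another machine that a rebooted node is reachable and that glusterd came back up is:
# ping -c 3 <node_IP>
# ssh root@<node_IP> systemctl is-active glusterd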
Take the Hosted Engine out of Maintenance mode
# hosted-engine --set-maintenance --mode=none
Verify replacement node is used
On all virtualization hosts, verify that the engine volume is mounted from the replacement node by checking the IP address in the output of the mount command.
Activate storage domains
Verify that storage domains mount using the IP address of the replacement node.
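One way to perform these mount checks is to filter the mount output and confirm that the server address shown for each gluster volume is the replacement node, for example:
# mount | grep glusterfs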
Remove the old node
- Using the RHV Management UI, remove the old node.
- Detach the old host from the cluster:
# gluster peer detach <old_node_IP> force
Using the RHV Management UI, add the replacement node
Specify that the replacement node be used to host the Hosted Engine.
Move the replacement node into Maintenance mode.
# hosted-engine --set-maintenance --mode=global
Update the Hosted Engine configuration file
Edit the storage parameter in the /etc/ovirt-hosted-engine/hosted-engine.conf file to use the replacement node:
storage=<replacement_node_IP>:/engine
Reboot the replacement node.
Wait until the node is back online before continuing.
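One optional way to confirm that the rebooted node has rejoined the trusted storage pool is to check the peer status from another node:
# gluster peer status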
Activate the replacement node from the RHV Management UI.
Ensure that all volumes are mounted using the IP address of the replacement node.
Replace engine volume brick
Replace the brick on the old node that belongs to the engine volume with a new brick on the replacement node.
- Click the Volumes tab.
- Click the Bricks subtab.
- Select the brick to replace, and then click Replace brick.
- Select the node that hosts the brick being replaced.
- In the Replace brick window, provide the new brick’s path.
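The steps above use the RHV Management UI; on the Gluster side, the equivalent operation is a replace-brick. As a hypothetical example for the engine volume, with placeholder brick paths:
# gluster volume replace-brick engine <old_node>:/gluster_bricks/engine/engine <new_node>:/gluster_bricks/engine/engine commit force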
On the replacement node, run the following command to remove metadata from the previous host.
# hosted-engine --clean-metadata --host-id=<old_host_id> --force-clean
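Once the high availability services are running again, an optional final check is to list the hosts known to the Hosted Engine and confirm that the old host ID no longer appears:
# hosted-engine --vm-status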