Chapter 10. Replacing the primary Gluster storage node
When self-signed encryption is enabled, replacing a node is a disruptive process that requires virtual machines and the Hosted Engine to be shut down.
- (Optional) If encryption using a Certificate Authority is enabled, follow the steps at the following link before continuing: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html/administration_guide/ch22s04.
Move the node to be replaced into Maintenance mode
- In Red Hat Virtualization Manager, click the Hosts tab and select the Red Hat Gluster Storage node in the results list.
- Click Maintenance to open the Maintenance Host(s) confirmation window.
- Click OK to move the host to Maintenance mode.
Install the replacement node
Follow the instructions in the following sections of Deploying Red Hat Enterprise Linux based RHHI to install the physical machine and configure storage on the new node.
- Installing host physical machines
- Configuring Public Key based SSH Authentication without a password
- Configuring RHGS for Hosted Engine using the Cockpit UI
Prepare the replacement node
- Create a file called replace_node_prep.conf based on the template provided in Section B.2, “Example gdeploy configuration file for preparing a replacement host”.
- From a node with gdeploy installed (usually the node that hosts the Hosted Engine), run gdeploy using the new configuration file:
# gdeploy -c replace_node_prep.conf
(Optional) If encryption with self-signed certificates is enabled
- Generate the private key and self-signed certificate on the replacement node. See the Red Hat Gluster Storage Administration Guide for details: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html/administration_guide/chap-network_encryption#chap-Network_Encryption-Prereqs.
- On a healthy node, make a backup copy of the /etc/ssl/glusterfs.ca file:
# cp /etc/ssl/glusterfs.ca /etc/ssl/glusterfs.ca.bk
- Append the new node’s certificate to the content of the /etc/ssl/glusterfs.ca file.
- Distribute the /etc/ssl/glusterfs.ca file to all nodes in the cluster, including the new node.
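The backup-and-append steps above can be rehearsed safely in a scratch directory before touching /etc/ssl; the file names below are stand-ins for the real CA bundle and the new node’s certificate:

```shell
# Rehearsal in a temp dir: back up the CA bundle, then append the new cert.
# (Stand-in files; on a real node the bundle is /etc/ssl/glusterfs.ca.)
tmp=$(mktemp -d)
printf 'existing-node-certs\n' > "$tmp/glusterfs.ca"
printf 'new-node-cert\n' > "$tmp/new_node.pem"

cp "$tmp/glusterfs.ca" "$tmp/glusterfs.ca.bk"   # backup first
cat "$tmp/new_node.pem" >> "$tmp/glusterfs.ca"  # append the new certificate

cat "$tmp/glusterfs.ca"
```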
- Run the following command on the replacement node to enable management encryption:
# touch /var/lib/glusterd/secure-access
- Include the new server in the value of the auth.ssl-allow volume option by running the following command for each volume:
# gluster volume set <volname> auth.ssl-allow "<old_node1>,<old_node2>,<new_node>"
- Restart the glusterd service on all nodes:
# systemctl restart glusterd
- Follow the steps in Section 4.1, “Configuring TLS/SSL using self-signed certificates” to restart all gluster processes.
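The auth.ssl-allow value is a single comma-separated list of every allowed server. One way to assemble it from a whitespace-separated node list is sketched below as a dry run (the hostnames are illustrative, and the final gluster command is echoed rather than executed):

```shell
# Build the comma-separated auth.ssl-allow value from a node list
# (hostnames are illustrative), then print the gluster command as a dry run.
nodes="node1.example.com node2.example.com new-node.example.com"
allow=$(echo "$nodes" | tr ' ' ',')
echo gluster volume set engine auth.ssl-allow "$allow"
```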
Add the replacement node to the cluster
Run the following command from any node already in the cluster:
# gluster peer probe <new_node>
Move the Hosted Engine into Maintenance mode:
# hosted-engine --set-maintenance --mode=global
Stop the ovirt-engine service:
# systemctl stop ovirt-engine
Update the database:
# sudo -u postgres psql
\c engine;
UPDATE storage_server_connections SET connection = '<replacement_node_IP>:/engine' WHERE connection = '<old_server_IP>:/engine';
UPDATE storage_server_connections SET connection = '<replacement_node_IP>:/vmstore' WHERE connection = '<old_server_IP>:/vmstore';
UPDATE storage_server_connections SET connection = '<replacement_node_IP>:/data' WHERE connection = '<old_server_IP>:/data';
Start the ovirt-engine service:
# systemctl start ovirt-engine
- Stop all virtual machines except the Hosted Engine.
- Move all storage domains except the Hosted Engine domain into Maintenance mode
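Each UPDATE statement above rewrites one connection string, swapping the old server’s IP for the replacement’s while keeping the volume path. The substitution pattern can be sanity-checked in shell before running it against the engine database (the IPs below are illustrative):

```shell
# Sanity-check of the connection-string rewrite performed by the UPDATEs.
# 192.168.0.1 stands in for the old server, 192.168.0.2 for the replacement.
old='192.168.0.1:/engine'
new=$(echo "$old" | sed 's/^192\.168\.0\.1:/192.168.0.2:/')
echo "$new"
```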
Stop the Hosted Engine virtual machine
Run the following command on the existing node that hosts the Hosted Engine.
# hosted-engine --vm-shutdown
Stop high availability services on all nodes
# systemctl stop ovirt-ha-agent
# systemctl stop ovirt-ha-broker
Disconnect Hosted Engine storage from the virtualization host
Run the following command on the existing node that hosts the Hosted Engine.
# hosted-engine --disconnect-storage
Update the Hosted Engine configuration file
Edit the storage parameter in the /etc/ovirt-hosted-engine/hosted-engine.conf file to use the replacement server:
storage=<replacement_server_IP>:/engine
Reboot the existing and replacement nodes
Wait until both nodes are available before continuing.
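The edit to the storage parameter in hosted-engine.conf can also be scripted. The sketch below applies the same rewrite to a temporary stand-in file so the sed pattern can be verified before editing /etc/ovirt-hosted-engine/hosted-engine.conf on a real node (the IPs are illustrative):

```shell
# Apply the storage= rewrite to a temporary stand-in for
# /etc/ovirt-hosted-engine/hosted-engine.conf (IPs are illustrative).
conf=$(mktemp)
printf 'storage=192.168.0.1:/engine\n' > "$conf"
sed -i 's|^storage=.*|storage=192.168.0.2:/engine|' "$conf"
cat "$conf"
```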
Take the Hosted Engine out of Maintenance mode
# hosted-engine --set-maintenance --mode=none
Verify replacement node is used
On all virtualization hosts, verify that the engine volume is mounted from the replacement node by checking the IP address in the output of the mount command.
Activate storage domains
Verify that storage domains mount using the IP address of the replacement node.
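Checking which server is serving a volume amounts to filtering the mount output for the gluster entries and reading the leading IP. The sketch below runs that filter over a sample mount line rather than live output (the sample line and 10.0.0.4 are illustrative):

```shell
# Extract the serving IP from a sample glusterfs mount line
# (the sample line and 10.0.0.4 are illustrative).
sample='10.0.0.4:/engine on /rhev/data-center/mnt/glusterSD/10.0.0.4:_engine type fuse.glusterfs (rw)'
ip=$(echo "$sample" | cut -d: -f1)
echo "$ip"
```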
Remove the old node
- Using the RHV Management UI, remove the old node.
- Detach the old host from the cluster:
# gluster peer detach <old_node_IP> force
Using the RHV Management UI, add the replacement node
Specify that the replacement node be used to host the Hosted Engine.
Move the replacement node into Maintenance mode.
# hosted-engine --set-maintenance --mode=global
Update the Hosted Engine configuration file
Edit the storage parameter in the /etc/ovirt-hosted-engine/hosted-engine.conf file to use the replacement node:
storage=<replacement_node_IP>:/engine
Reboot the replacement node.
Wait until the node is back online before continuing.
Activate the replacement node from the RHV Management UI.
Ensure that all volumes are mounted using the IP address of the replacement node.
Replace engine volume brick
Replace the brick on the old node that belongs to the engine volume with a new brick on the replacement node.
- Click the Volumes tab.
- Click the Bricks subtab.
- Select the brick to replace, and then click Replace brick.
- Select the node that hosts the brick being replaced.
- In the Replace brick window, provide the new brick’s path.
On the replacement node, run the following command to remove metadata from the previous host.
# hosted-engine --clean-metadata --host-id=<old_host_id> --force-clean