11.10. Replacing Hosts
11.10.1. Replacing a Host Machine with a Different Hostname
Assume that the machine that must be replaced is server0.example.com and the replacement machine is server5.example.com. The brick with an unrecoverable failure is server0.example.com:/rhgs/brick1 and the replacement brick is server5.example.com:/rhgs/brick1.
- Stop the geo-replication session if configured by executing the following command.
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop force
- Probe the new peer from one of the existing peers to bring it into the cluster.
# gluster peer probe server5.example.com
- Ensure that the new brick (server5.example.com:/rhgs/brick1) that is replacing the old brick (server0.example.com:/rhgs/brick1) is empty.
- If the geo-replication session is configured, perform the following steps:
- Set up the geo-replication session by generating the SSH keys:
# gluster system:: execute gsec_create
- Create the geo-replication session again with the force option to distribute the keys from the new nodes to the slave nodes.
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL create push-pem force
- After successfully setting up the shared storage volume, when a new node replaces a node in the cluster, the shared storage is not mounted automatically on this node, and the /etc/fstab entry is not added for the shared storage on this node. To make use of shared storage on this node, execute the following commands:
# mount -t glusterfs <local node's ip>:gluster_shared_storage /var/run/gluster/shared_storage
# cp /etc/fstab /var/run/gluster/fstab.tmp
# echo "<local node's ip>:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0" >> /etc/fstab
Note
With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/. For more information on setting up the shared storage volume, see Section 11.12, “Setting up Shared Storage Volume”.
- Configure the meta-volume for geo-replication:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config use_meta_volume true
For more information on configuring the meta-volume, see Section 10.3.5, “Configuring a Meta-Volume”.
- Retrieve the brick paths in server0.example.com using the following command:
# gluster volume info <VOLNAME>
The brick path in server0.example.com is /rhgs/brick1. This has to be replaced with the brick in the newly added host, server5.example.com.
- Create the required brick path in server5.example.com. For example, if /rhgs/brick1 is the XFS mount point in server5.example.com, create a brick directory in that path.
# mkdir /rhgs/brick1
- Execute the replace-brick command with the force option:
# gluster volume replace-brick vol server0.example.com:/rhgs/brick1 server5.example.com:/rhgs/brick1 commit force
volume replace-brick: success: replace-brick commit successful
- Verify that the new brick is online.
# gluster volume status
Status of volume: vol
Gluster process                           Port    Online  Pid
Brick server5.example.com:/rhgs/brick1    49156   Y       5731
Brick server1.example.com:/rhgs/brick1    49153   Y       5354
- Initiate self-heal on the volume:
# gluster volume heal VOLNAME
- The status of the heal process can be seen by executing the command:
# gluster volume heal VOLNAME info
- Detach the original machine from the trusted pool.
# gluster peer detach (server)
All clients mounted through the peer which is getting detached need to be remounted, using one of the other active peers in the trusted storage pool, this ensures that the client gets notification on any changes done on the gluster configuration and if the same has been done do you want to proceed? (y/n) y
peer detach: success
- Ensure that after the self-heal completes, the extended attributes are set to zero on the other bricks in the replica. In this example, the extended attributes trusted.afr.vol-client-0 and trusted.afr.vol-client-1 have zero values. This means that the data on the two bricks is identical. If these attributes are not zero after self-heal is completed, the data has not been synchronised correctly.
- Start the geo-replication session using the force option:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start force
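The extended-attribute check in the verification step above can be sketched as a small script. This is a minimal sketch, not part of the official procedure: the attribute names follow the trusted.afr.vol-client-* example in this section, and the output of the usual inspection command (typically getfattr -d -m . -e hex <brick-path>) is simulated with a sample string, since checking real attributes requires a live brick.

```shell
#!/usr/bin/env bash
# Minimal sketch: confirm that all trusted.afr.* changelog attributes are
# zero, which indicates the replica bricks are fully healed. The sample
# below stands in for real `getfattr -d -m . -e hex /rhgs/brick1` output.
sample_output='trusted.afr.vol-client-0=0x000000000000000000000000
trusted.afr.vol-client-1=0x000000000000000000000000'

all_zero=yes
while IFS='=' read -r attr value; do
    case "$attr" in
        trusted.afr.*)
            hex=${value#0x}                  # drop the 0x prefix
            if [ -n "${hex//0/}" ]; then     # any non-zero hex digit left?
                all_zero=no
            fi
            ;;
    esac
done <<< "$sample_output"

echo "all trusted.afr.* attributes zero: $all_zero"
```

If any trusted.afr.* value contains a non-zero digit, the script reports "no", matching the documented rule that non-zero attributes mean the data has not been synchronised correctly.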
11.10.2. Replacing a Host Machine with the Same Hostname
In this scenario, the failed host server0.example.com is replaced with a new machine that has the same hostname. The new machine must reuse the UUID of the failed host, which is set in the /var/lib/glusterd/glusterd.info file.
- Stop the geo-replication session if configured by executing the following command.
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop force
- Stop the glusterd service on server0.example.com.
On RHEL 7 and RHEL 8, run:
# systemctl stop glusterd
On RHEL 6, run:
# service glusterd stop
Important
Red Hat Gluster Storage is not supported on Red Hat Enterprise Linux 6 (RHEL 6) from 3.5 Batch Update 1 onwards. See the Version Details table in the section Red Hat Gluster Storage Software Components and Versions of the Installation Guide.
- Retrieve the UUID of the failed host (server0.example.com) from another node of the Red Hat Gluster Storage Trusted Storage Pool by executing the following command:
# gluster peer status
Note that the UUID of the failed host is b5ab2ec3-5411-45fa-a30f-43bd04caf96b
- Edit the glusterd.info file in the new host and include the UUID of the host you retrieved in the previous step.
# cat /var/lib/glusterd/glusterd.info
UUID=b5ab2ec3-5411-45fa-a30f-43bd04caf96b
operating-version=30703
Note
The operating version of this node must be the same as in the other nodes of the trusted storage pool.
- Select any host (say, for example, server1.example.com) in the Red Hat Gluster Storage Trusted Storage Pool and retrieve its UUID from the glusterd.info file.
# grep -i uuid /var/lib/glusterd/glusterd.info
UUID=8cc6377d-0153-4540-b965-a4015494461c
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Gather the peer information files from the host (server1.example.com) in the previous step. Execute the following command in that host (server1.example.com) of the cluster.
# cp -a /var/lib/glusterd/peers /tmp/
- Remove the peer file corresponding to the failed host (server0.example.com) from the /tmp/peers directory.
# rm /tmp/peers/b5ab2ec3-5411-45fa-a30f-43bd04caf96b
Note that the UUID corresponds to the UUID of the failed host (server0.example.com) retrieved in Step 3.
- Archive all the files and copy them to the failed host (server0.example.com).
# cd /tmp; tar -cvf peers.tar peers
- Copy the above created file to the new peer.
# scp /tmp/peers.tar root@server0.example.com:/tmp
- Copy the extracted content to the /var/lib/glusterd/peers directory. Execute the following commands in the newly added host with the same name (server0.example.com) and IP address.
# tar -xvf /tmp/peers.tar
# cp peers/* /var/lib/glusterd/peers/
- Select any host in the cluster other than the node (server1.example.com) selected in step 5. Copy the peer file corresponding to the UUID of the host retrieved in Step 5 to the new host (server0.example.com) by executing the following command:
# scp /var/lib/glusterd/peers/<UUID-retrieved-from-step5> root@server0.example.com:/var/lib/glusterd/peers/
- Start the glusterd service.
# systemctl start glusterd
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - If new brick has same hostname and same path, refer to Section 11.9.5, “Reconfiguring a Brick in a Volume”, and if it has different hostname and different brick path for replicated volumes then, refer to Section 11.9.2, “Replacing an Old Brick with a New Brick on a Replicate or Distribute-replicate Volume”.
- In case of disperse volumes, when a new brick has different hostname and different brick path then, refer to Section 11.9.4, “Replacing an Old Brick with a New Brick on a Dispersed or Distributed-dispersed Volume”.
- Perform the self-heal operation on the restored volume.
# gluster volume heal VOLNAME
- You can view the gluster volume self-heal status by executing the following command:
# gluster volume heal VOLNAME info
- If the geo-replication session is configured, perform the following steps:
- Set up the geo-replication session by generating the SSH keys:
# gluster system:: execute gsec_create
- Create the geo-replication session again with the force option to distribute the keys from the new nodes to the slave nodes.
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL create push-pem force
- After successfully setting up the shared storage volume, when a new node replaces a node in the cluster, the shared storage is not mounted automatically on this node, and the /etc/fstab entry is not added for the shared storage on this node. To make use of shared storage on this node, execute the following commands:
# mount -t glusterfs <local node's ip>:gluster_shared_storage /var/run/gluster/shared_storage
# cp /etc/fstab /var/run/gluster/fstab.tmp
# echo "<local node's ip>:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0" >> /etc/fstab
Note
With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/. For more information on setting up the shared storage volume, see Section 11.12, “Setting up Shared Storage Volume”.
- Configure the meta-volume for geo-replication:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config use_meta_volume true
- Start the geo-replication session using the force option:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start force
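The glusterd.info edit at the heart of the procedure above can be sketched as a short script. This is a minimal sketch, assuming a scratch directory in place of the real /var/lib/glusterd; the UUID is the example value used throughout this section and on a real system would be retrieved from a surviving peer first.

```shell
#!/usr/bin/env bash
# Minimal sketch: replace the auto-generated UUID in glusterd.info with the
# failed host's UUID, working in a temporary directory rather than
# /var/lib/glusterd. The operating-version line is left untouched, since it
# must match the other nodes of the trusted storage pool.
workdir=$(mktemp -d)

# Simulate the glusterd.info created on a freshly installed node
# (the all-zero UUID here is a stand-in for the generated one).
printf 'UUID=%s\noperating-version=30703\n' \
    "00000000-0000-0000-0000-000000000000" > "$workdir/glusterd.info"

# Example UUID of the failed host, as used in this section.
failed_uuid=b5ab2ec3-5411-45fa-a30f-43bd04caf96b

# Rewrite only the UUID= line.
sed -i "s/^UUID=.*/UUID=$failed_uuid/" "$workdir/glusterd.info"

uuid_line=$(grep '^UUID=' "$workdir/glusterd.info")
echo "$uuid_line"

rm -rf "$workdir"
```

Editing only the UUID= line is the point of the step: the new node keeps its own operating-version while assuming the failed host's identity.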
If there are only 2 hosts in the Red Hat Gluster Storage Trusted Storage Pool where the host server0.example.com must be replaced, perform the following steps:
- Stop the geo-replication session if configured by executing the following command:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop force
- Stop the glusterd service on server0.example.com.
On RHEL 7 and RHEL 8, run:
# systemctl stop glusterd
On RHEL 6, run:
# service glusterd stop
Important
Red Hat Gluster Storage is not supported on Red Hat Enterprise Linux 6 (RHEL 6) from 3.5 Batch Update 1 onwards. See the Version Details table in the section Red Hat Gluster Storage Software Components and Versions of the Installation Guide.
- Retrieve the UUID of the failed host (server0.example.com) from another peer in the Red Hat Gluster Storage Trusted Storage Pool by executing the following command:
# gluster peer status
Note that the UUID of the failed host is b5ab2ec3-5411-45fa-a30f-43bd04caf96b
- Edit the glusterd.info file in the new host (server0.example.com) and include the UUID of the host you retrieved in the previous step.
# cat /var/lib/glusterd/glusterd.info
UUID=b5ab2ec3-5411-45fa-a30f-43bd04caf96b
operating-version=30703
Note
The operating version of this node must be the same as in the other nodes of the trusted storage pool.
- Create the peer file in the newly created host (server0.example.com) in /var/lib/glusterd/peers/<uuid-of-other-peer> with the name of the UUID of the other host (server1.example.com). The UUID of the host can be obtained with the following command:
# gluster system:: uuid get
Example 11.6. Example to obtain the UUID of a host
# gluster system:: uuid get
UUID: 1d9677dc-6159-405e-9319-ad85ec030880
In this case, the UUID of the other peer is 1d9677dc-6159-405e-9319-ad85ec030880
- Create a file /var/lib/glusterd/peers/1d9677dc-6159-405e-9319-ad85ec030880 in server0.example.com, with the following command:
# touch /var/lib/glusterd/peers/1d9677dc-6159-405e-9319-ad85ec030880
The file you create must contain the following information:
UUID=<uuid-of-other-node>
state=3
hostname=<hostname>
- Continue to perform steps 12 to 18 as documented in the previous procedure.
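The peer file created in the two-host procedure above has a fixed three-line layout (UUID, state, hostname). The following sketch writes it to a scratch directory rather than the real /var/lib/glusterd/peers; the UUID and hostname are the example values from this section.

```shell
#!/usr/bin/env bash
# Minimal sketch: create a glusterd peer file named after the other host's
# UUID, containing the UUID=, state= and hostname= lines shown above.
# A temporary directory stands in for /var/lib/glusterd/peers.
peers_dir=$(mktemp -d)

other_uuid=1d9677dc-6159-405e-9319-ad85ec030880   # UUID of server1.example.com
other_host=server1.example.com

printf 'UUID=%s\nstate=3\nhostname=%s\n' \
    "$other_uuid" "$other_host" > "$peers_dir/$other_uuid"

cat "$peers_dir/$other_uuid"
```

Note that the file name and its UUID= line both carry the *other* peer's UUID; this is what lets glusterd on the rebuilt host recognise server1.example.com when it starts.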