10.6. Starting Geo-replication on a Newly Added Brick, Node, or Volume
10.6.1. Starting Geo-replication for a New Brick or a New Node
If a geo-replication session is running, and a new node is added to the trusted storage pool or a brick is added to the volume from a newly added node in the trusted storage pool, then you must perform the following steps to start the geo-replication daemon on the new node:
- Run the following command on the master node where key-based SSH authentication is configured, in order to create a common pem pub file:
# gluster system:: execute gsec_create
- Create the geo-replication session using the following command. The push-pem and force options are required to perform the necessary pem-file setup on the slave nodes:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL create push-pem force
For example:
# gluster volume geo-replication Volume1 storage.backup.com::slave-vol create push-pem force
Note
There must be key-based SSH authentication access between the node from which this command is run and the slave host specified in the above command. This command performs the slave verification, which includes checking for a valid slave URL, valid slave volume, and available space on the slave.
- After successfully setting up the shared storage volume, when a new node is added to the cluster, the shared storage is not mounted automatically on this node, nor is the /etc/fstab entry added for the shared storage on this node. To make use of shared storage on this node, execute the following commands:
# mount -t glusterfs <local node's ip>:gluster_shared_storage /var/run/gluster/shared_storage
# cp /etc/fstab /var/run/gluster/fstab.tmp
# echo "<local node's ip>:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0" >> /etc/fstab
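For example, on a node whose local IP address is 192.0.2.10 (a placeholder address; substitute the new node's actual IP address or hostname), the commands would look like this:
# mount -t glusterfs 192.0.2.10:gluster_shared_storage /var/run/gluster/shared_storage
# cp /etc/fstab /var/run/gluster/fstab.tmp
# echo "192.0.2.10:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0" >> /etc/fstab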
Note
With the release of 3.5 Batch Update 3, the mount point of shared storage changed from /var/run/gluster/ to /run/gluster/. For more information on setting up the shared storage volume, see Section 11.12, “Setting up Shared Storage Volume”.
- Configure the meta-volume for geo-replication:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config use_meta_volume true
For example:
# gluster volume geo-replication Volume1 storage.backup.com::slave-vol config use_meta_volume true
For more information on configuring the meta-volume, see Section 10.3.5, “Configuring a Meta-Volume”.
- If a node is added at the slave, stop the geo-replication session using the following command:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop
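For example, using the same Volume1 master volume and storage.backup.com::slave-vol slave shown in the earlier steps (substitute your own volume and slave host names):
# gluster volume geo-replication Volume1 storage.backup.com::slave-vol stop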
- Start the geo-replication session between the slave and master forcefully, using the following command:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start force
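For example, again assuming the Volume1 and storage.backup.com::slave-vol session used above:
# gluster volume geo-replication Volume1 storage.backup.com::slave-vol start force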
- Verify the status of the created session, using the following command:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status
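For example, for the Volume1 to storage.backup.com::slave-vol session used throughout this procedure:
# gluster volume geo-replication Volume1 storage.backup.com::slave-vol status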
Warning
The following scenarios can lead to a checksum mismatch:
- Adding bricks to expand a geo-replicated volume.
- Expanding the volume while the geo-replication synchronization is in progress.
- The newly added brick becomes `ACTIVE` to sync the data.
- Self healing on the new brick is not completed.
10.6.2. Starting Geo-replication for a New Brick on an Existing Node
When adding a brick to the volume on an existing node in the trusted storage pool with a geo-replication session running, the geo-replication daemon on that particular node is automatically restarted. The new brick is then recognized by the geo-replication daemon. This is an automated process and no configuration changes are required.
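Although no manual steps are required in this case, you can optionally confirm that the restarted daemon has picked up the new brick by checking the session status, for example (reusing the Volume1 and storage.backup.com::slave-vol names from the examples above):
# gluster volume geo-replication Volume1 storage.backup.com::slave-vol status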
10.6.3. Starting Geo-replication for a New Volume
To create and start a geo-replication session between a new volume added to the master cluster and a new volume added to the slave cluster, you must perform the following steps:
Prerequisites
- There must be key-based SSH authentication access, without a password, between the master volume node and the slave volume node.
- Create the geo-replication session using the following command:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL create
For example:
# gluster volume geo-replication Volume1 storage.backup.com::slave-vol create
Note
This command performs the slave verification, which includes checking for a valid slave URL, valid slave volume, and available space on the slave.
- Configure the meta-volume for geo-replication:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config use_meta_volume true
For example:
# gluster volume geo-replication Volume1 storage.backup.com::slave-vol config use_meta_volume true
For more information on configuring the meta-volume, see Section 10.3.5, “Configuring a Meta-Volume”.
- Start the geo-replication session between the slave and master, using the following command:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
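For example, to start the session between the Volume1 master volume and the storage.backup.com::slave-vol slave volume created in the preceding steps:
# gluster volume geo-replication Volume1 storage.backup.com::slave-vol start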
- Verify the status of the created session, using the following command:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status
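For example, for the Volume1 to storage.backup.com::slave-vol session created above:
# gluster volume geo-replication Volume1 storage.backup.com::slave-vol status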