
14.5. Starting Geo-replication on a Newly Added Brick or Node


If a geo-replication session is running and a new node is added to the trusted storage pool, or a brick is added to the volume from a newly added node in the trusted storage pool, you must perform the following steps to start the geo-replication daemon on the new node:
  1. Run the following command on the master node where the passwordless SSH connection is configured, in order to create a common pem pub file.
    # gluster system:: execute gsec_create
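    The command generates a common pem pub file, typically at /var/lib/glusterd/geo-replication/common_secret.pem.pub on the master node (the exact path may differ in your deployment). As an optional check, you can confirm that the file was created:
    # ls -l /var/lib/glusterd/geo-replication/common_secret.pem.pub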
  2. Create the geo-replication session using the following command. The push-pem and force options are required to perform the necessary pem-file setup on the slave nodes.
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL create push-pem force
    For example:
    # gluster volume geo-replication Volume1 example.com::slave-vol create push-pem force

    Note

    There must be passwordless SSH access between the node from which this command is run, and the slave host specified in the above command. This command performs the slave verification, which includes checking for a valid slave URL, valid slave volume, and available space on the slave.
  3. Even after the shared storage volume has been set up successfully, the shared storage is not mounted automatically on a newly added node, and no /etc/fstab entry is added for it on that node. To make use of the shared storage on the new node, execute the following commands:
    # mount -t glusterfs <local node's ip>:gluster_shared_storage /var/run/gluster/shared_storage
    # cp /etc/fstab /var/run/gluster/fstab.tmp
    # echo "<local node's ip>:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0" >> /etc/fstab
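    For example, assuming the new node's local IP address is 192.0.2.5 (substitute the address of your own node):
    # mount -t glusterfs 192.0.2.5:gluster_shared_storage /var/run/gluster/shared_storage
    # cp /etc/fstab /var/run/gluster/fstab.tmp
    # echo "192.0.2.5:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0" >> /etc/fstab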
    For more information on setting up shared storage volume, see Section 10.8, “Setting up Shared Storage Volume”.
  4. Configure the meta-volume for geo-replication:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config use_meta_volume true
    For example:
    # gluster volume geo-replication Volume1 example.com::slave-vol config use_meta_volume true
    For more information on configuring meta-volume, see Section 14.3.5, “Configuring a Meta-Volume”.
  5. If a node has been added on the slave side, stop the geo-replication session using the following command:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop
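    For example, continuing with the Volume1 master volume and the example.com::slave-vol slave used in the earlier examples:
    # gluster volume geo-replication Volume1 example.com::slave-vol stop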
  6. Start the geo-replication session between the master and the slave forcefully, using the following command:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start force
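    For example:
    # gluster volume geo-replication Volume1 example.com::slave-vol start force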
  7. Verify the status of the created session, using the following command:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status
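    For example:
    # gluster volume geo-replication Volume1 example.com::slave-vol status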
When adding a brick to the volume on an existing node in the trusted storage pool with a geo-replication session running, the geo-replication daemon on that particular node will automatically be restarted. The new brick will then be recognized by the geo-replication daemon. This is an automated process and no configuration changes are required.