10.5. Starting Geo-replication on a Newly Added Brick, Node, or Volume

10.5.1. Starting Geo-replication for a New Brick or New Node

If a geo-replication session is running and a new node is added to the trusted storage pool, or a brick is added to the volume from a newly added node in the trusted storage pool, perform the following steps to start the geo-replication daemon on the new node:
  1. Run the following command on the master node where key-based SSH authentication is configured, in order to create a common pem pub file.
    # gluster system:: execute gsec_create
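    If the command succeeds, a common secret pem file is generated in the glusterd working directory on the master; on a typical installation, you can verify its presence with:
    # ls /var/lib/glusterd/geo-replication/common_secret.pem.pub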
  2. Create the geo-replication session using the following command. The push-pem and force options are required to perform the necessary pem-file setup on the slave nodes.
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL create push-pem force
    For example:
    # gluster volume geo-replication Volume1 storage.backup.com::slave-vol create push-pem force

    Note

    There must be key-based SSH authentication access between the node from which this command is run and the slave host specified in the above command. This command performs the slave verification, which includes checking for a valid slave URL, a valid slave volume, and available space on the slave.
  3. When a new node is added to the cluster after the shared storage volume has been set up, the shared storage is not mounted automatically on that node, nor is an /etc/fstab entry added for it. To make use of the shared storage on this node, execute the following commands:
    # mount -t glusterfs <local node's ip>:gluster_shared_storage /var/run/gluster/shared_storage
    # cp /etc/fstab /var/run/gluster/fstab.tmp
    # echo "<local node's ip>:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0" >> /etc/fstab
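    For example, on a new node whose local IP address is 192.168.1.10 (a hypothetical address; substitute the node's actual IP), the commands would be:
    # mount -t glusterfs 192.168.1.10:gluster_shared_storage /var/run/gluster/shared_storage
    # cp /etc/fstab /var/run/gluster/fstab.tmp
    # echo "192.168.1.10:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0" >> /etc/fstab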
    For more information on setting up shared storage volume, see Section 11.12, “Setting up Shared Storage Volume”.
  4. Configure the meta-volume for geo-replication:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config use_meta_volume true
    For example:
    # gluster volume geo-replication Volume1 storage.backup.com::slave-vol config use_meta_volume true
    For more information on configuring meta-volume, see Section 10.3.5, “Configuring a Meta-Volume”.
  5. If a node was added on the slave, stop the geo-replication session using the following command:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop
  6. Forcefully start the geo-replication session between the master and slave, using the following command:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start force
  7. Verify the status of the created session, using the following command:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status
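
For example, for the Volume1 session shown earlier, steps 5 through 7 (the stop is needed only when a node was added on the slave) would be:
    # gluster volume geo-replication Volume1 storage.backup.com::slave-vol stop
    # gluster volume geo-replication Volume1 storage.backup.com::slave-vol start force
    # gluster volume geo-replication Volume1 storage.backup.com::slave-vol status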

10.5.2. Starting Geo-replication for a New Brick on an Existing Node

When a brick is added to the volume on an existing node in the trusted storage pool while a geo-replication session is running, the geo-replication daemon on that node is automatically restarted and then recognizes the new brick. This is an automated process and no configuration changes are required.
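
For example, assuming a hypothetical brick server1:/rhgs/brick2 is added to a distributed volume Volume1, you can confirm that geo-replication has picked up the new brick by checking the session status afterwards:
    # gluster volume add-brick Volume1 server1:/rhgs/brick2
    # gluster volume geo-replication Volume1 storage.backup.com::slave-vol status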

10.5.3. Starting Geo-replication for a New Volume

To create and start a geo-replication session between a new volume added to the master cluster and a new volume added to the slave cluster, you must perform the following steps:

Prerequisites

  • There must be password-less key-based SSH authentication between the master volume node and the slave volume node.
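    As a minimal sketch (assuming root access on both nodes and that storage.backup.com is the slave host, as in the examples below), password-less SSH can be set up with the standard OpenSSH tools:
    # ssh-keygen          # accept the defaults and leave the passphrase empty
    # ssh-copy-id root@storage.backup.com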
  1. Create the geo-replication session using the following command:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL create
    For example:
    # gluster volume geo-replication Volume1 storage.backup.com::slave-vol create

    Note

    This command performs the slave verification, which includes checking for a valid slave URL, valid slave volume, and available space on the slave.
  2. Configure the meta-volume for geo-replication:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config use_meta_volume true
    For example:
    # gluster volume geo-replication Volume1 storage.backup.com::slave-vol config use_meta_volume true
    For more information on configuring meta-volume, see Section 10.3.5, “Configuring a Meta-Volume”.
  3. Start the geo-replication session between the master and slave, using the following command:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
  4. Verify the status of the created session, using the following command:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status
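
For example, for a hypothetical new master volume Volume2 and a hypothetical new slave volume slave-vol2, the complete sequence would be:
    # gluster volume geo-replication Volume2 storage.backup.com::slave-vol2 create
    # gluster volume geo-replication Volume2 storage.backup.com::slave-vol2 config use_meta_volume true
    # gluster volume geo-replication Volume2 storage.backup.com::slave-vol2 start
    # gluster volume geo-replication Volume2 storage.backup.com::slave-vol2 status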