
10.5. Starting Geo-replication on a Newly Added Brick or Node


10.5.1. Starting Geo-replication for a New Brick or New Node

If a geo-replication session is running, and a new node is added to the trusted storage pool or a brick is added to the volume from a newly added node in the trusted storage pool, then you must perform the following steps to start the geo-replication daemon on the new node:
  1. Run the following command on the master node where passwordless SSH is configured, to create a common pem pub file.
    # gluster system:: execute gsec_create
  2. Create the geo-replication session using the following command. The push-pem and force options are required to perform the necessary pem-file setup on the slave nodes.
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL create push-pem force
    For example:
    # gluster volume geo-replication Volume1 example.com::slave-vol create push-pem force

    Note

    There must be passwordless SSH access between the node from which this command is run, and the slave host specified in the above command. This command performs the slave verification, which includes checking for a valid slave URL, valid slave volume, and available space on the slave.
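    As an illustration (not part of the procedure above), passwordless access from the node running the create command to the example.com slave host used in this chapter's examples can be verified with a plain SSH login, assuming the session is set up as root:
    # ssh root@example.com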
  3. After the shared storage volume has been set up, when a new node is added to the cluster, the shared storage is not mounted automatically on that node, nor is an /etc/fstab entry added for it. To make use of the shared storage on the new node, execute the following commands, as shown in the example after them:
    # mount -t glusterfs <local node's ip>:gluster_shared_storage /var/run/gluster/shared_storage
    # cp /etc/fstab /var/run/gluster/fstab.tmp
    # echo "<local node's ip>:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0" >> /etc/fstab
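    For example, assuming the new node's local IP address is 192.0.2.10 (a placeholder value; substitute the actual IP address of the node):
    # mount -t glusterfs 192.0.2.10:gluster_shared_storage /var/run/gluster/shared_storage
    # cp /etc/fstab /var/run/gluster/fstab.tmp
    # echo "192.0.2.10:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0" >> /etc/fstab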
    For more information on setting up shared storage volume, see Section 11.8, “Setting up Shared Storage Volume”.
  4. Configure the meta-volume for geo-replication:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config use_meta_volume true
    For example:
    # gluster volume geo-replication Volume1 example.com::slave-vol config use_meta_volume true
    For more information on configuring meta-volume, see Section 10.3.5, “Configuring a Meta-Volume”.
  5. If a node is added on the slave side, stop the geo-replication session using the following command:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop
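    For example, using the Volume1 session from the earlier steps:
    # gluster volume geo-replication Volume1 example.com::slave-vol stop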
  6. Start the geo-replication session between the slave and master forcefully, using the following command:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start force
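    For example:
    # gluster volume geo-replication Volume1 example.com::slave-vol start force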
  7. Verify the status of the created session, using the following command:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status
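    For example:
    # gluster volume geo-replication Volume1 example.com::slave-vol status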

10.5.2. Starting Geo-replication for a New Brick on an Existing Node

When adding a brick to the volume on an existing node in the trusted storage pool with a geo-replication session running, the geo-replication daemon on that particular node will automatically be restarted. The new brick will then be recognized by the geo-replication daemon. This is an automated process and no configuration changes are required.
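To confirm that the restarted daemon has picked up the new brick, the session status can be checked with the same status command shown in the previous section, for example:
    # gluster volume geo-replication Volume1 example.com::slave-vol status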