
10.5. Starting Geo-replication on a Newly Added Brick


10.5.1. Starting Geo-replication for a New Brick on a New Node

If a geo-replication session is running and a brick is added to the volume on a newly added node in the trusted storage pool, perform the following steps to start the geo-replication daemon on the new node:


  1. Run the following command on the master node where password-less SSH is configured, to create a common pem pub file.
    # gluster system:: execute gsec_create
  2. Create the geo-replication session using the following command. The push-pem and force options are required to perform the necessary pem-file setup on the slave nodes.
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL create push-pem force
    For example:
    # gluster volume geo-replication master-vol example.com::slave-vol create push-pem force

    Note

    There must be password-less SSH access between the node from which this command is run, and the slave host specified in the above command. This command performs the slave verification, which includes checking for a valid slave URL, valid slave volume, and available space on the slave.
  3. Start the geo-replication session between the slave and master forcefully, using the following command:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start force
  4. Verify the status of the created session, using the following command:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status
When a brick is added to the volume on an existing node in the trusted storage pool while a geo-replication session is running, the geo-replication daemon on that node is automatically restarted and recognizes the new brick. This is an automated process and no configuration changes are required.