4.13. Configuring Red Hat Gluster Storage
4.13.1. Peer the Nodes
From rhgs-primary-n01, peer nodes rhgs-primary-n{02..20}:
# for i in {02..20};
do gluster peer probe rhgs-primary-n${i};
done
From rhgs-primary-n02, re-peer node rhgs-primary-n01:
This step cleans up the initial peering process, which leaves rhgs-primary-n01 defined by its IP address in the other peers' Gluster trusted pool configuration files. This is important because the IP addresses are ephemeral. Note the problem:
# gluster peer status | grep Hostname | grep -v rhgs
Hostname: 10.240.21.133
And correct it:
# gluster peer probe rhgs-primary-n01
peer probe: success.
# gluster peer status | grep Hostname | grep n01
Hostname: rhgs-primary-n01
From rhgs-secondary-n01, peer nodes rhgs-secondary-n{02..10}:
# for i in {02..10};
do gluster peer probe rhgs-secondary-n${i};
done
From rhgs-secondary-n02, peer node rhgs-secondary-n01:
# gluster peer probe rhgs-secondary-n01
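As a quick sanity check (a sketch, not part of the original procedure), each member of a healthy 10-node trusted pool should report 9 connected peers:

```shell
# Run on any secondary node; counts peers in the "Connected" state.
# Expect 9 on each node of the 10-node secondary pool.
gluster peer status | grep -c "Peer in Cluster (Connected)"
```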
4.13.2. Creating Distribute-Replicate Volumes
Warning
Support for two-way replication is planned for deprecation and removal in future versions of Red Hat Gluster Storage. This will affect both replicated and distributed-replicated volumes.
Support is being removed because two-way replication does not provide adequate protection from split-brain conditions. While a dummy node can be used as an interim solution for this problem, Red Hat recommends that all volumes that currently use two-way replication are migrated to use either arbitrated replication or three-way replication.
Instructions for migrating a two-way replicated volume to an arbitrated replicated volume are available in Converting to an Arbitrated Volume.
Information about three-way replication is available in Creating Three-way Replicated Volumes and Creating Three-way Distributed Replicated Volumes.
On the primary trusted pool, create a 10x2 Distribute-Replicate volume, ensuring that bricks are paired appropriately with their replica peers as defined in Section 4.1.3, “Primary Storage Pool Configuration”.
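The exact create command depends on the brick layout chosen in Section 4.1.3. As a sketch only, assuming one brick per node at a hypothetical /rhgs/bricks/myvol path, the command can be built as follows; with replica 2, each consecutive pair of bricks in the argument list forms one replica set, so order the list so that each pair spans two zones:

```shell
# Sketch: brick path and pairing order are assumptions; arrange the
# brick list so replica peers match Section 4.1.3.
bricks=""
for i in $(seq -w 1 20); do
  bricks="$bricks rhgs-primary-n${i}:/rhgs/bricks/myvol"
done
# Dry run: echo the command instead of executing it.
echo gluster volume create myvol replica 2$bricks
```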
The resulting Gluster volume topology is:
On the secondary trusted pool, create a 10-brick Distribute volume:
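The create command for the secondary volume is not shown above; a minimal sketch, assuming one brick per node at a hypothetical /rhgs/bricks/myvol-slave path:

```shell
# Sketch: brick path is an assumption. With no replica count given,
# gluster creates a pure Distribute volume across the 10 bricks.
bricks=""
for i in $(seq -w 1 10); do
  bricks="$bricks rhgs-secondary-n${i}:/rhgs/bricks/myvol-slave"
done
# Dry run: echo the command instead of executing it.
echo gluster volume create myvol-slave$bricks
```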
# gluster volume start myvol-slave
volume start: myvol-slave: success
The resulting Gluster volume topology is:
4.13.3. Setting up Geo-Replication from the Primary to the Secondary Region
From a primary region node, establish geo-replication from the local
myvol volume to the remote region myvol-slave volume.
- As a prerequisite, all secondary/slave side nodes must allow root user login via SSH. Run the commands below on all nodes rhgs-secondary-n{01..10}.
# sed -i 's/PermitRootLogin no/PermitRootLogin yes/' \
  /etc/ssh/sshd_config
# service sshd restart
- Create an SSH key pair for the root user on rhgs-primary-n01, and copy the contents of the public key:
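The key-generation commands themselves are not shown here; on rhgs-primary-n01 they would typically be the following (the default key path and an empty passphrase, for unattended geo-replication, are assumptions):

```shell
# Run as root on rhgs-primary-n01; -N "" sets an empty passphrase so
# geo-replication can use the key non-interactively.
ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
cat /root/.ssh/id_rsa.pub
```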
- On rhgs-secondary-n01, add the SSH public key from rhgs-primary-n01 to the root user’s authorized_keys file:
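The append would look like the following sketch; the key value shown is an illustrative placeholder, not a real key:

```shell
# Illustrative only: replace the ssh-rsa string with the actual public
# key copied from /root/.ssh/id_rsa.pub on rhgs-primary-n01.
cat >> /root/.ssh/authorized_keys <<'EOF'
ssh-rsa AAAAB3NzaC1yc2E...truncated... root@rhgs-primary-n01
EOF
```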
Note
The above SSH public key is for illustration purposes only. Use the key from your own id_rsa.pub file on rhgs-primary-n01.
At this point, the root user on rhgs-primary-n01 should have passwordless SSH access to rhgs-secondary-n01. This is a prerequisite for setting up geo-replication.
- Create a common pem pub file on rhgs-primary-n01:
Note
This must be done on the node where passwordless SSH to the secondary node was configured.
# gluster system:: execute gsec_create
- Create the geo-replication session from the primary site to the secondary site. The push-pem option is needed to perform the necessary pem-file setup on the slave nodes.
# gluster volume geo-replication myvol \
  rhgs-secondary-n01::myvol-slave create push-pem
# gluster volume geo-replication myvol \
  rhgs-secondary-n01::myvol-slave start
- Verify the geo-replication status. After a few minutes, the initialization stage should complete, and each connection should show Active or Passive for its status.
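The status check itself (output omitted here) is run from the primary side:

```shell
# Repeat until every brick reports Active or Passive rather than
# Initializing.
gluster volume geo-replication myvol \
  rhgs-secondary-n01::myvol-slave status
```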
At this point, the 100 TB Gluster volume is fully ready for use, with cross-zone synchronous data replication on the primary side and remote asynchronous data replication to a read-only volume on the secondary side located in a separate region.
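As a usage sketch (the mount point and choice of server are assumptions), a client in the primary region could mount the volume with the native FUSE client:

```shell
# Any trusted-pool node can serve as the mount server; the client
# fetches the full volume layout from it at mount time.
mkdir -p /mnt/myvol
mount -t glusterfs rhgs-primary-n01:/myvol /mnt/myvol
```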