4.13. Configuring Red Hat Gluster Storage
4.13.1. Peer the Nodes
From rhgs-primary-n01, peer nodes rhgs-primary-n{02..20}:
# for i in {02..20};
do gluster peer probe rhgs-primary-n${i};
done
From rhgs-primary-n02, re-peer node rhgs-primary-n01:
This step cleans up after the initial peering process, which leaves rhgs-primary-n01 defined by its IP address in the other peers' trusted pool configuration files. Because the IP addresses are ephemeral, this must be corrected. Note the problem:
# gluster peer status | grep Hostname | grep -v rhgs
Hostname: 10.240.21.133
And correct it:
# gluster peer probe rhgs-primary-n01
peer probe: success.
# gluster peer status | grep Hostname | grep n01
Hostname: rhgs-primary-n01
From rhgs-secondary-n01, peer nodes rhgs-secondary-n{02..10}:
# for i in {02..10};
do gluster peer probe rhgs-secondary-n${i};
done
From rhgs-secondary-n02, peer node rhgs-secondary-n01:
# gluster peer probe rhgs-secondary-n01
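After peering either pool, it is worth confirming that every peer reports as connected. The sketch below parses sample `gluster peer status` output to count healthy peers; on a live node, pipe the real command output through the same `grep` instead (the hostnames and UUIDs here are illustrative only):

```shell
# Sketch: count connected peers from `gluster peer status`-style output.
# The sample text stands in for the real command output; on a 20-node
# pool, expect 19 connected peers from any one node.
status_output='Number of Peers: 2

Hostname: rhgs-primary-n02
Uuid: 5e987bda-16dd-43c2-835b-08b7d55e94e5
State: Peer in Cluster (Connected)

Hostname: rhgs-primary-n03
Uuid: 1e0ca3aa-9ef7-4f66-8f15-cbc348f29ff7
State: Peer in Cluster (Connected)'
connected=$(printf '%s\n' "$status_output" | grep -c 'Peer in Cluster (Connected)')
echo "$connected connected"
```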
4.13.2. Creating Distribute-Replicate Volumes
On the primary trusted pool, create a 10x2 Distribute-Replicate volume, ensuring that bricks are paired appropriately with their replica peers as defined in Section 4.1.3, “Primary Storage Pool Configuration”.
# gluster volume create myvol replica 2 \
rhgs-primary-n01:/rhgs/bricks/myvol rhgs-primary-n02:/rhgs/bricks/myvol \
rhgs-primary-n03:/rhgs/bricks/myvol rhgs-primary-n04:/rhgs/bricks/myvol \
rhgs-primary-n05:/rhgs/bricks/myvol rhgs-primary-n06:/rhgs/bricks/myvol \
rhgs-primary-n07:/rhgs/bricks/myvol rhgs-primary-n08:/rhgs/bricks/myvol \
rhgs-primary-n09:/rhgs/bricks/myvol rhgs-primary-n10:/rhgs/bricks/myvol \
rhgs-primary-n11:/rhgs/bricks/myvol rhgs-primary-n12:/rhgs/bricks/myvol \
rhgs-primary-n13:/rhgs/bricks/myvol rhgs-primary-n14:/rhgs/bricks/myvol \
rhgs-primary-n15:/rhgs/bricks/myvol rhgs-primary-n16:/rhgs/bricks/myvol \
rhgs-primary-n17:/rhgs/bricks/myvol rhgs-primary-n18:/rhgs/bricks/myvol \
rhgs-primary-n19:/rhgs/bricks/myvol rhgs-primary-n20:/rhgs/bricks/myvol
volume create: myvol: success: please start the volume to access data
# gluster volume start myvol
volume start: myvol: success
# gluster volume info myvol
Volume Name: myvol
Type: Distributed-Replicate
Volume ID: f093e120-b291-4362-a859-8d2d4dd87f3a
Status: Started
Snap Volume: no
Number of Bricks: 10 x 2 = 20
Transport-type: tcp
Bricks:
Brick1: rhgs-primary-n01:/rhgs/bricks/myvol
Brick2: rhgs-primary-n02:/rhgs/bricks/myvol
Brick3: rhgs-primary-n03:/rhgs/bricks/myvol
Brick4: rhgs-primary-n04:/rhgs/bricks/myvol
Brick5: rhgs-primary-n05:/rhgs/bricks/myvol
Brick6: rhgs-primary-n06:/rhgs/bricks/myvol
Brick7: rhgs-primary-n07:/rhgs/bricks/myvol
Brick8: rhgs-primary-n08:/rhgs/bricks/myvol
Brick9: rhgs-primary-n09:/rhgs/bricks/myvol
Brick10: rhgs-primary-n10:/rhgs/bricks/myvol
Brick11: rhgs-primary-n11:/rhgs/bricks/myvol
Brick12: rhgs-primary-n12:/rhgs/bricks/myvol
Brick13: rhgs-primary-n13:/rhgs/bricks/myvol
Brick14: rhgs-primary-n14:/rhgs/bricks/myvol
Brick15: rhgs-primary-n15:/rhgs/bricks/myvol
Brick16: rhgs-primary-n16:/rhgs/bricks/myvol
Brick17: rhgs-primary-n17:/rhgs/bricks/myvol
Brick18: rhgs-primary-n18:/rhgs/bricks/myvol
Brick19: rhgs-primary-n19:/rhgs/bricks/myvol
Brick20: rhgs-primary-n20:/rhgs/bricks/myvol
Options Reconfigured:
performance.readdir-ahead: on
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256
The resulting Gluster volume topology is:
Distribute set
|
+-- Replica set 0
| |
| +-- Brick 0: rhgs-primary-n01:/rhgs/bricks/myvol
| |
| +-- Brick 1: rhgs-primary-n02:/rhgs/bricks/myvol
|
+-- Replica set 1
| |
| +-- Brick 0: rhgs-primary-n03:/rhgs/bricks/myvol
| |
| +-- Brick 1: rhgs-primary-n04:/rhgs/bricks/myvol
|
+-- Replica set 2
| |
| +-- Brick 0: rhgs-primary-n05:/rhgs/bricks/myvol
| |
| +-- Brick 1: rhgs-primary-n06:/rhgs/bricks/myvol
|
+-- Replica set 3
| |
| +-- Brick 0: rhgs-primary-n07:/rhgs/bricks/myvol
| |
| +-- Brick 1: rhgs-primary-n08:/rhgs/bricks/myvol
|
+-- Replica set 4
| |
| +-- Brick 0: rhgs-primary-n09:/rhgs/bricks/myvol
| |
| +-- Brick 1: rhgs-primary-n10:/rhgs/bricks/myvol
|
+-- Replica set 5
| |
| +-- Brick 0: rhgs-primary-n11:/rhgs/bricks/myvol
| |
| +-- Brick 1: rhgs-primary-n12:/rhgs/bricks/myvol
|
+-- Replica set 6
| |
| +-- Brick 0: rhgs-primary-n13:/rhgs/bricks/myvol
| |
| +-- Brick 1: rhgs-primary-n14:/rhgs/bricks/myvol
|
+-- Replica set 7
| |
| +-- Brick 0: rhgs-primary-n15:/rhgs/bricks/myvol
| |
| +-- Brick 1: rhgs-primary-n16:/rhgs/bricks/myvol
|
+-- Replica set 8
| |
| +-- Brick 0: rhgs-primary-n17:/rhgs/bricks/myvol
| |
| +-- Brick 1: rhgs-primary-n18:/rhgs/bricks/myvol
|
+-- Replica set 9
|
+-- Brick 0: rhgs-primary-n19:/rhgs/bricks/myvol
|
+-- Brick 1: rhgs-primary-n20:/rhgs/bricks/myvol
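Typing the 20-brick argument list by hand is error-prone. As a sketch (assuming the same brick path on every node, as above), a short loop can generate the list for the `gluster volume create` command:

```shell
# Sketch: build the ordered 20-brick list so that consecutive bricks
# land on consecutive nodes, giving the n01/n02, n03/n04, ... replica
# pairs shown in the topology above.
bricks=""
for i in $(seq -w 1 20); do
  bricks="$bricks rhgs-primary-n${i}:/rhgs/bricks/myvol"
done
echo "$bricks"
# The list can then be passed to:
#   gluster volume create myvol replica 2 $bricks
```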
On the secondary trusted pool, create a 10-brick Distribute volume:
# gluster volume create myvol-slave \
rhgs-secondary-n01:/rhgs/bricks/myvol-slave \
rhgs-secondary-n02:/rhgs/bricks/myvol-slave \
rhgs-secondary-n03:/rhgs/bricks/myvol-slave \
rhgs-secondary-n04:/rhgs/bricks/myvol-slave \
rhgs-secondary-n05:/rhgs/bricks/myvol-slave \
rhgs-secondary-n06:/rhgs/bricks/myvol-slave \
rhgs-secondary-n07:/rhgs/bricks/myvol-slave \
rhgs-secondary-n08:/rhgs/bricks/myvol-slave \
rhgs-secondary-n09:/rhgs/bricks/myvol-slave \
rhgs-secondary-n10:/rhgs/bricks/myvol-slave
volume create: myvol-slave: success: please start the volume to access data
# gluster volume start myvol-slave
volume start: myvol-slave: success
# gluster volume info myvol-slave
Volume Name: myvol-slave
Type: Distribute
Volume ID: 64295b00-ac19-436c-9aac-6069e0a5b8cf
Status: Started
Snap Volume: no
Number of Bricks: 10
Transport-type: tcp
Bricks:
Brick1: rhgs-secondary-n01:/rhgs/bricks/myvol-slave
Brick2: rhgs-secondary-n02:/rhgs/bricks/myvol-slave
Brick3: rhgs-secondary-n03:/rhgs/bricks/myvol-slave
Brick4: rhgs-secondary-n04:/rhgs/bricks/myvol-slave
Brick5: rhgs-secondary-n05:/rhgs/bricks/myvol-slave
Brick6: rhgs-secondary-n06:/rhgs/bricks/myvol-slave
Brick7: rhgs-secondary-n07:/rhgs/bricks/myvol-slave
Brick8: rhgs-secondary-n08:/rhgs/bricks/myvol-slave
Brick9: rhgs-secondary-n09:/rhgs/bricks/myvol-slave
Brick10: rhgs-secondary-n10:/rhgs/bricks/myvol-slave
Options Reconfigured:
performance.readdir-ahead: on
snap-max-hard-limit: 256
snap-max-soft-limit: 90
auto-delete: disable
The resulting Gluster volume topology is:
Distribute set
|
+-- Brick 0: rhgs-secondary-n01:/rhgs/bricks/myvol-slave
|
+-- Brick 1: rhgs-secondary-n02:/rhgs/bricks/myvol-slave
|
+-- Brick 2: rhgs-secondary-n03:/rhgs/bricks/myvol-slave
|
+-- Brick 3: rhgs-secondary-n04:/rhgs/bricks/myvol-slave
|
+-- Brick 4: rhgs-secondary-n05:/rhgs/bricks/myvol-slave
|
+-- Brick 5: rhgs-secondary-n06:/rhgs/bricks/myvol-slave
|
+-- Brick 6: rhgs-secondary-n07:/rhgs/bricks/myvol-slave
|
+-- Brick 7: rhgs-secondary-n08:/rhgs/bricks/myvol-slave
|
+-- Brick 8: rhgs-secondary-n09:/rhgs/bricks/myvol-slave
|
+-- Brick 9: rhgs-secondary-n10:/rhgs/bricks/myvol-slave
4.13.3. Setting up Geo-Replication from the Primary to the Secondary Region
From a primary region node, establish geo-replication from the local myvol volume to the remote region myvol-slave volume.
- As a prerequisite, all secondary (slave-side) nodes must allow root user login via SSH. Run the commands below on all of nodes rhgs-secondary-n{01..10}.
# sed -i s/PermitRootLogin\ no/PermitRootLogin\ yes/ \
/etc/ssh/sshd_config
# service sshd restart
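The same edit can be rehearsed safely before touching the real daemon configuration. This sketch applies the substitution to a temporary copy of a minimal sshd_config (hypothetical contents), so it can be run anywhere; on the secondary nodes the target is /etc/ssh/sshd_config itself, followed by the sshd restart:

```shell
# Sketch: flip PermitRootLogin to yes in a temporary copy of a minimal
# sshd_config, then confirm the resulting setting.
cfg=$(mktemp)
printf 'Port 22\nPermitRootLogin no\n' > "$cfg"
sed -i 's/PermitRootLogin no/PermitRootLogin yes/' "$cfg"
result=$(grep PermitRootLogin "$cfg")
echo "$result"
rm -f "$cfg"
```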
- Create an SSH key pair for the root user on rhgs-primary-n01, and copy the contents of the public key:
# ssh-keygen
# cat ~root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAmtzZdIR+pEl16LqH0kbGQfA7sTe1iWHhV/x+5zVDb91Z+gzMVdBTBaLyugeoBlxzOeFFnc/7a9TwNSr7YWt/yKZxh+lnqq7/9xcWtONUrfvLH4TEWu4dlRwCvXGsdv23lQK0YabaY9hqzshscFtSnQTmzT13LPc9drH+k7lHBu4KjA4igDvX/j41or0weneg1vcqAP9vRyh4xXgtocqBiAqJegBZ5O/QO1ynyJBysp7tIHF7HZuh3sFCxtqEPPsJkVJDiQZ/NqTr3hAqDzmn4USOX3FbSOvomlWa8We6tGb9nfUH6vBQGyKbWk4YOzm6E5oTzuRBGA1vCPmwpwR/cw== root@rhgs-primary-n01
- On rhgs-secondary-n01, add the SSH public key from rhgs-primary-n01 to the root user’s authorized_keys file:
# echo "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAmtzZdIR+pEl16LqH0kbGQfA7sTe1iWHhV/x+5zVDb91Z+gzMVdBTBaLyugeoBlxzOeFFnc/7a9TwNSr7YWt/yKZxh+lnqq7/9xcWtONUrfvLH4TEWu4dlRwCvXGsdv23lQK0YabaY9hqzshscFtSnQTmzT13LPc9drH+k7lHBu4KjA4igDvX/j41or0weneg1vcqAP9vRyh4xXgtocqBiAqJegBZ5O/QO1ynyJBysp7tIHF7HZuh3sFCxtqEPPsJkVJDiQZ/NqTr3hAqDzmn4USOX3FbSOvomlWa8We6tGb9nfUH6vBQGyKbWk4YOzm6E5oTzuRBGA1vCPmwpwR/cw== root@rhgs-primary-n01" | sudo tee ~root/.ssh/authorized_keys > /dev/null
Note
The above SSH public key is for illustration purposes only. Use the key from your own id_rsa.pub file on rhgs-primary-n01.
At this point, the root user on rhgs-primary-n01 should have passwordless SSH access to rhgs-secondary-n01. This is a prerequisite for setting up geo-replication.
- Create a common pem pub file on rhgs-primary-n01:
Note
This must be done on the node where passwordless SSH to the secondary node was configured.
# gluster system:: execute gsec_create
- Create the geo-replication session from the primary site to the secondary site. The push-pem option is needed to perform the necessary pem-file setup on the slave nodes.
# gluster volume geo-replication myvol \
rhgs-secondary-n01::myvol-slave create push-pem
- Start the geo-replication session:
# gluster volume geo-replication myvol \
rhgs-secondary-n01::myvol-slave start
- Verify the geo-replication status. After a few minutes, the initialization stage should complete, and each connection should show Active or Passive for its status.
# gluster volume geo-replication myvol rhgs-secondary-n01::myvol-slave status

MASTER NODE         MASTER VOL    MASTER BRICK          SLAVE USER    SLAVE                              STATUS     CHECKPOINT STATUS    CRAWL STATUS
----------------------------------------------------------------------------------------------------------------------------------------------------
rhgs-primary-n01    myvol         /rhgs/bricks/myvol    root          rhgs-secondary-n10::myvol-slave    Active     N/A                  Changelog Crawl
rhgs-primary-n18    myvol         /rhgs/bricks/myvol    root          rhgs-secondary-n05::myvol-slave    Passive    N/A                  N/A
rhgs-primary-n06    myvol         /rhgs/bricks/myvol    root          rhgs-secondary-n07::myvol-slave    Passive    N/A                  N/A
rhgs-primary-n02    myvol         /rhgs/bricks/myvol    root          rhgs-secondary-n02::myvol-slave    Passive    N/A                  N/A
rhgs-primary-n10    myvol         /rhgs/bricks/myvol    root          rhgs-secondary-n09::myvol-slave    Passive    N/A                  N/A
rhgs-primary-n14    myvol         /rhgs/bricks/myvol    root          rhgs-secondary-n01::myvol-slave    Passive    N/A                  N/A
rhgs-primary-n03    myvol         /rhgs/bricks/myvol    root          rhgs-secondary-n03::myvol-slave    Active     N/A                  Changelog Crawl
rhgs-primary-n09    myvol         /rhgs/bricks/myvol    root          rhgs-secondary-n08::myvol-slave    Active     N/A                  Changelog Crawl
rhgs-primary-n11    myvol         /rhgs/bricks/myvol    root          rhgs-secondary-n10::myvol-slave    Active     N/A                  Changelog Crawl
rhgs-primary-n13    myvol         /rhgs/bricks/myvol    root          rhgs-secondary-n03::myvol-slave    Active     N/A                  Changelog Crawl
rhgs-primary-n19    myvol         /rhgs/bricks/myvol    root          rhgs-secondary-n08::myvol-slave    Active     N/A                  Changelog Crawl
rhgs-primary-n17    myvol         /rhgs/bricks/myvol    root          rhgs-secondary-n04::myvol-slave    Active     N/A                  Changelog Crawl
rhgs-primary-n05    myvol         /rhgs/bricks/myvol    root          rhgs-secondary-n06::myvol-slave    Active     N/A                  Changelog Crawl
rhgs-primary-n15    myvol         /rhgs/bricks/myvol    root          rhgs-secondary-n06::myvol-slave    Active     N/A                  Changelog Crawl
rhgs-primary-n16    myvol         /rhgs/bricks/myvol    root          rhgs-secondary-n07::myvol-slave    Passive    N/A                  N/A
rhgs-primary-n07    myvol         /rhgs/bricks/myvol    root          rhgs-secondary-n04::myvol-slave    Active     N/A                  Changelog Crawl
rhgs-primary-n20    myvol         /rhgs/bricks/myvol    root          rhgs-secondary-n09::myvol-slave    Passive    N/A                  N/A
rhgs-primary-n12    myvol         /rhgs/bricks/myvol    root          rhgs-secondary-n02::myvol-slave    Passive    N/A                  N/A
rhgs-primary-n04    myvol         /rhgs/bricks/myvol    root          rhgs-secondary-n01::myvol-slave    Passive    N/A                  N/A
rhgs-primary-n08    myvol         /rhgs/bricks/myvol    root          rhgs-secondary-n05::myvol-slave    Passive    N/A                  N/A
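The status listing can be checked mechanically: with replica 2, each replica pair should contribute exactly one Active and one Passive worker, so a healthy 10x2 volume shows 10 of each. The sketch below tallies the STATUS column (field 6) from two sample rows standing in for the full listing; on a live node, pipe the real status output into the same `awk`:

```shell
# Sketch: count Active vs Passive geo-replication workers from
# status-style rows. The two sample rows are placeholders for the
# full 20-row listing.
rows='rhgs-primary-n01 myvol /rhgs/bricks/myvol root rhgs-secondary-n10::myvol-slave Active N/A Changelog Crawl
rhgs-primary-n02 myvol /rhgs/bricks/myvol root rhgs-secondary-n02::myvol-slave Passive N/A N/A'
counts=$(printf '%s\n' "$rows" | awk '$6=="Active"{a++} $6=="Passive"{p++} END{printf "%d active, %d passive", a, p}')
echo "$counts"
```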
At this point, the 100 TB Gluster volume is fully ready for use, with cross-zone synchronous data replication on the primary side and remote asynchronous data replication to a read-only volume on the secondary side located in a separate region.