14.3. Preparing to Deploy Geo-replication
14.3.1. Exploring Geo-replication Deployment Scenarios
- Geo-replication over LAN
- Geo-replication over WAN
- Geo-replication over the Internet
- Multi-site cascading geo-replication
14.3.2. Geo-replication Deployment Overview
- Verify that your environment matches the minimum system requirements. See Section 14.3.3, “Prerequisites”.
- Determine the appropriate deployment scenario. See Section 14.3.1, “Exploring Geo-replication Deployment Scenarios”.
- Start geo-replication on the master and slave systems. See Section 14.4, “Starting Geo-replication”.
14.3.3. Prerequisites
- The master and slave volumes must run the same version of Red Hat Gluster Storage.
- The slave node must not be a peer of any of the nodes of the master trusted storage pool.
- Passwordless SSH access is required between one node of the master volume (the node from which the geo-replication create command will be executed) and one node of the slave volume (the node whose IP address or hostname will be specified as the slave when running the geo-replication create command).

Create the public and private keys using ssh-keygen (without a passphrase) on the master node:

# ssh-keygen

Copy the public key to the slave node using the following command:

# ssh-copy-id -i identity_file root@slave_node_IPaddress/Hostname

If you are setting up a non-root geo-replication session, copy the public key to the respective user's location instead.

Note

- Passwordless SSH access is required from the master node to the slave node; it is not required from the slave node to the master node.
- The ssh-copy-id command does not work if the ssh authorized_keys file is configured in a custom location. In that case, copy the contents of the .ssh/id_rsa.pub file from the master and paste it into the authorized_keys file in the custom location on the slave node.

A passwordless SSH connection is also required for gsyncd between every node in the master and every node in the slave. The gluster system:: execute gsec_create command creates secret-pem files on all the nodes in the master, which are used to implement the passwordless SSH connections. The push-pem option in the geo-replication create command pushes these keys to all the nodes in the slave.

For more information on the gluster system:: execute gsec_create and push-pem commands, see Section 14.3.4.1, “Setting Up your Environment for Geo-replication Session”.
14.3.4. Setting Up your Environment
- Section 14.3.4.1, “Setting Up your Environment for Geo-replication Session” - In this method, the slave mount is owned by the root user.
- Section 14.3.4.2, “Setting Up your Environment for a Secure Geo-replication Slave” - This method is more secure as the slave mount is owned by a normal user.
- The time must be uniform across all the servers hosting bricks of a geo-replicated master volume. It is recommended to set up an NTP (Network Time Protocol) service to keep the bricks' clocks synchronized and avoid the effects of clock drift.

For example: in a replicated volume where brick1 of the master has the time 12:20 and brick2 of the master has the time 12:10, a 10 minute lag, changes made on brick2 during this period may go unnoticed while files are synchronized with the slave.

For more information on configuring NTP, see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/ch-Configuring_NTP_Using_ntpd.html.
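The effect of clock drift described above can be illustrated with a small shell sketch. The check_drift helper below is hypothetical (not a Gluster or NTP tool); it assumes you have already sampled each brick's clock as epoch seconds, for example with ssh brick_host date +%s, and flags when the lag exceeds an allowed threshold:

```shell
#!/bin/sh
# Hypothetical helper: warn when two brick clocks (given as epoch seconds)
# differ by more than an allowed threshold in seconds.
check_drift() {
    t1=$1; t2=$2; max=$3
    diff=$((t1 - t2))
    # Take the absolute value of the difference.
    [ "$diff" -lt 0 ] && diff=$((-diff))
    if [ "$diff" -gt "$max" ]; then
        echo "WARNING: clock drift ${diff}s exceeds ${max}s"
    else
        echo "OK: clock drift ${diff}s within ${max}s"
    fi
}

# The 10-minute lag from the example above: 12:20 (44400s) vs 12:10 (43800s).
check_drift 44400 43800 5
```

Running this prints a warning for the 600-second lag, which is exactly the situation NTP is meant to prevent.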
14.3.4.1. Setting Up your Environment for Geo-replication Session
Creating Geo-replication Sessions
- To create a common pem pub file, run the following command on the master node where the passwordless SSH connection is configured:

# gluster system:: execute gsec_create

- Create the geo-replication session using the following command. The push-pem option is needed to perform the necessary pem-file setup on the slave nodes.

# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL create push-pem [force]

For example:

# gluster volume geo-replication Volume1 example.com::slave-vol create push-pem

Note

There must be passwordless SSH access between the node from which this command is run and the slave host specified in the command. The command performs slave verification, which includes checking for a valid slave URL, a valid slave volume, and available space on the slave. If verification fails, you can use the force option, which ignores the failed verification and creates the geo-replication session.

- Configure the meta-volume for geo-replication:

# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config use_meta_volume true

For example:

# gluster volume geo-replication Volume1 example.com::slave-vol config use_meta_volume true

For more information on configuring the meta-volume, see Section 14.3.5, “Configuring a Meta-Volume”.

- Start the geo-replication session by running the following command on the master node:

# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start [force]

- Verify the status of the created session by running the following command:

# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status
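The procedure above always runs the same five commands for a given master/slave pair. The sketch below (a hypothetical helper, not part of Red Hat Gluster Storage) only prints the sequence for review, so nothing is executed until you run the printed commands yourself on the master node:

```shell
#!/bin/sh
# Hypothetical helper: print the geo-replication setup sequence for a
# given master volume and slave specification. It does not run anything.
print_georep_steps() {
    master=$1
    slave=$2
    printf '%s\n' \
        "gluster system:: execute gsec_create" \
        "gluster volume geo-replication $master $slave create push-pem" \
        "gluster volume geo-replication $master $slave config use_meta_volume true" \
        "gluster volume geo-replication $master $slave start" \
        "gluster volume geo-replication $master $slave status"
}

# Using the example master volume and slave from this section:
print_georep_steps Volume1 example.com::slave-vol
```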
14.3.4.2. Setting Up your Environment for a Secure Geo-replication Slave
This method relies on mountbroker, an internal service of glusterd which manages the mounts for unprivileged slave accounts. You must perform additional steps to configure glusterd with the appropriate mountbroker access control directives. The following example demonstrates this process:
- Create a new group. For example, geogroup.

- Create an unprivileged account. For example, geoaccount. Add geoaccount as a member of the geogroup group.

- As root, create a new directory with permissions 0711 and with the correct SELinux context. Ensure that the location where this directory is created is writable only by root, but that geoaccount is able to access it. For example:

# mkdir /var/mountbroker-root
# chmod 0711 /var/mountbroker-root
# semanage fcontext -a -e /home /var/mountbroker-root
# restorecon -Rv /var/mountbroker-root

- Run the following commands on any one of the Slave nodes:

# gluster system:: execute mountbroker opt mountbroker-root /var/mountbroker-root
# gluster system:: execute mountbroker user geoaccount slavevol
# gluster system:: execute mountbroker opt geo-replication-log-group geogroup
# gluster system:: execute mountbroker opt rpc-auth-allow-insecure on

See Section 2.4, “Storage Concepts” for information on the glusterd.vol volume file of a Red Hat Gluster Storage volume.

If the above commands fail, check whether the glusterd.vol file is available in the /etc/glusterfs/ directory. If it is not found, create a glusterd.vol file containing the default configuration, save it in the /etc/glusterfs/ directory, and re-run the commands listed above to set all the required geo-replication options.

- If you have multiple slave volumes on the Slave, repeat Step 2 for each of them and run the following commands to update the vol file:
# gluster system:: execute mountbroker user geoaccount2 slavevol2
# gluster system:: execute mountbroker user geoaccount3 slavevol3

You can use the gluster system:: execute mountbroker info command to view the configured mountbroker options.

- You can add multiple slave volumes within the same account (geoaccount) by providing a comma-separated list (without spaces) as the argument of mountbroker-geo-replication.geogroup. You can also have multiple options of the form mountbroker-geo-replication.*. It is recommended to use one service account per master machine. For example, if there are multiple slave volumes on the Slave for the master machines Master1, Master2, and Master3, create a dedicated service user on the Slave for each of them by repeating Step 2 (for example, geoaccount1, geoaccount2, and geoaccount3), and then run the following commands to add the corresponding options to the volfile:

# gluster system:: execute mountbroker user geoaccount1 slavevol11,slavevol12,slavevol13
# gluster system:: execute mountbroker user geoaccount2 slavevol21,slavevol22
# gluster system:: execute mountbroker user geoaccount3 slavevol31
- Restart the glusterd service on all the Slave nodes.

After you set up an auxiliary glusterFS mount for the unprivileged account on all the Slave nodes, perform the following steps to set up a non-root geo-replication session:

- Set up passwordless SSH from one of the master nodes to the user on one of the slave nodes. For example, to set up passwordless SSH for the user geoaccount:

# ssh-keygen
# ssh-copy-id -i identity_file geoaccount@slave_node_IPaddress/Hostname

- Create a common pem pub file by running the following command on the master node where the passwordless SSH connection is configured to the user on the slave node:

# gluster system:: execute gsec_create

- Create a geo-replication relationship between the master and the slave user by running the following command on the master node. For example:

# gluster volume geo-replication MASTERVOL geoaccount@SLAVENODE::slavevol create push-pem

If you have multiple slave volumes and/or multiple accounts, create a geo-replication session with that particular user and volume. For example:

# gluster volume geo-replication MASTERVOL geoaccount2@SLAVENODE::slavevol2 create push-pem

- On the slave node that is used to create the relationship, run /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh as root, with the user name, master volume name, and slave volume name as the arguments. For example:

# /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh geoaccount MASTERVOL SLAVEVOL_NAME

- Configure the meta-volume for geo-replication:

# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config use_meta_volume true

For example:

# gluster volume geo-replication Volume1 example.com::slave-vol config use_meta_volume true

For more information on configuring the meta-volume, see Section 14.3.5, “Configuring a Meta-Volume”.

- Start the geo-replication session with the slave user by running the following command on the master node. For example:

# gluster volume geo-replication MASTERVOL geoaccount@SLAVENODE::slavevol start

- Verify the status of the geo-replication session by running the following command on the master node:

# gluster volume geo-replication MASTERVOL geoaccount@SLAVENODE::slavevol status
When a mountbroker geo-replication session is deleted, use the following command to remove volumes per mountbroker user. If the volume to be removed is the last one for the mountbroker user, the user is also removed.

- To delete volumes per mountbroker user:

# gluster system:: execute mountbroker volumedel geoaccount2 slavevol2

You can delete multiple volumes per mountbroker user by providing a comma-separated list (without spaces) as the argument of this command:

# gluster system:: execute mountbroker volumedel geoaccount2 slavevol2,slavevol3
Important

For a mountbroker session, run the status command with the unprivileged user account specified in the slave URL:

# gluster volume geo-replication MASTERVOL geoaccount@SLAVENODE::slavevol status

where geoaccount is the name of the unprivileged user account.
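The per-account mountbroker registrations in this section follow a fixed pattern, so for many accounts they can be generated from a list of account=volumes pairs. The gen_mountbroker_cmds helper below is a hypothetical sketch that only prints the commands for review; the account and volume names are the examples used above:

```shell
#!/bin/sh
# Hypothetical generator: print the mountbroker registration commands
# for a root directory and a list of account=comma-separated-volumes
# pairs. Nothing is executed; the output is meant to be reviewed and
# then run on a Slave node.
gen_mountbroker_cmds() {
    root_dir=$1; shift
    echo "gluster system:: execute mountbroker opt mountbroker-root $root_dir"
    for pair in "$@"; do
        user=${pair%%=*}     # text before the first '='
        vols=${pair#*=}      # text after the first '='
        echo "gluster system:: execute mountbroker user $user $vols"
    done
}

gen_mountbroker_cmds /var/mountbroker-root \
    geoaccount1=slavevol11,slavevol12,slavevol13 \
    geoaccount2=slavevol21,slavevol22 \
    geoaccount3=slavevol31
```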
14.3.5. Configuring a Meta-Volume
A shared storage volume named gluster_shared_storage is created in the cluster and is mounted at /var/run/gluster/shared_storage on all the nodes in the cluster. For more information on setting up the shared storage volume, see Section 10.8, “Setting up Shared Storage Volume”.
- Configure the meta-volume for geo-replication:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config use_meta_volume true

For example:

# gluster volume geo-replication Volume1 example.com::slave-vol config use_meta_volume true
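Since use_meta_volume assumes the shared storage is already mounted at /var/run/gluster/shared_storage, a quick pre-check can avoid enabling it prematurely. The meta_volume_mounted helper below is a hypothetical sketch (not a Gluster command) that scans mount-table lines read from stdin:

```shell
#!/bin/sh
# Hypothetical pre-check: report whether the shared storage volume is
# mounted at /var/run/gluster/shared_storage, given mount-table lines
# (such as the contents of /proc/mounts) on stdin.
meta_volume_mounted() {
    grep -q ' /var/run/gluster/shared_storage '
}

# On a storage node you would run:  meta_volume_mounted < /proc/mounts
# Here an example mount-table line is used for illustration:
if printf 'host:/gluster_shared_storage /var/run/gluster/shared_storage fuse.glusterfs rw 0 0\n' | meta_volume_mounted; then
    echo "meta-volume mounted: safe to enable use_meta_volume"
else
    echo "meta-volume NOT mounted: set up shared storage first"
fi
```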