8.10. Managing Split-brain
- Data split-brain: The contents of the file under split-brain are different on the different replica pairs, and automatic healing is not possible.
- Metadata split-brain: The metadata of the file (for example, user-defined extended attributes) differs between the replica pairs, and automatic healing is not possible.
- Entry split-brain: This happens when a file has a different gfid on each of the replica pairs.
8.10.1. Preventing Split-brain
8.10.1.1. Configuring Server-Side Quorum
You can enable server-side quorum by setting the cluster.server-quorum-type volume option to server. For more information on this volume option, see Section 8.1, “Configuring Volume Options”.
Server-side quorum is enforced by the glusterd service. Whenever the glusterd service on a machine observes that the quorum is not met, it brings down the bricks to prevent data split-brain. When the network connections are brought back up and the quorum is restored, the bricks in the volume are brought back up. When the quorum is not met for a volume, any commands that update the volume configuration, or that add or detach peers, are not allowed. Note that the glusterd service not running and the network connection between two machines being down are treated equally.
To set the quorum percentage for the trusted storage pool, run the following command:
# gluster volume set all cluster.server-quorum-ratio PERCENTAGE
For example, to set the quorum ratio to 51%:
# gluster volume set all cluster.server-quorum-ratio 51%
To enable server-side quorum on a particular volume, run the following command:
# gluster volume set VOLNAME cluster.server-quorum-type server
Important
For a two-node trusted storage pool, set the quorum ratio to a value greater than 50%, so that two nodes that are separated from each other do not both believe that they have quorum.
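After enabling server-side quorum, a quick way to check whether the pool currently meets it is to confirm how many peers glusterd reports as connected and whether the bricks of the volume are still online. The volume name testvol below is only an example:
# gluster peer status
# gluster volume status testvol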
8.10.1.2. Configuring Client-Side Quorum
Client-side quorum determines how many bricks in a replica group must be up before that replica group accepts writes. If client-side quorum is not met for m of n replica groups, only those m replica groups become read-only; the rest of the replica groups continue to allow data modifications.
Example 8.8. Client-Side Quorum
When client-side quorum is not met for replica group A, only replica group A becomes read-only. Replica groups B and C continue to allow data modifications.
Important
- If cluster.quorum-type is fixed, writes continue as long as the number of bricks that are up and running in the replica pair is equal to the count specified in the cluster.quorum-count option. This is irrespective of whether it is the first, second, or third brick; all bricks are equivalent here.
- If cluster.quorum-type is auto, then at least ceil(n/2) bricks need to be up to allow writes, where n is the replica count. For example:
  for replica 2, ceil(2/2) = 1 brick
  for replica 3, ceil(3/2) = 2 bricks
  for replica 4, ceil(4/2) = 2 bricks
  for replica 5, ceil(5/2) = 3 bricks
  for replica 6, ceil(6/2) = 3 bricks
  and so on
  In addition, for auto, if the number of bricks that are up is exactly ceil(n/2) and n is an even number, then the first brick of the replica must also be up to allow writes. For replica 6, if more than 3 bricks are up, they can be any of the bricks; but if exactly 3 bricks are up, the first brick must be up and running. (A small arithmetic sketch of this rule follows this list.)
- In a three-way replication setup, it is recommended to set cluster.quorum-type to auto to avoid split-brain. If the quorum is not met, the replica pair becomes read-only.
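The auto rule is plain integer arithmetic. The following minimal shell sketch (the replica count n=5 is only an illustration) shows how the required brick count can be computed:
n=5                          # hypothetical replica count
echo $(( (n + 1) / 2 ))      # prints 3: at least 3 of the 5 bricks must be up for writes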
Configure the client-side quorum using the cluster.quorum-type and cluster.quorum-count options. For more information on these options, see Section 8.1, “Configuring Volume Options”.
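For example, if you prefer a fixed client-side quorum instead of auto, both options must be set; the count value 2 below is only illustrative:
# gluster volume set VOLNAME cluster.quorum-type fixed
# gluster volume set VOLNAME cluster.quorum-count 2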
Important
Client-side quorum is also enabled when you apply the virt option group with the gluster volume set VOLNAME group virt command. On a two-way replica setup, if the first brick in the replica pair is offline, virtual machines will be paused because quorum is not met and writes are disallowed. To disable the client-side quorum in such a setup, run the following command:
# gluster volume reset VOLNAME quorum-type
This example shows how to set server-side and client-side quorum on a Distributed-Replicate volume to avoid a split-brain scenario. The configuration in this example is a 2 x 2 (4 bricks) Distributed-Replicate setup.
# gluster volume info testvol
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 0df52d58-bded-4e5d-ac37-4c82f7c89cfh
Status: Created
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: server1:/bricks/brick1
Brick2: server2:/bricks/brick2
Brick3: server3:/bricks/brick3
Brick4: server4:/bricks/brick4
Set the server-side quorum on the volume and the quorum ratio for the trusted storage pool:
# gluster volume set VOLNAME cluster.server-quorum-type server
# gluster volume set all cluster.server-quorum-ratio 51%
Set the client-side quorum by setting the quorum-type option to auto to allow writes to the file only if the percentage of active replicate bricks is more than 50% of the total number of bricks that constitute that replica:
# gluster volume set VOLNAME quorum-type auto
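To confirm the values that were just applied, you can re-run gluster volume info, which also lists the options that have been reconfigured on the volume; testvol is the example volume from above:
# gluster volume info testvol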
Important
When the total number of bricks (n) in a replica set is an even number, it is mandatory that the n/2 bricks that are up include the primary (first) brick, and that brick must be up and running to allow writes. If n is an odd number, the n/2 count can be made up of any bricks; that is, the primary brick need not be up and running to allow writes.
8.10.2. Recovering from File Split-brain
Steps to recover from a file split-brain
- Run the following command to obtain the path of the file that is in split-brain:
# gluster volume heal VOLNAME info split-brain
From the command output, identify the files for which file operations performed from the client keep failing with Input/Output error.
- Close the applications that opened the split-brain file from the mount point. If you are using a virtual machine, you must power off the machine.
- Obtain and verify the AFR changelog extended attributes of the file using the getfattr command. Then identify the type of split-brain to determine which of the bricks contains the 'good copy' of the file.
# getfattr -d -m . -e hex <file-path-on-brick>
For example:
# getfattr -d -e hex -m . brick-a/file.txt
#file: brick-a/file.txt
security.selinux=0x726f6f743a6f626a6563745f723a66696c655f743a733000
trusted.afr.vol-client-2=0x000000000000000000000000
trusted.afr.vol-client-3=0x000000000200000000000000
trusted.gfid=0x307a5c9efddd4e7c96e94fd4bcdcbd1b
Extended attributes of the form trusted.afr.VOLNAME-client-<subvolume-index> are used by AFR to maintain the changelog of the file. The values of trusted.afr.VOLNAME-client-<subvolume-index> are calculated by the glusterFS client (FUSE or NFS-server) processes. When the glusterFS client modifies a file or directory, the client contacts each brick and updates the changelog extended attribute according to the response of the brick.
subvolume-index is the brick number (as listed in the gluster volume info VOLNAME output) minus 1.
For example:
# gluster volume info vol
Volume Name: vol
Type: Distributed-Replicate
Volume ID: 4f2d7849-fbd6-40a2-b346-d13420978a01
Status: Created
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
brick-a: server1:/gfs/brick-a
brick-b: server1:/gfs/brick-b
brick-c: server1:/gfs/brick-c
brick-d: server1:/gfs/brick-d
brick-e: server1:/gfs/brick-e
brick-f: server1:/gfs/brick-f
brick-g: server1:/gfs/brick-g
brick-h: server1:/gfs/brick-h
In the example above:
Brick         | Replica set | Brick subvolume index
----------------------------------------------------------------------------
-/gfs/brick-a | 0           | 0
-/gfs/brick-b | 0           | 1
-/gfs/brick-c | 1           | 2
-/gfs/brick-d | 1           | 3
-/gfs/brick-e | 2           | 4
-/gfs/brick-f | 2           | 5
-/gfs/brick-g | 3           | 6
-/gfs/brick-h | 3           | 7
Each file in a brick maintains the changelog of itself and of the file present in each of the other bricks in its replica set, as seen by that brick.
In the example volume given above, all files in brick-a will have two entries, one for itself and one for the file present in its replica pair:
- trusted.afr.vol-client-0=0x000000000000000000000000 : the changelog for itself (brick-a)
- trusted.afr.vol-client-1=0x000000000000000000000000 : the changelog for brick-b as seen by brick-a
Likewise, all files in brick-b will have the following:
- trusted.afr.vol-client-0=0x000000000000000000000000 : the changelog for brick-a as seen by brick-b
- trusted.afr.vol-client-1=0x000000000000000000000000 : the changelog for itself (brick-b)
The same can be extended to the other replica pairs.
Interpreting changelog (approximate pending operation count) values
Each extended attribute has a value that is 24 hexadecimal digits. The first 8 digits represent the changelog of data, the second 8 digits represent the changelog of metadata, and the last 8 digits represent the changelog of directory entries.
Pictorially:
0x 000003d7 00000001 00000000
        |        |        |
        |        |         \_ changelog of directory entries
        |         \_ changelog of metadata
         \_ changelog of data
For directories, the metadata and entry changelogs are valid. For regular files, the data and metadata changelogs are valid. For special files such as device files, the metadata changelog is valid. When a file split-brain happens, it can be a data split-brain, a metadata split-brain, or both.
The following is an example of both data and metadata split-brain on the same file:
# getfattr -d -m . -e hex /gfs/brick-?/a
getfattr: Removing leading '/' from absolute path names
#file: gfs/brick-a/a
trusted.afr.vol-client-0=0x000000000000000000000000
trusted.afr.vol-client-1=0x000003d70000000100000000
trusted.gfid=0x80acdbd886524f6fbefa21fc356fed57
#file: gfs/brick-b/a
trusted.afr.vol-client-0=0x000003b00000000100000000
trusted.afr.vol-client-1=0x000000000000000000000000
trusted.gfid=0x80acdbd886524f6fbefa21fc356fed57
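To pull the three fields out of a changelog value mechanically, a short shell sketch is enough; the value below is the one shown for trusted.afr.vol-client-1 on brick-a in the example above, and the variable name is arbitrary:
val=000003d70000000100000000          # changelog value without the leading 0x
echo "data changelog:     ${val:0:8}"
echo "metadata changelog: ${val:8:8}"
echo "entry changelog:    ${val:16:8}"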
Scrutinize the changelogs
The changelog extended attributes on file /gfs/brick-a/a are as follows:
- The first 8 digits of trusted.afr.vol-client-0 are all zeros (0x00000000................), and the first 8 digits of trusted.afr.vol-client-1 are not all zeros (0x000003d7................). So the changelog on /gfs/brick-a/a implies that some data operations succeeded on itself but failed on /gfs/brick-b/a.
- The second 8 digits of trusted.afr.vol-client-0 are all zeros (0x........00000000........), and the second 8 digits of trusted.afr.vol-client-1 are not all zeros (0x........00000001........). So the changelog on /gfs/brick-a/a implies that some metadata operations succeeded on itself but failed on /gfs/brick-b/a.
The changelog extended attributes on file /gfs/brick-b/a are as follows:
- The first 8 digits of trusted.afr.vol-client-0 are not all zeros (0x000003b0................), and the first 8 digits of trusted.afr.vol-client-1 are all zeros (0x00000000................). So the changelog on /gfs/brick-b/a implies that some data operations succeeded on itself but failed on /gfs/brick-a/a.
- The second 8 digits of trusted.afr.vol-client-0 are not all zeros (0x........00000001........), and the second 8 digits of trusted.afr.vol-client-1 are all zeros (0x........00000000........). So the changelog on /gfs/brick-b/a implies that some metadata operations succeeded on itself but failed on /gfs/brick-a/a.
Here, both copies have data and metadata changes that are not on the other file. Hence, it is both a data and metadata split-brain.
Deciding on the correct copy
You must inspect the stat and getfattr output of the files to decide which metadata to retain, and the contents of the file to decide which data to retain. Continuing with the example above, we are retaining the data of /gfs/brick-a/a and the metadata of /gfs/brick-b/a.
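To compare the two copies before making this decision, plain filesystem tools on the bricks are sufficient; this is only a suggested check, reusing the example brick paths:
# stat /gfs/brick-a/a /gfs/brick-b/a        # compare size, timestamps, ownership, permissions
# md5sum /gfs/brick-a/a /gfs/brick-b/a      # compare file contents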
Resetting the relevant changelogs to resolve the split-brain
Resolving the data split-brain
You must change the changelog extended attributes on the files as if some data operations succeeded on /gfs/brick-a/a but failed on /gfs/brick-b/a. But /gfs/brick-b/a should not have any changelog showing data operations succeeded on /gfs/brick-b/a but failed on /gfs/brick-a/a. You must reset the data part of the changelog on trusted.afr.vol-client-0 of /gfs/brick-b/a.
Resolving the metadata split-brain
You must change the changelog extended attributes on the files as if some metadata operations succeeded on /gfs/brick-b/a but failed on /gfs/brick-a/a. But /gfs/brick-a/a should not have any changelog showing metadata operations succeeded on /gfs/brick-a/a but failed on /gfs/brick-b/a. You must reset the metadata part of the changelog on trusted.afr.vol-client-1 of /gfs/brick-a/a.
Run the following commands to reset the extended attributes:
- On /gfs/brick-b/a, to change trusted.afr.vol-client-0 from 0x000003b00000000100000000 to 0x000000000000000100000000, execute the following command:
# setfattr -n trusted.afr.vol-client-0 -v 0x000000000000000100000000 /gfs/brick-b/a
- On /gfs/brick-a/a, to change trusted.afr.vol-client-1 from 0x000003d70000000100000000 to 0x000003d70000000000000000, execute the following command:
# setfattr -n trusted.afr.vol-client-1 -v 0x000003d70000000000000000 /gfs/brick-a/a
After you reset the extended attributes, the changelogs look similar to the following:
# getfattr -d -m . -e hex /gfs/brick-?/a
getfattr: Removing leading '/' from absolute path names
#file: gfs/brick-a/a
trusted.afr.vol-client-0=0x000000000000000000000000
trusted.afr.vol-client-1=0x000003d70000000000000000
trusted.gfid=0x80acdbd886524f6fbefa21fc356fed57
#file: gfs/brick-b/a
trusted.afr.vol-client-0=0x000000000000000100000000
trusted.afr.vol-client-1=0x000000000000000000000000
trusted.gfid=0x80acdbd886524f6fbefa21fc356fed57
Resolving Directory entry split-brain
AFR has the ability to conservatively merge different entries in a directory when there is a split-brain on the directory. If directory storage has entries 1 and 2 on one brick and entries 3 and 4 on the other brick, AFR merges all of the entries so that the directory contains 1, 2, 3, 4. However, this may result in deleted files re-appearing if the split-brain happened because of the deletion of files in the directory. Split-brain resolution needs human intervention when there is at least one entry that has the same file name but a different gfid in that directory.
For example:
On brick-a the directory has two entries, file1 with gfid_x and file2. On brick-b the directory has two entries, file1 with gfid_y and file3. Here the gfids of file1 on the two bricks are different. This kind of directory split-brain needs human intervention to resolve: you must remove either file1 on brick-a or file1 on brick-b to resolve the split-brain.
In addition, the corresponding gfid-link file must be removed. The gfid-link files are present in the .glusterfs directory in the top-level directory of the brick. If the gfid of the file is 0x307a5c9efddd4e7c96e94fd4bcdcbd1b (the trusted.gfid extended attribute received from the getfattr command earlier), the gfid-link file can be found at /gfs/brick-a/.glusterfs/30/7a/307a5c9efddd4e7c96e94fd4bcdcbd1b.
Warning
Before deleting the gfid-link, you must ensure that there are no hard links to the file present on that brick. If hard links exist, you must delete them.
- Trigger self-heal by running the following command:
# ls -l <file-path-on-gluster-mount>
or
# gluster volume heal VOLNAME
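In connection with the warning above about hard links on a gfid-link, a quick way to check is to look at the link count of the file on that brick; the sketch below assumes GNU stat and find and uses the document's placeholder style for the paths:
# stat -c %h <file-path-on-brick>                                    # link count of the file on the brick
# find <brick-top-level-directory> -samefile <file-path-on-brick>    # every path on the brick sharing the same inode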
8.10.3. Triggering Self-Healing on Replicated Volumes
- To view the list of files that need healing:
# gluster volume heal VOLNAME info
For example, to view the list of files on test-volume that need healing:
# gluster volume heal test-volume info
Brick server1:/gfs/test-volume_0
Number of entries: 0
Brick server2:/gfs/test-volume_1
/95.txt
/32.txt
/66.txt
/35.txt
/18.txt
/26.txt - Possibly undergoing heal
/47.txt
/55.txt
/85.txt - Possibly undergoing heal
...
Number of entries: 101
- To trigger self-healing only on the files which require healing:
# gluster volume heal VOLNAME
For example, to trigger self-healing on files which require healing on test-volume:
# gluster volume heal test-volume
Heal operation on volume test-volume has been successful
- To trigger self-healing on all the files on a volume:
# gluster volume heal VOLNAME full
For example, to trigger self-heal on all the files on test-volume:
# gluster volume heal test-volume full
Heal operation on volume test-volume has been successful
- To view the list of files on a volume that are in a split-brain state:
# gluster volume heal VOLNAME info split-brain
For example, to view the list of files on test-volume that are in a split-brain state:
# gluster volume heal test-volume info split-brain
Brick server1:/gfs/test-volume_2
Number of entries: 12
at                   path on brick
----------------------------------
2012-06-13 04:02:05  /dir/file.83
2012-06-13 04:02:05  /dir/file.28
2012-06-13 04:02:05  /dir/file.69
Brick server2:/gfs/test-volume_2
Number of entries: 12
at                   path on brick
----------------------------------
2012-06-13 04:02:05  /dir/file.83
2012-06-13 04:02:05  /dir/file.28
2012-06-13 04:02:05  /dir/file.69
...
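If you want to watch healing progress over time rather than running the command once, wrapping the heal info output in watch is one simple approach; the 60-second interval is arbitrary:
# watch -n 60 'gluster volume heal test-volume info'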