31.3. Network Partition Recovery Examples
The following examples illustrate how JBoss Data Grid handles network partitions and their subsequent recovery:
- A distributed four-node cluster with numOwners set to 3 in Section 31.3.1, “Distributed 4-Node Cache Example With 3 NumOwners”
- A distributed four-node cluster with numOwners set to 2 in Section 31.3.2, “Distributed 4-Node Cache Example With 2 NumOwners”
- A distributed five-node cluster with numOwners set to 3 in Section 31.3.3, “Distributed 5-Node Cache Example With 3 NumOwners”
- A replicated four-node cluster with numOwners set to 4 in Section 31.3.4, “Replicated 4-Node Cache Example With 4 NumOwners”
- A replicated five-node cluster with numOwners set to 5 in Section 31.3.5, “Replicated 5-Node Cache Example With 5 NumOwners”
- A replicated eight-node cluster with numOwners set to 8 in Section 31.3.6, “Replicated 8-Node Cache Example With 8 NumOwners”
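All of the scenarios below hinge on two settings: the numOwners value of the cache and partition handling being enabled. For reference, the cache in the first scenario could be configured roughly as in the following sketch, which assumes the Infinispan ConfigurationBuilder API that JBoss Data Grid builds on; the cache name is illustrative.

```java
import org.infinispan.Cache;
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

public class PartitionHandlingSetup {
    public static void main(String[] args) {
        // Clustered cache manager with the default JGroups transport.
        DefaultCacheManager manager = new DefaultCacheManager(
                GlobalConfigurationBuilder.defaultClusteredBuilder().build());

        // Distributed cache keeping three copies of every entry
        // (numOwners = 3), with partition handling enabled so that a
        // minority partition degrades instead of diverging silently.
        Configuration config = new ConfigurationBuilder()
                .clustering()
                    .cacheMode(CacheMode.DIST_SYNC)
                    .hash().numOwners(3)
                    .partitionHandling().enabled(true)
                .build();

        manager.defineConfiguration("exampleCache", config); // name is illustrative
        Cache<String, String> cache = manager.getCache("exampleCache");
        cache.put("k1", "v1");
    }
}
```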
31.3.1. Distributed 4-Node Cache Example With 3 NumOwners
This example scenario features a distributed cache with four nodes and four data entries (k1, k2, k3, and k4). For this cache, numOwners equals 3, which means that each data entry must have three copies on various nodes in the cache.

Figure 31.1. Cache Before and After a Network Partition
After the network partition occurs, the cache splits into two partitions of two nodes each. Both partitions enter degraded mode because neither has at least 3 (the value of numOwners) nodes left from the last stable view. As a result, none of the four entries (k1, k2, k3, and k4) are available for reads or writes. No new entries can be written in either degraded partition, as neither partition can store three copies of an entry.
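When a degraded partition rejects an operation, the caller sees it as an exception rather than a silent failure. A minimal sketch of handling that condition, assuming the Infinispan AvailabilityException thrown by partition handling; the helper name is illustrative:

```java
import org.infinispan.Cache;
import org.infinispan.partitionhandling.AvailabilityException;

public class DegradedModeReads {
    /**
     * Reads a key, returning null instead of propagating the error when
     * the local partition is degraded and cannot serve the key. In the
     * scenario above, reads of k1..k4 in either partition land in the
     * catch block until the partitions merge.
     */
    public static String readOrNull(Cache<String, String> cache, String key) {
        try {
            return cache.get(key);
        } catch (AvailabilityException e) {
            return null;
        }
    }
}
```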

Figure 31.2. Cache After Partitions Are Merged
After the network partition is fixed, the two partitions merge back into a single cluster, and the cache recovers and becomes fully available again with all four entries (k1, k2, k3, and k4).
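The recovery is observable from client code: the advanced cache exposes its availability mode, which switches back to AVAILABLE once the merge completes. A minimal sketch, assuming the Infinispan AdvancedCache API:

```java
import org.infinispan.Cache;
import org.infinispan.partitionhandling.AvailabilityMode;

public class AvailabilityProbe {
    /** Returns true once the cache has left degraded mode. */
    public static boolean isFullyAvailable(Cache<?, ?> cache) {
        return cache.getAdvancedCache().getAvailability() == AvailabilityMode.AVAILABLE;
    }
}
```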
31.3.2. Distributed 4-Node Cache Example With 2 NumOwners
In this example scenario, the distributed cache consists of four nodes and numOwners equals 2, so the four data entries (k1, k2, k3, and k4) have two copies each in the cache.

Figure 31.3. Cache Before and After a Network Partition
After the network partition occurs, both partitions enter degraded mode. In Partition 1, k1 remains available for reads and writes because numOwners equals 2 and both copies of the entry remain in Partition 1. In Partition 2, k4 remains available for reads and writes for the same reason. The entries k2 and k3 become unavailable in both partitions, as neither partition contains all copies of these entries. A new entry, k5, can be written to a partition only if that partition would own both copies of k5.
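Which keys survive in which partition is decided purely by ownership, so it can be inspected ahead of time. The sketch below lists the owners of a key; it assumes the DistributionManager.locate call present in the Infinispan versions that JBoss Data Grid 6 is based on.

```java
import java.util.List;
import org.infinispan.Cache;
import org.infinispan.remoting.transport.Address;

public class KeyOwnership {
    /**
     * Returns the nodes holding copies of the key. With numOwners = 2 the
     * list has two addresses; the key stays readable in a degraded
     * partition only if both of those nodes are inside that partition.
     */
    public static List<Address> ownersOf(Cache<String, String> cache, String key) {
        return cache.getAdvancedCache().getDistributionManager().locate(key);
    }
}
```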

Figure 31.4. Cache After Partitions Are Merged
After the network partition is fixed, the partitions merge and the cache again becomes fully available with all four entries (k1, k2, k3, and k4).
31.3.3. Distributed 5-Node Cache Example With 3 NumOwners
This example scenario features a distributed cache with five nodes and numOwners equal to 3.

Figure 31.5. Cache Before and After a Network Partition
After the network partition occurs, Partition 1 contains three nodes and Partition 2 contains two nodes. Partition 1 remains available, while Partition 2 enters degraded mode because it has fewer than 3 (the value of numOwners) nodes.
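An application can be notified when its side of a split changes availability by registering a cache listener, as in the following sketch; it assumes the Infinispan @PartitionStatusChanged listener annotation.

```java
import org.infinispan.notifications.Listener;
import org.infinispan.notifications.cachelistener.annotation.PartitionStatusChanged;
import org.infinispan.notifications.cachelistener.event.PartitionStatusChangedEvent;

@Listener
public class PartitionStatusLogger {
    /**
     * Invoked on each node when its partition changes availability, for
     * example when Partition 2 in this scenario enters degraded mode.
     */
    @PartitionStatusChanged
    public void onStatusChange(PartitionStatusChangedEvent<?, ?> event) {
        System.out.println("Availability changed to " + event.getAvailabilityMode());
    }
}
```

A node registers the listener with cache.addListener(new PartitionStatusLogger()) and is then told whenever the partition it belongs to changes mode.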

Figure 31.6. Partition 1 Rebalances and Another Entry is Added
Partition 1 rebalances so that each entry again has three copies (because numOwners equals 3) in the cache. As a result, each of the three nodes contains a copy of every entry in the cache. Next, we add a new entry, k6, to the cache. Since the numOwners value is still 3 and there are three nodes in Partition 1, each node includes a copy of k6.

Figure 31.7. Cache After Partitions Are Merged
After the network partition is fixed, the partitions merge. Since the merged cache contains enough nodes to satisfy numOwners (numOwners=3), JBoss Data Grid rebalances the nodes so that the data entries are distributed across the five nodes in the cache. The new combined cache becomes fully available.
31.3.4. Replicated 4-Node Cache Example With 4 NumOwners
This example scenario features a replicated cache with four nodes and numOwners equal to 4.
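Replication is what forces numOwners up to the cluster size here: in a replicated cache every node holds every entry, so four nodes mean four copies. A configuration sketch for this scenario, again assuming the Infinispan ConfigurationBuilder API:

```java
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

public class ReplicatedSetup {
    public static Configuration replicatedWithPartitionHandling() {
        // REPL_SYNC copies every entry to all nodes; with a four-node
        // cluster this matches the numOwners = 4 behavior described above.
        return new ConfigurationBuilder()
                .clustering()
                    .cacheMode(CacheMode.REPL_SYNC)
                    .partitionHandling().enabled(true)
                .build();
    }
}
```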

Figure 31.8. Cache Before and After a Network Partition
After the network partition occurs, the entries k1, k2, k3, and k4 become unavailable for reads and writes in both partitions because neither of the two partitions owns all copies of any of the four keys.

Figure 31.9. Cache After Partitions Are Merged
After the network partition is fixed, the partitions merge and the cache again becomes fully available with all four entries (k1, k2, k3, and k4).
31.3.5. Replicated 5-Node Cache Example With 5 NumOwners
This example scenario features a replicated cache with five nodes and numOwners equal to 5.

Figure 31.10. Cache Before and After a Network Partition
A network partition splits the replicated cache into two partitions, neither of which holds all five copies of the entries.

Figure 31.11. Both Partitions Are Merged Into One Cache
Once the network partition is fixed, the two partitions merge back into a single, fully available cache.
31.3.6. Replicated 8-Node Cache Example With 8 NumOwners
This example scenario features a replicated cache with eight nodes and numOwners equal to 8.

Figure 31.12. Cache Before and After a Network Partition
A network partition first splits the replicated cache into Partition 1 and Partition 2.

Figure 31.13. Partition 2 Further Splits into Partitions 2A and 2B
There are four potential resolutions for the partitions in this scenario:
- Case 1: Partitions 2A and 2B Merge
- Case 2: Partition 1 and 2A Merge
- Case 3: Partition 1 and 2B Merge
- Case 4: Partition 1, Partition 2A, and Partition 2B Merge Together
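Which of these resolutions occurs depends on the order in which the network heals. An application can observe each merge through a cache manager listener, as sketched below; this assumes the Infinispan @Merged listener annotation, and the log format is illustrative.

```java
import org.infinispan.notifications.Listener;
import org.infinispan.notifications.cachemanagerlistener.annotation.Merged;
import org.infinispan.notifications.cachemanagerlistener.event.MergeEvent;

@Listener
public class MergeLogger {
    /** Invoked when previously separate partitions merge into one view. */
    @Merged
    public void onMerge(MergeEvent event) {
        System.out.println("Merged subgroups " + event.getSubgroupsMerged()
                + " into view " + event.getNewMembers());
    }
}
```

The listener is registered with the cache manager, for example manager.addListener(new MergeLogger()), and fires once per merge; depending on how the network heals, Case 4 may therefore appear as a single three-way merge or as two successive merges.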

Figure 31.14. Case 1: Partitions 2A and 2B Merge

Figure 31.15. Case 2: Partition 1 and 2A Merge

Figure 31.16. Case 3: Partition 1 and 2B Merge

Figure 31.17. Case 4: Partition 1, Partition 2A, and Partition 2B Merge Together