Update 'docs/architecture.txt' to reflect readonly share discovery
commit 1338318644
parent 98325b40ee
docs/architecture.txt
@@ -146,14 +146,20 @@ In the current version, the storage index is used to consistently-permute the
 set of all peer nodes (by sorting the peer nodes by
 HASH(storage_index+peerid)). Each file gets a different permutation, which
 (on average) will evenly distribute shares among the grid and avoid hotspots.
+We first remove any peer nodes that cannot hold an encoded share for our file,
+and then ask some of the peers that we have removed if they are already
+holding encoded shares for our file; we use this information later. This step
+helps conserve space, time, and bandwidth by making the upload process less
+likely to upload encoded shares that already exist.
 
-We use this permuted list of nodes to ask each node, in turn, if it will hold
-a share for us, by sending an 'allocate_buckets() query' to each one. Some
-will say yes, others (those who are full) will say no: when a node refuses
-our request, we just take that share to the next node on the list. We keep
-going until we run out of shares to place. At the end of the process, we'll
-have a table that maps each share number to a node, and then we can begin the
-encode+push phase, using the table to decide where each share should be sent.
+We then use this permuted list of nodes to ask each node, in turn, if it will
+hold a share for us, by sending an 'allocate_buckets() query' to each one.
+Some will say yes, others (those who have become full since the start of peer
+selection) will say no: when a node refuses our request, we just take that
+share to the next node on the list. We keep going until we run out of shares
+to place. At the end of the process, we'll have a table that maps each share
+number to a node, and then we can begin the encode+push phase, using the table
+to decide where each share should be sent.
 
 Most of the time, this will result in one share per node, which gives us
 maximum reliability (since it disperses the failures as widely as possible).
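Read as prose, the updated text describes the upload peer-selection step: permute the peer list by HASH(storage_index+peerid), drop peers that cannot accept a new share (such as read-only peers) while asking some of them which shares they already hold, then walk the permuted list sending allocate_buckets() queries until every share has a home. Below is a minimal Python sketch of that flow. The names Peer, is_readonly(), query_existing_shares(), and allocate_buckets() are hypothetical stand-ins used only for illustration, not the real Tahoe-LAFS interfaces, and the loop is simplified to place at most one share per peer.

import hashlib


def permute_peers(storage_index, peers):
    # Each file's storage index yields a different ordering, which (on
    # average) spreads shares evenly across the grid and avoids hotspots.
    return sorted(peers,
                  key=lambda p: hashlib.sha256(storage_index + p.peerid).digest())


def select_peers(storage_index, peers, total_shares):
    permuted = permute_peers(storage_index, peers)

    # Peers that cannot hold a new encoded share (e.g. read-only peers) are
    # removed from the candidate list, but some of them are asked whether
    # they already hold shares for this file, so those shares need not be
    # uploaded again.
    writable = [p for p in permuted if not p.is_readonly()]
    placements = {}                      # share number -> peer
    for p in permuted:
        if p.is_readonly():
            for sharenum in p.query_existing_shares(storage_index):
                placements.setdefault(sharenum, p)

    # Walk the permuted list, asking each writable peer in turn to hold the
    # next unplaced share.  A refusal (the peer has filled up since peer
    # selection started) just pushes that share on to the next peer.
    remaining = [s for s in range(total_shares) if s not in placements]
    candidates = list(writable)
    while remaining and candidates:
        peer = candidates.pop(0)
        sharenum = remaining[0]
        if peer.allocate_buckets(storage_index, [sharenum]):
            placements[sharenum] = peer
            remaining.pop(0)

    # The resulting table drives the encode+push phase: it says where each
    # share should be sent.
    return placements

When there are more shares than peers, the real selector can ask a single peer to hold several shares; the sketch above omits that refinement and stops once the writable candidates are exhausted.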