['*' means complete]

Connection Management:
*v1: foolscap, no relay, live=connected-to-introducer, broadcast updates, fully connected topology
 v2: live != connected-to-introducer, connect on demand
 v3: relay?

File Encoding:
*v1: single-segment, no merkle trees
*v2: multiple-segment (LFE)
*v3: merkle tree to verify each share
*v4: merkle tree to verify each segment
 v5: only retrieve the minimal number of hashes instead of all of them

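As an illustration of the v4/v5 items above, a minimal sketch of a per-segment
merkle tree: hash each segment, pair hashes up to a root, and verify any one
segment using only its authentication path (the v5 "minimal number of hashes").
SHA-256 and all names here are stand-ins, not the real hashtree code.

  # Hypothetical sketch only; layout and names are illustrative.
  from hashlib import sha256

  def _pair(left, right):
      return sha256(left + right).digest()

  def build_tree(segments):
      """Return tree levels, leaf hashes first, single-root level last."""
      level = [sha256(seg).digest() for seg in segments]
      levels = [level]
      while len(level) > 1:
          if len(level) % 2:
              level = level + [level[-1]]          # duplicate an odd leaf
          level = [_pair(level[i], level[i + 1])
                   for i in range(0, len(level), 2)]
          levels.append(level)
      return levels

  def auth_path(levels, index):
      """The minimal set of sibling hashes needed to check one segment."""
      path = []
      for level in levels[:-1]:
          sibling = index ^ 1
          if sibling >= len(level):
              sibling = index                      # odd leaf pairs with itself
          path.append(level[sibling])
          index //= 2
      return path

  def verify_segment(segment, index, path, root):
      h = sha256(segment).digest()
      for sibling in path:
          h = _pair(h, sibling) if index % 2 == 0 else _pair(sibling, h)
          index //= 2
      return h == root

  # e.g. verify segment 1 of three without fetching the other leaf hashes
  segs = [b"seg0", b"seg1", b"seg2"]
  levels = build_tree(segs)
  assert verify_segment(b"seg1", 1, auth_path(levels, 1), levels[-1][0])
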
Share Encoding:
*v1: fake it (replication)
*v2: PyRS
*v2.5: ICodec-based codecs, but still using replication
*v3: C-based Reed-Solomon

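To make the encoding milestones concrete, here is a sketch of the v2.5 step: a
codec behind a generic encode/decode interface that still just replicates. The
interface shown is hypothetical, not the actual ICodec definition; a real
Reed-Solomon codec (v3) would instead emit n distinct shares of roughly
len(data)/k bytes and reconstruct from any k of them.

  # Hypothetical codec interface; replication stands in for erasure coding.
  class ReplicatingCodec:
      def __init__(self, needed_shares, total_shares):
          self.k = needed_shares   # ignored by replication: any 1 share works
          self.n = total_shares

      def encode(self, data):
          """Return (shareid, share_data) pairs; every share is a full copy."""
          return [(shareid, data) for shareid in range(self.n)]

      def decode(self, shares):
          """Recover the original data from whichever shares survived."""
          if not shares:
              raise ValueError("no shares available")
          shareid, data = shares[0]
          return data

  codec = ReplicatingCodec(needed_shares=3, total_shares=10)
  shares = codec.encode(b"file contents")
  assert codec.decode(shares[7:8]) == b"file contents"
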
URI:
*v1: really big
 v2: derive more information from version and filesize, to remove codec_name,
    codec_params, tail_codec_params, needed_shares, total_shares, segment_size

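A sketch of the v2 shrinking idea: if the URI version implies default encoding
parameters, then needed_shares, total_shares, segment_size and the tail
parameters can be recomputed from the filesize rather than spelled out. The
defaults and field layout below are invented purely for illustration.

  # Illustrative only: assumed per-version defaults, not real values.
  DEFAULTS = {1: dict(needed_shares=3, total_shares=10,
                      max_segment_size=1024 * 1024)}

  def derive_params(version, filesize):
      d = DEFAULTS[version]
      seg = min(filesize, d["max_segment_size"]) or 1
      num_segments = max(1, -(-filesize // seg))   # ceiling division
      tail_size = filesize - (num_segments - 1) * seg
      return dict(needed_shares=d["needed_shares"],
                  total_shares=d["total_shares"],
                  segment_size=seg,
                  num_segments=num_segments,
                  tail_segment_size=tail_size)

  # a v2-style URI would then only need identifiers plus version and size,
  # e.g. something like  URI:2:<verifierid>:<key>:<filesize>  (made-up layout)
  print(derive_params(1, filesize=2500000))
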
Upload Peer Selection:
*v1: permuted peer list, consistent hash
*v2: permute peers by verifierid and arrange around ring, intermixed with
    shareids on the same range, each share goes to the
    next-clockwise-available peer
 v3: reliability/goodness-point counting?
 v4: denver airport (chord)?

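A sketch of the v2 scheme above: peers and shareids are hashed onto the same
ring (keyed by the file's verifierid, so every file gets a different
permutation), and each share walks clockwise from its own position until some
peer accepts it. Function and parameter names are illustrative, not the real
ones; accepts_share() stands in for asking an actual peer.

  from bisect import bisect_left
  from hashlib import sha256

  def ring_position(*parts):
      return sha256(b"".join(parts)).digest()

  def assign_shares(verifierid, peerids, num_shares, accepts_share):
      """Map each shareid to the next-clockwise peer willing to hold it."""
      ring = sorted((ring_position(verifierid, p), p) for p in peerids)
      assignments = {}
      for shareid in range(num_shares):
          start = ring_position(verifierid, b"share-%d" % shareid)
          idx = bisect_left(ring, (start,))
          for step in range(len(ring)):
              _, peer = ring[(idx + step) % len(ring)]
              if accepts_share(peer, shareid):
                  assignments[shareid] = peer
                  break
          else:
              raise RuntimeError("no peer accepted share %d" % shareid)
      return assignments

  peers = [b"peerA", b"peerB", b"peerC", b"peerD"]
  print(assign_shares(b"verifierid-1", peers, 10,
                      accepts_share=lambda peer, shareid: True))
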
Download Peer Selection:
*v1: ask all peers
 v2: permute peers and shareids as in upload, ask next-clockwise peers first
    (the "A" list), if necessary ask the ones after them, etc.
 v3: denver airport?

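Continuing the sketch for the download side: recompute the same permutation,
take the first few peers clockwise of each share's position as the "A" list,
and fall back to the later peers only if those fail. Again purely illustrative.

  # Same hypothetical ring as the upload sketch.
  from hashlib import sha256

  def ring_position(*parts):
      return sha256(b"".join(parts)).digest()

  def download_order(verifierid, peerids, shareid, a_list_size=3):
      """Peers to ask for one share, nearest-clockwise first."""
      ring = sorted((ring_position(verifierid, p), p) for p in peerids)
      start = ring_position(verifierid, b"share-%d" % shareid)
      clockwise = [peer for point, peer in ring if point >= start]
      wrapped = [peer for point, peer in ring if point < start]
      ordered = clockwise + wrapped
      return ordered[:a_list_size], ordered[a_list_size:]

  peers = [b"peerA", b"peerB", b"peerC", b"peerD"]
  a_list, later = download_order(b"verifierid-1", peers, shareid=0)
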
Filetable Maintenance:
*v1: vdrive-based tree of MutableDirectoryNodes, persisted to vdrive's disk,
    no accounts
 v2: move tree to client side, serialize to a file, upload,
    vdrive.set_filetable_uri (still no accounts, just one global tree)
 v3: break world up into accounts, separate mutable spaces. Maybe
    implement SSKs
 v4: filetree

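A toy version of the v1 filetable: a tree of mutable directory nodes held on
the vdrive side and pickled to its disk. Class and method names are made up
for illustration; the real MutableDirectoryNode carries more than this.

  # Illustrative only: a pickled tree of directory nodes, one global root.
  import pickle

  class MutableDirectoryNode:
      def __init__(self):
          self.children = {}            # name -> file URI (str) or subnode

      def add_file(self, name, uri):
          self.children[name] = uri

      def mkdir(self, name):
          node = MutableDirectoryNode()
          self.children[name] = node
          return node

  def save_filetable(root, path):
      with open(path, "wb") as f:
          pickle.dump(root, f)

  def load_filetable(path):
      with open(path, "rb") as f:
          return pickle.load(f)

  root = MutableDirectoryNode()
  root.mkdir("photos").add_file("cat.jpg", "URI:...")
  save_filetable(root, "vdrive-filetable.pickle")
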
Checker/Repairer:
*v1: none
 v2: centralized checker, repair agent
 v3: nodes also check their own files

Storage:
*v1: no deletion, one directory per verifierid, no owners of shares,
    leases never expire
*v2: multiple shares per verifierid [zooko]
 v3: deletion
 v4: leases expire, delete expired data on demand, multiple owners per share

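A sketch of the storage layout above: one directory per verifierid holding
share files, plus a lease record per share; v4 then adds owners and expiry so
expired leases can be dropped on demand. Paths, durations and field names are
invented for illustration.

  # Illustrative layout: storage/<verifierid-hex>/<sharenum>, leases in JSON.
  import json, os, time

  STORAGE_ROOT = "storage"
  LEASE_DURATION = 30 * 24 * 3600          # assumed 30-day lease

  def share_dir(verifierid):
      return os.path.join(STORAGE_ROOT, verifierid.hex())

  def store_share(verifierid, sharenum, data, owner):
      d = share_dir(verifierid)
      os.makedirs(d, exist_ok=True)
      with open(os.path.join(d, "%d.share" % sharenum), "wb") as f:
          f.write(data)
      lease = {"owner": owner, "expires": time.time() + LEASE_DURATION}
      with open(os.path.join(d, "%d.lease" % sharenum), "w") as f:
          json.dump(lease, f)

  def lease_expired(verifierid, sharenum):
      lease_path = os.path.join(share_dir(verifierid), "%d.lease" % sharenum)
      with open(lease_path) as f:
          return json.load(f)["expires"] < time.time()
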
UI:
*v1: readonly webish (nevow, URLs are filepaths)
*v2: read/write webish, mkdir, del (files)
 v2.5: del (directories)
 v3: PB+CLI tool.
 v3.5: XUIL?
 v4: FUSE

Operations/Deployment/Doc/Free Software/Community:
 - move this file into the wiki ?

back pocket ideas:
 when nodes are unable to reach storage servers, make a note of it and inform
 the verifier/checker eventually. the verifier/checker then puts the server
 under observation, or otherwise looks for differences between its
 self-reported availability and the experiences of others

 store filetable URI in the first 10 peers that appear after your own nodeid
 each entry has a sequence number, maybe a timestamp
 on recovery, find the newest

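 a sketch of that recovery idea: treat nodeids as points on a ring, pick the
 10 peers clockwise of your own nodeid as the places the filetable URI was
 stored, then keep whichever returned entry has the highest sequence number.
 helper names are hypothetical; fetch_entry() stands in for asking a real peer.

  def peers_after(my_nodeid, all_nodeids, count=10):
      ring = sorted(all_nodeids)
      later = [n for n in ring if n > my_nodeid]
      earlier = [n for n in ring if n < my_nodeid]
      return (later + earlier)[:count]

  def recover_filetable_uri(my_nodeid, all_nodeids, fetch_entry):
      newest = None
      for peer in peers_after(my_nodeid, all_nodeids):
          entry = fetch_entry(peer)     # e.g. {"seqnum": 7, "uri": "URI:..."}
          if entry and (newest is None or entry["seqnum"] > newest["seqnum"]):
              newest = entry
      return newest["uri"] if newest else None
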
 multiple categories of leases:
 1: committed leases -- we will not delete these in any case, but will instead
    tell an uploader that we are full
   1a: active leases
   1b: in-progress leases (partially filled, not closed, pb connection is
       currently open)
 2: uncommitted leases -- we will delete these in order to make room for new
    lease requests
   2a: interrupted leases (partially filled, not closed, pb connection is
       currently not open, but they might come back)
   2b: expired leases

 (I'm not sure about the precedence of these last two. Probably deleting
 expired leases instead of deleting interrupted leases would be okay.)

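 in code form, that eviction precedence might look like the following, with
 expired leases reclaimed before interrupted ones and committed leases never
 touched; the lease fields are invented for illustration.

  # Illustrative only; a lease here is a dict with "state" and "size" keys.
  EVICTION_ORDER = ["expired", "interrupted"]     # 2b before 2a; never 1a/1b

  def reclaim_space(leases, bytes_needed):
      """Pick uncommitted leases to delete, or report that we are full."""
      to_delete, freed = [], 0
      for state in EVICTION_ORDER:
          for lease in leases:
              if lease["state"] == state and freed < bytes_needed:
                  to_delete.append(lease)
                  freed += lease["size"]
      if freed < bytes_needed:
          raise IOError("storage full")           # tell the uploader we are full
      return to_delete
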
big questions:
 convergence?
 peer list maintenance: lots of entries