['*' means complete]

Connection Management:
*v1: foolscap, no relay, live=connected-to-queen, broadcast updates, full mesh
 v2: live != connected-to-queen, connect on demand
 v3: relay?

File Encoding:
*v1: single-segment, no merkle trees
*v2: multiple-segment (LFE)
 v3: merkle tree to verify each share
 v4: merkle tree to verify each segment
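
 a minimal sketch of the v3/v4 idea (illustrative names, not the real code):
 hash each share or segment, fold the hashes into a merkle tree, and publish
 the root so a downloader can verify each piece on its own:

    from hashlib import sha256

    # illustrative sketch, not the real tahoe code
    def merkle_root(leaves):
        # reduce a non-empty list of leaf hashes to a single root hash
        layer = list(leaves)
        while len(layer) > 1:
            if len(layer) % 2:
                layer.append(layer[-1])      # duplicate the odd node out
            layer = [sha256(layer[i] + layer[i + 1]).digest()
                     for i in range(0, len(layer), 2)]
        return layer[0]

    def segment_hashes(segments):
        return [sha256(seg).digest() for seg in segments]

 the uploader stores merkle_root(segment_hashes(segments)); the downloader
 recomputes each leaf as segments arrive and checks it against the root via
 the sibling-hash chain, so one bad segment can't poison the whole file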

Share Encoding:
*v1: fake it (replication)
*v2: PyRS
*v2.5: ICodec-based codecs, but still using replication
*v3: C-based Reed-Solomon
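
 roughly what the v2.5 step buys us (method names are guesses at the codec
 interface, not the real ICodec): callers code against encode/decode, so v3
 can swap in Reed-Solomon without touching them:

    class ReplicatingCodec:
        # method names are guesses at the ICodec shape
        def __init__(self, needed_shares, total_shares):
            self.k = needed_shares    # shares required to reconstruct
            self.n = total_shares     # shares produced
        def encode(self, data):
            # "fake it": every share is a complete copy of the data
            return [(sharenum, data) for sharenum in range(self.n)]
        def decode(self, shares):
            # with full copies, any one share reconstructs the file
            sharenum, data = shares[0]
            return data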

URI:
*v1: really big
 v2: derive more information from version and filesize, to remove codec_name,
     codec_params, tail_codec_params, needed_shares, total_shares, segment_size
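
 a sketch of the v2 shrink (the defaults below are made up for illustration):
 if each format version pins one codec and its defaults, the URI only needs
 to carry the version and filesize, and everything else is recomputed:

    MAX_SEGMENT_SIZE = 1024 * 1024       # assumed per-version constant

    def derive_params(version, filesize):
        # assumes filesize > 0; each version implies one codec + defaults,
        # all numbers here are invented for illustration
        assert version == 1
        needed_shares, total_shares = 3, 10
        segment_size = min(filesize, MAX_SEGMENT_SIZE)
        tail_segment_size = filesize % segment_size or segment_size
        return dict(codec_name="reed-solomon",
                    needed_shares=needed_shares,
                    total_shares=total_shares,
                    segment_size=segment_size,
                    tail_segment_size=tail_segment_size)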

Upload Peer Selection:
*v1: permuted peer list, consistent hash
*v2: permute peers by verifierid and arrange around ring, intermixed with
     shareids on the same range, each share goes to the
     next-clockwise-available peer
 v3: reliability/goodness-point counting?
 v4: denver airport (chord)?
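
 a sketch of the v2 placement (illustrative, not the real code): peers and
 shares are hashed onto the same ring, permuted per-file by verifierid, and
 each share walks clockwise to the first peer that will take it:

    from hashlib import sha256

    # illustrative sketch of permuted-ring placement
    def ring_position(verifierid, ident):
        return int.from_bytes(sha256(verifierid + ident).digest()[:8], "big")

    def place_shares(verifierid, peerids, num_shares, accepts):
        placements = {}
        for sharenum in range(num_shares):
            start = ring_position(verifierid, b"share-%d" % sharenum)
            # order peers clockwise starting from the share's ring position
            ordered = sorted(peerids,
                             key=lambda p: (ring_position(verifierid, p)
                                            - start) % 2**64)
            for peer in ordered:
                if accepts(peer, sharenum):   # has room, is reachable, etc.
                    placements[sharenum] = peer
                    break
        return placements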

Download Peer Selection:
*v1: ask all peers
 v2: permute peers and shareids as in upload, ask next-clockwise peers first
     (the "A" list), if necessary ask the ones after them, etc.
 v3: denver airport?
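
 the v2 download side is the same permutation replayed (sketch, invented
 names): since upload and download sort the ring identically, the first
 batch asked is exactly where the shares were most likely placed:

    from hashlib import sha256

    def ring_position(verifierid, ident):    # same hash as the upload side
        return int.from_bytes(sha256(verifierid + ident).digest()[:8], "big")

    def download_batches(verifierid, peerids, batch=10):
        # sketch only; batch size is an arbitrary choice
        ring = sorted(peerids, key=lambda p: ring_position(verifierid, p))
        for i in range(0, len(ring), batch):
            yield ring[i:i + batch]          # first batch = the "A" list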

Filetable Maintenance:
*v1: queen-based tree of MutableDirectoryNodes, persisted to queen's disk
     no accounts
 v2: move tree to client side, serialize to a file, upload,
     queen.set_filetable_uri (still no accounts, just one global tree)
 v3: break world up into accounts, separate mutable spaces. Maybe
     implement SSKs
 v4: filetree
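
 a sketch of the v1 data structure (MutableDirectoryNode is named above; the
 methods here are guesses): a directory maps child names to either
 subdirectories or file URIs, and the whole tree is pickled to disk:

    import pickle

    class MutableDirectoryNode:
        # methods below are guesses, not the real interface
        def __init__(self):
            self.children = {}   # name -> MutableDirectoryNode or URI string
        def mkdir(self, name):
            self.children[name] = MutableDirectoryNode()
            return self.children[name]
        def set_file(self, name, uri):
            self.children[name] = uri

    def persist(root, path="filetable.pickle"):
        with open(path, "wb") as f:
            pickle.dump(root, f)

 v2 then moves this client-side: serialize the tree, upload it like any
 other file, and hand the resulting URI to queen.set_filetable_uri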

Checker/Repairer:
*v1: none
 v2: centralized checker, repair agent
 v3: nodes also check their own files

Storage:
*v1: no deletion, one directory per verifierid, one owner per share,
     leases never expire
*v2: multiple shares per verifierid [zooko]
 v3: deletion
 v4: leases expire, delete expired data on demand, multiple owners per share
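
 a sketch of the v4 rule (field names invented): "delete on demand" means no
 background scan; a share is only reclaimed when space is needed and every
 owner's lease on it has lapsed:

    import time

    def expired(lease, now=None):
        # lease is a dict with an invented "expiration_time" field
        now = time.time() if now is None else now
        return lease["expiration_time"] <= now

    def reapable(share_leases):
        # multiple owners per share: all leases must have expired
        return all(expired(lease) for lease in share_leases)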

UI:
*v1: readonly webish (nevow, URLs are filepaths)
*v2: read/write webish, mkdir, del (files)
 v2.5: del (directories)
 v3: PB+CLI tool
 v3.5: XUIL?
 v4: FUSE

Operations/Deployment/Doc/Free Software/Community:
* Windows port and testing (iputil)
* Trac instance
* extirpate all references to "queen" and "metatracker"
* set up public trac, buildbot reports, mailing list, download page
* write announcement

back pocket ideas:

 when nodes are unable to reach storage servers, make a note of it and
 inform the queen eventually. the queen then puts the server under
 observation, or otherwise looks for differences between its self-reported
 availability and the experiences of others
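
 (a back-of-the-envelope sketch of that check; every name here is invented:)

    def suspicious(self_reported_uptime, reach_reports, threshold=0.2):
        # reach_reports: (peerid, could_connect) pairs relayed by clients;
        # threshold is an arbitrary illustrative value
        if not reach_reports:
            return False
        observed = (sum(1 for _, ok in reach_reports if ok)
                    / len(reach_reports))
        return (self_reported_uptime - observed) > threshold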

 store filetable URI in the first 10 peers that appear after your own nodeid.
 each entry has a sequence number, maybe a timestamp. on recovery, find the
 newest
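
 (sketch of the recovery rule, names invented; "newest" = highest sequence
 number, timestamp as tiebreaker:)

    def recover_filetable_uri(entries):
        # entries: (seqnum, timestamp, uri) tuples fetched from the peers
        # clockwise-adjacent to our own nodeid
        if not entries:
            return None
        seqnum, timestamp, uri = max(entries)
        return uri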

 multiple categories of leases:
  1: committed leases -- we will not delete these in any case, but will
     instead tell an uploader that we are full
   1a: active leases
   1b: in-progress leases (partially filled, not closed, pb connection is
       currently open)
  2: uncommitted leases -- we will delete these in order to make room for
     new lease requests
   2a: interrupted leases (partially filled, not closed, pb connection is
       currently not open, but they might come back)
   2b: expired leases

 (I'm not sure about the precedence of these last two. Probably deleting
 expired leases before deleting interrupted leases would be okay.)
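
 (the eviction order that implies, as a sketch; names are invented and the
 2a/2b precedence is still the open question noted above:)

    EVICTION_ORDER = ["expired",        # 2b: reclaim these first
                      "interrupted"]    # 2a: only if still short on space

    def make_room(leases_by_category, bytes_needed, delete):
        # category 1 (committed) never appears in EVICTION_ORDER
        freed = 0
        for category in EVICTION_ORDER:
            for lease in leases_by_category.get(category, []):
                if freed >= bytes_needed:
                    return freed
                freed += delete(lease)  # delete() returns bytes reclaimed
        return freed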

big questions:
 convergence?
 peer list maintenance: lots of entries