tahoe-lafs/roadmap.txt

['*' means complete]

Connection Management: Brian
 *v1: foolscap, no relay, live == connected-to-queen, broadcast updates, full mesh (sketch below)
  v2: live != connected-to-queen, connect on demand
  v3: relay?
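
a rough sketch of the v1 scheme, in python: nodes check in with the queen,
the queen rebroadcasts the complete membership on every change, and each
node then talks to every other node directly (full mesh). all names here
are made up for illustration; the real code speaks foolscap.

  class Node:
      def __init__(self, nodeid):
          self.nodeid = nodeid
          self.peers = {}

      def update_peers(self, peers):
          # v1 has no connect-on-demand: remember (and connect to) everyone
          self.peers = {nid: n for nid, n in peers.items()
                        if nid != self.nodeid}

  class Queen:
      def __init__(self):
          self.members = {}   # nodeid -> Node (really a foolscap reference)

      def hello(self, node):
          # "live" in v1 just means "currently connected to the queen"
          self.members[node.nodeid] = node
          self.broadcast()

      def goodbye(self, nodeid):
          self.members.pop(nodeid, None)
          self.broadcast()

      def broadcast(self):
          # no deltas in v1: push the whole membership to everyone
          for node in self.members.values():
              node.update_peers(dict(self.members))

  queen = Queen()
  queen.hello(Node("n1"))
  queen.hello(Node("n2"))   # both nodes now know about each other
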
File Encoding: Brian
 *v1: single-segment, no merkle trees
 *v2: multiple-segment (LFE)
  v3: merkle tree to verify each share (sketch below)
  v4: merkle tree to verify each segment
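
for v3/v4 the verification structure is a standard merkle tree; a minimal
sketch with plain hashlib, nothing share-format-specific:

  import hashlib

  def _h(data):
      return hashlib.sha256(data).digest()

  def merkle_root(leaves):
      # hash the leaves, then pair-and-hash upward until one node remains
      level = [_h(b"leaf:" + leaf) for leaf in leaves]
      while len(level) > 1:
          if len(level) % 2:
              level.append(level[-1])   # duplicate the odd node out
          level = [_h(level[i] + level[i + 1])
                   for i in range(0, len(level), 2)]
      return level[0]

  shares = [b"share0", b"share1", b"share2", b"share3"]
  root = merkle_root(shares)   # v3: one leaf per share

v3 puts one leaf per share; v4 puts one leaf per segment, so a downloader
can verify each segment as it arrives instead of trusting the whole file.
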
Share Encoding: Zooko
 *v1: fake it (replication)
 *v2: PyRS
 *v2.5: ICodec-based codecs, but still using replication (sketch below)
 *v3: C-based Reed-Solomon
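
the replication "codec" in a nutshell: every share is a complete copy, so
any single share decodes (k=1) and expansion is a full n-fold. a real
Reed-Solomon codec (v3) keeps the same encode/decode shape but gives
k-of-n with only n/k expansion. method names here are illustrative, not
the actual ICodec signatures:

  class ReplicatingCodec:
      def __init__(self, n):
          self.n = n   # total number of shares to produce

      def encode(self, data):
          # n identical (sharenum, share) pairs
          return [(i, data) for i in range(self.n)]

      def decode(self, shares):
          # any one surviving share is the whole input
          sharenum, data = shares[0]
          return data

  codec = ReplicatingCodec(n=10)
  shares = codec.encode(b"some file data")
  assert codec.decode(shares[3:4]) == b"some file data"
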
Peer selection:
 *v1: permuted peer list, consistent hash (sketch below)
  v2: reliability/goodness-point counting?
  v3: denver airport (chord)?
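
the v1 rule, sketched: rank every peer by hash(peerid + verifierid). each
file gets its own permutation, every node computes the same one, and adding
or removing a peer only reshuffles a small fraction of files (the
consistent-hashing property). sha-256 stands in for whatever hash we
actually use:

  import hashlib

  def permuted_peers(peerids, verifierid):
      return sorted(peerids,
                    key=lambda pid: hashlib.sha256(pid + verifierid).digest())

  peers = [b"peerA", b"peerB", b"peerC", b"peerD"]
  order = permuted_peers(peers, b"verifierid-of-some-file")
  # the uploader sends shares to the first few peers in 'order'; a
  # downloader recomputes the identical list and asks those peers first
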
filetable maintenance:
 *v1: queen-based tree of MutableDirectoryNodes, persisted to queen's disk,
      no accounts (sketch below)
  v2: move tree to client side, serialize to a file, upload,
      queen.set_filetable_uri (still no accounts, just one global tree)
  v3: break world up into accounts, separate mutable spaces. Maybe
      implement SSKs
  v4: filetree
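
a sketch of the v1 tree plus the v2 serialization direction; everything
here is invented except queen.set_filetable_uri, which is named in the
plan above:

  import json

  class MutableDirectoryNode:
      def __init__(self):
          self.children = {}   # name -> MutableDirectoryNode | file URI

      def mkdir(self, name):
          self.children[name] = MutableDirectoryNode()
          return self.children[name]

      def add_file(self, name, uri):
          self.children[name] = uri

      def serialize(self):
          # v2: build this client-side, upload it, then hand the queen
          # the upload's URI via queen.set_filetable_uri
          def walk(node):
              if isinstance(node, MutableDirectoryNode):
                  return {name: walk(child)
                          for name, child in node.children.items()}
              return node   # a leaf: the file's URI string
          return json.dumps(walk(self))

  root = MutableDirectoryNode()
  root.mkdir("docs").add_file("readme.txt", "URI:abc123")
  blob = root.serialize()
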
checker/repairer:
 *v1: none
  v2: centralized checker, repair agent (sketch below)
  v3: nodes also check their own files
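
what the v2 centralized checker loop might look like; locate_shares,
repair, and both thresholds are invented names:

  def check_all(verifierids, locate_shares, repair, needed=3, target=10):
      # locate_shares(vid) -> set of share numbers still findable
      # repair(vid) regenerates and re-places the missing shares
      for vid in verifierids:
          found = len(locate_shares(vid))
          if found < needed:
              print("%s: unrecoverable, only %d shares left" % (vid, found))
          elif found < target:
              repair(vid)   # still recoverable: fix it before it decays
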
storage: RobK
 *v1: no deletion, one directory per verifierid, one owner per share,
      leases never expire
  v2: multiple shares per verifierid [zooko]
  v2.5: deletion
  v3: leases expire, delete expired data on demand, multiple owners per
      share (sketch below)
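
a sketch of the lease bookkeeping this column builds toward: one share can
carry leases from several owners, every lease expires, and the share is
only reclaimable once the last lease has lapsed. names are invented:

  import time

  class ShareLeases:
      def __init__(self):
          self.leases = {}   # ownerid -> expiration time

      def add_or_renew(self, ownerid, duration=31 * 24 * 3600):
          self.leases[ownerid] = time.time() + duration

      def prune(self, now=None):
          # v3: delete expired leases on demand, then report whether the
          # share itself can be reclaimed
          now = time.time() if now is None else now
          self.leases = {o: t for o, t in self.leases.items() if t > now}
          return not self.leases   # True => no owners left, share deletable
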
UI: (brian)
  webish? webfront? PB + CLI tool? FUSE?
 *v1: readonly webish (nevow, URLs are filepaths) (sketch below)
 *v2: read/write webish, mkdir, del (files)
  v2.5: del (directories)
  v3: PB+CLI tool
  v4: FUSE
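
the webish idea: URL path segments map straight onto filetable paths.
sketched with twisted.web and a plain dict standing in for the filetable
(the real v1 uses nevow, which layers on the same machinery):

  from twisted.internet import reactor
  from twisted.web import resource, server

  class DirPage(resource.Resource):
      def __init__(self, node):
          resource.Resource.__init__(self)
          self.node = node   # dict: name -> subdir dict | file bytes

      def getChild(self, name, request):
          child = self.node.get(name.decode("utf-8"))
          if isinstance(child, dict):
              return DirPage(child)   # descend one directory level
          return FilePage(child)

  class FilePage(resource.Resource):
      isLeaf = True

      def __init__(self, data):
          resource.Resource.__init__(self)
          self.data = data

      def render_GET(self, request):
          return self.data   # the file's contents

  # root = {"docs": {"readme.txt": b"hello"}}
  # reactor.listenTCP(8080, server.Site(DirPage(root)))
  # reactor.run()   # then GET /docs/readme.txt returns b"hello"
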
back pocket ideas:

 when nodes are unable to reach storage servers, make a note of it, inform
 the queen eventually. the queen then puts the server under observation or
 otherwise looks for differences between its self-reported availability
 and the experiences of others

 store the filetable URI in the first 10 peers that appear after your own
 nodeid (sketch below)
  each entry has a sequence number, maybe a timestamp
  on recovery, find the newest
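
sketched: pick the 10 peers whose ids follow ours in sorted ring order
(wrapping around), and on recovery keep the copy with the highest
sequence number:

  def successor_peers(all_peerids, my_nodeid, count=10):
      ring = sorted(all_peerids)
      start = ring.index(my_nodeid) + 1   # the peers "after" us
      return [ring[(start + k) % len(ring)] for k in range(count)]

  def recover_filetable(copies):
      # copies: [(seqnum, uri), ...] fetched back from those peers
      seqnum, uri = max(copies)   # newest sequence number wins
      return uri

  peers = ["n%02d" % i for i in range(20)]
  holders = successor_peers(peers, "n07")   # n08..n17 hold our entry
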
 multiple categories of leases (sketch below):
  1: committed leases -- we will not delete these in any case, but will
     instead tell an uploader that we are full
   1a: active leases
   1b: in-progress leases (partially filled, not closed, pb connection is
       currently open)
  2: uncommitted leases -- we will delete these in order to make room for
     new lease requests
   2a: interrupted leases (partially filled, not closed, pb connection is
       currently not open, but they might come back)
   2b: expired leases
  (I'm not sure about the precedence of these last two. Probably deleting
  expired leases instead of deleting interrupted leases would be okay.)
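
the precedence above, as code: committed leases are never touched (tell
the uploader we are full instead); uncommitted ones are reclaimed
expired-first, per the note. the data layout is invented for illustration:

  DELETE_ORDER = ["expired", "interrupted"]   # uncommitted only
  NEVER_DELETE = ["active", "in-progress"]    # committed: never reclaimed

  def make_room(leases, bytes_needed):
      # leases: {category: [lease sizes in bytes]}
      freed = 0
      for category in DELETE_ORDER:
          while leases[category] and freed < bytes_needed:
              freed += leases[category].pop()
      return freed >= bytes_needed   # False => tell the uploader "full"

  leases = {"expired": [100, 200], "interrupted": [500],
            "active": [1000], "in-progress": [50]}
  assert make_room(leases, 250)   # frees only the expired leases
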
big questions:
 convergence?
 peer list maintenance: lots of entries