tahoe-lafs/roadmap.txt

['*' means complete]
Connection Management: Brian
 *v1: foolscap, no relay, live == connected-to-queen, broadcast updates, full mesh
 v2: live != connected-to-queen, connect on demand
 v3: relay?
Encoding: Zooko
 *v1: fake it (replication), no merkle trees
 v2: mnet codec, Reed-Solomon?
 v3: merkle tree to verify each share
 v4: merkle tree to verify each segment
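The v3/v4 merkle-tree items can be sketched as a toy in Python. Everything here (the names build_tree, proof_for, verify_share, and the use of SHA-256) is illustrative, not Tahoe code: hash each share into a leaf, combine pairs up to a single root, and then a downloader can check any one share against the root using only that share's sibling path.

```python
# Illustrative merkle-tree sketch; names and hash choice are assumptions,
# not the project's actual codec.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(shares):
    """Return a list of levels, leaves first, root level last."""
    level = [h(s) for s in shares]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]  # duplicate the odd tail
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def proof_for(levels, index):
    """Collect the sibling hashes needed to recompute the root for one leaf."""
    path = []
    for level in levels[:-1]:
        sib = index ^ 1
        if sib >= len(level):
            sib = index  # odd tail was duplicated, sibling is itself
        path.append(level[sib])
        index //= 2
    return path

def verify_share(share, index, path, root):
    """Rehash up the tree; a corrupted share will not reach the same root."""
    node = h(share)
    for sib in path:
        node = h(node + sib) if index % 2 == 0 else h(sib + node)
        index //= 2
    return node == root
```

The root hash is small enough to travel with the file's URI, so each share (v3) or segment (v4) can be verified independently before it is used.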
Peer selection:
 *v1: permuted peer list, consistent hash
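A minimal sketch of "permuted peer list, consistent hash": rank every peer by hashing its peerid together with the file's verifierid, then walk the sorted list. The function names and the SHA-256 choice are assumptions for illustration, not the actual API.

```python
# Illustrative consistent-hash peer selection; names are assumptions.
import hashlib

def permute_peers(peerids, verifierid):
    """Per-file permutation: stable for a given file, different across files."""
    return sorted(peerids,
                  key=lambda pid: hashlib.sha256(pid + verifierid).digest())

def select_peers(peerids, verifierid, n):
    """The first n peers in the permuted order receive the shares."""
    return permute_peers(peerids, verifierid)[:n]
```

The "consistent" property is that adding or removing one peer only disturbs the assignments adjacent to it in the permuted order; most files keep their existing peers.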
filetable maintenance:
 *v1: queen-based tree of MutableDirectoryNodes, persisted to queen's disk,
      no accounts
 v2: move tree to client side, serialize to a file, upload,
     queen.set_filetable_uri (still no accounts, just one global tree)
 v3: break world up into accounts, separate mutable spaces. Maybe
     implement SSKs
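The v1 filetable could look something like the toy below: a tree of MutableDirectoryNode objects held by the queen and pickled to its disk. Everything here (the children dict, add_file, lookup, persist, and pickle as the persistence format) is an assumption for illustration, not the real implementation.

```python
# Hypothetical sketch of a queen-held filetable tree; all names and the
# pickle-based persistence are illustrative assumptions.
import pickle

class MutableDirectoryNode:
    def __init__(self):
        self.children = {}  # name -> MutableDirectoryNode, or a file URI (str)

    def add_file(self, path, uri):
        """Create intermediate directories as needed, then bind the URI."""
        node = self
        for name in path[:-1]:
            node = node.children.setdefault(name, MutableDirectoryNode())
        node.children[path[-1]] = uri

    def lookup(self, path):
        """Walk the tree; returns a node or a URI string."""
        node = self
        for name in path:
            node = node.children[name]
        return node

def persist(root, filename):
    """Serialize the whole tree to the queen's disk."""
    with open(filename, "wb") as f:
        pickle.dump(root, f)
```

The v2 step would move this same structure to the client, serialize it to a file, upload that file, and hand the resulting URI back via queen.set_filetable_uri.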
checker/repairer:
 *v1: none
 v2: centralized checker, repair agent
 v3: nodes also check their own files
storage: RobK
 *v1: no deletion, one directory per verifierid, one owner per share,
      leases never expire
 v2: multiple shares per verifierid [zooko]
 v2: leases expire, delete expired data on demand, multiple owners per share
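The v2 storage ideas could be sketched like this: track per-owner leases on each share, and reclaim a share only when every lease on it has expired, doing the sweep on demand when room is needed. The class and method names are invented here for illustration.

```python
# Illustrative lease-tracking sketch; ShareStore and its methods are
# assumptions, not the project's storage code.
import time

class ShareStore:
    def __init__(self):
        self.leases = {}  # shareid -> {ownerid: expiry_time}

    def add_lease(self, shareid, ownerid, duration, now=None):
        """Multiple owners may each hold a lease on the same share."""
        now = time.time() if now is None else now
        self.leases.setdefault(shareid, {})[ownerid] = now + duration

    def expire(self, now=None):
        """Drop expired leases; delete a share once no live lease remains.
        Run on demand, when space is needed for a new upload."""
        now = time.time() if now is None else now
        for shareid in list(self.leases):
            live = {o: t for o, t in self.leases[shareid].items() if t > now}
            if live:
                self.leases[shareid] = live
            else:
                del self.leases[shareid]  # reclaim the share's space
```

Expiring lazily ("on demand") avoids a background reaper: the sweep cost is only paid when the server actually needs the room.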
UI: (brian)
 webish? webfront? PB + CLI tool? FUSE?
 *v1: readonly webish (nevow, URLs are filepaths)
 v2: read/write webish
 v3: PB+CLI tool
 v4: FUSE
back pocket ideas:
 when nodes are unable to reach storage servers, make a note of it, inform
 queen eventually. queen then puts server under observation or otherwise
 looks for differences between their self-reported availability and the
 experiences of others
 store filetable URI in the first 10 peers that appear after your own nodeid
  each entry has a sequence number, maybe a timestamp
  on recovery, find the newest
 multiple categories of leases:
  committed leases -- we will not delete these in any case, but will instead tell an uploader that we are full
   active leases
   in-progress leases (partially filled, not closed, pb connection is currently open)
  uncommitted leases -- we will delete these in order to make room for new lease requests
   interrupted leases (partially filled, not closed, pb connection is currently not open, but they might come back)
   expired leases
  (I'm not sure about the precedence of these last two. Probably deleting expired leases instead of deleting interrupted leases would be okay.)
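The lease categories above imply a deletion precedence, which can be sketched as follows. This is a toy under stated assumptions: expired leases go before interrupted ones (per the parenthetical note), committed leases are never deleted, and if freeing all uncommitted leases still isn't enough, the server reports "full" instead. The names DELETION_PRECEDENCE and choose_victims are invented for illustration.

```python
# Illustrative precedence for reclaiming lease space; names are assumptions.
# Lower number = delete first; None = committed, never deleted.
DELETION_PRECEDENCE = {
    "expired": 0,         # uncommitted: safest to reclaim first
    "interrupted": 1,     # uncommitted: the uploader might still come back
    "in-progress": None,  # committed: pb connection currently open
    "active": None,       # committed
}

def choose_victims(leases, space_needed):
    """leases: list of (leaseid, category, size) tuples.
    Returns the leaseids to delete, or None when committed leases would
    have to go -- i.e. the server should tell the uploader it is full."""
    deletable = [(DELETION_PRECEDENCE[cat], size, lid)
                 for lid, cat, size in leases
                 if DELETION_PRECEDENCE[cat] is not None]
    deletable.sort()  # expired before interrupted
    victims, freed = [], 0
    for _, size, lid in deletable:
        if freed >= space_needed:
            break
        victims.append(lid)
        freed += size
    return victims if freed >= space_needed else None
```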
big questions:
 convergence?
 peer list maintenance: lots of entries