tahoe-lafs/roadmap.txt
Zooko O'Whielacronx 79b5e50c26 add Operations/Deployment for first open source release
As per discussion I had with Peter this morning, we should do these three things and then release the first open source version.
2007-04-03 10:32:47 -07:00

['*' means complete]
Connection Management:
*v1: foolscap, no relay, live == connected-to-queen, broadcast updates, full mesh
 v2: live != connected-to-queen, connect on demand
 v3: relay?
File Encoding:
*v1: single-segment, no merkle trees
*v2: multiple-segment (LFE)
 v3: merkle tree to verify each share
 v4: merkle tree to verify each segment
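
The v3/v4 items above could work roughly like this: hash each segment, build a merkle tree over the hashes, and let a downloader check any one segment against the root plus a short sibling path. A minimal sketch, assuming SHA-256 and simple pairwise hashing; the actual share format is not specified here.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Combine leaf hashes pairwise up to a single root hash."""
    layer = list(leaves)
    while len(layer) > 1:
        if len(layer) % 2:            # duplicate the last hash on odd layers
            layer.append(layer[-1])
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

segments = [b"seg0", b"seg1", b"seg2", b"seg3"]
root = merkle_root([h(s) for s in segments])

# Verify segment 2 alone, using only its sibling path and the root:
path = [h(b"seg3"), h(h(b"seg0") + h(b"seg1"))]
node = h(b"seg2")
node = h(node + path[0])   # combine with right sibling
node = h(path[1] + node)   # combine with left sibling
assert node == root
```

The point of the per-segment tree (v4) over the per-share tree (v3) is that a corrupt segment is detected as soon as it arrives, instead of after downloading the whole share.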
Share Encoding:
*v1: fake it (replication)
*v2: PyRS
*v2.5: ICodec-based codecs, but still using replication
*v3: C-based Reed-Solomon
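
The v2.5 step above amounts to putting replication behind the same encode/decode interface a real erasure codec would use, so the codec can be swapped later without touching callers. A minimal sketch; the method names are assumptions, not the actual ICodec signatures.

```python
class ReplicatingCodec:
    """'Fake it' codec: every share is a full copy of the data."""
    def __init__(self, required, total):
        self.required = required  # shares needed to decode (1 for replication)
        self.total = total        # shares produced

    def encode(self, data: bytes):
        # replication: emit `total` identical copies
        return [data] * self.total

    def decode(self, shares):
        # any single surviving share suffices
        return shares[0]

codec = ReplicatingCodec(required=1, total=4)
shares = codec.encode(b"some segment")
assert codec.decode(shares[2:3]) == b"some segment"
```

A Reed-Solomon codec would produce `total` distinct shares of which any `required` suffice, at a fraction of the storage cost, but would present the same interface.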
Peer Selection:
*v1: permuted peer list, consistent hash
 v2: reliability/goodness-point counting?
 v3: denver airport (chord)?
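
The v1 scheme above can be sketched as: permute the peer list per file by hashing each peerid together with the file's id, then contact peers in that order. The hash choice and key layout here are assumptions for illustration.

```python
import hashlib

def permuted_peers(verifierid: bytes, peerids):
    """Return peerids in a per-file pseudorandom but deterministic order."""
    return sorted(peerids,
                  key=lambda p: hashlib.sha256(verifierid + p).digest())

peers = [b"peerA", b"peerB", b"peerC", b"peerD"]

# Uploader and downloader compute the same ordering independently,
# so they agree on where shares live without asking the queen.
assert permuted_peers(b"file-1", peers) == permuted_peers(b"file-1", peers)
```

Because each file gets its own permutation, load spreads across peers, and adding or removing a peer only perturbs the ordering locally (the consistent-hash property).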
Filetable Maintenance:
*v1: queen-based tree of MutableDirectoryNodes, persisted to queen's disk,
     no accounts
 v2: move tree to client side, serialize to a file, upload,
     queen.set_filetable_uri (still no accounts, just one global tree)
 v3: break world up into accounts, separate mutable spaces. Maybe
     implement SSKs
 v4: filetree
Checker/Repairer:
*v1: none
 v2: centralized checker, repair agent
 v3: nodes also check their own files
Storage:
*v1: no deletion, one directory per verifierid, one owner per share,
     leases never expire
*v2: multiple shares per verifierid [zooko]
 v2.5: deletion
 v3: leases expire, delete expired data on demand, multiple owners per share
UI:
*v1: readonly webish (nevow, URLs are filepaths)
*v2: read/write webish, mkdir, del (files)
 v2.5: del (directories)
 v3: PB+CLI tool
 v4: FUSE
Operations/Deployment:
 v1: doc how to set up a network (introducer, vdriver server, nodes)
 v2: Windows port and testing
 v3: Trac instance
back pocket ideas:
 when nodes are unable to reach storage servers, make a note of it, inform
 the queen eventually. The queen then puts the server under observation or
 otherwise looks for differences between its self-reported availability and
 the experiences of others.
 store the filetable URI in the first 10 peers that appear after your own
 nodeid. Each entry has a sequence number, maybe a timestamp; on recovery,
 find the newest.
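
That recovery idea can be sketched as: treat the sorted peerid space as a ring, take the peers that follow your own nodeid (wrapping around), and on recovery pick the copy with the highest sequence number. Function names and the ring layout are assumptions.

```python
def successors(my_nodeid, peerids, count=10):
    """The first `count` peers after my_nodeid on the sorted id ring."""
    ring = sorted(peerids)
    after = [p for p in ring if p > my_nodeid]
    wrapped = after + [p for p in ring if p <= my_nodeid]
    return wrapped[:count]

def newest(copies):
    """copies: list of (sequence_number, filetable_uri); pick the newest."""
    return max(copies, key=lambda c: c[0])[1]

assert successors(b"m", [b"a", b"n", b"z"], count=2) == [b"n", b"z"]
assert newest([(3, "uri-old"), (7, "uri-new"), (5, "uri-mid")]) == "uri-new"
```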
 multiple categories of leases:
  committed leases -- we will not delete these in any case, but will instead tell an uploader that we are full
   active leases
   in-progress leases (partially filled, not closed, PB connection is currently open)
  uncommitted leases -- we will delete these in order to make room for new lease requests
   interrupted leases (partially filled, not closed, PB connection is currently not open, but they might come back)
   expired leases
 (I'm not sure about the precedence of these last two. Probably deleting expired leases instead of deleting interrupted leases would be okay.)
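
The lease categories above reduce to a deletion-precedence order: committed leases are never reclaimed, and among uncommitted leases the expired ones go before the interrupted ones (matching the hunch in the parenthetical). The numeric ranks here are an assumption for illustration only.

```python
# None = committed, never delete; lower number = reclaim sooner.
DELETION_ORDER = {
    "expired": 0,      # uncommitted: delete first
    "interrupted": 1,  # uncommitted: delete only if still short on space
    "active": None,    # committed: never delete
    "in-progress": None,
}

def deletable(leases):
    """Return (category, lease) pairs in the order they may be reclaimed."""
    candidates = [l for l in leases if DELETION_ORDER[l[0]] is not None]
    return sorted(candidates, key=lambda l: DELETION_ORDER[l[0]])

leases = [("active", "A"), ("expired", "B"),
          ("interrupted", "C"), ("in-progress", "D")]
assert deletable(leases) == [("expired", "B"), ("interrupted", "C")]
```

A server that is still full after reclaiming every uncommitted lease would then refuse the upload ("tell an uploader that we are full") rather than touch committed leases.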
big questions:
 convergence?
 peer list maintenance: lots of entries