docs: updates to relnotes.txt, NEWS, architecture, historical_known_issues, install.html, etc.

Zooko O'Whielacronx 2010-02-01 10:18:09 -08:00
parent 3e4342ecb3
commit 57e3af1447
9 changed files with 227 additions and 222 deletions

NEWS

@@ -1,14 +1,14 @@
User visible changes in Tahoe-LAFS. -*- outline -*-
* Release 1.6.0 (2010-02-01)
** New Features
*** Immutable Directories
Tahoe-LAFS can now create and handle immutable directories (#XXX). These are
read just like normal directories, but are "deep-immutable", meaning that all
their children (and everything reachable from those children) must be immutable
objects (i.e. immutable/literal files, and other immutable directories).
These directories must be created in a single webapi call, which provides all
@@ -18,10 +18,10 @@ they cannot be changed after creation). They have URIs that start with
interface (aka the "WUI") with a "DIR-IMM" abbreviation (as opposed to "DIR"
for the usual read-write directories and "DIR-RO" for read-only directories).
Tahoe-LAFS releases before 1.6.0 cannot read the contents of an immutable
directory. 1.5.0 will tolerate their presence in a directory listing (and
display it as "unknown"). 1.4.1 and earlier cannot tolerate them: a DIR-IMM
child in any directory will prevent the listing of that directory.
Immutable directories are repairable, just like normal immutable files.
@@ -31,20 +31,20 @@ directories. See docs/frontends/webapi.txt for details.
*** "tahoe backup" now creates immutable directories, backupdb has dircache
The "tahoe backup" command has been enhanced to create immutable directories
(in previous releases, it created read-only mutable directories) (#XXX). This
is significantly faster, since it does not need to create an RSA keypair for
each new directory. Also "DIR-IMM" immutable directories are repairable, unlike
"DIR-RO" read-only mutable directories (at least in this release: a future
Tahoe-LAFS release should be able to repair DIR-RO).
In addition, the backupdb (used by "tahoe backup" to remember what it has
already copied) has been enhanced to store information about existing immutable
directories. This allows it to re-use directories that have moved but still
contain identical contents, or which have been deleted and later replaced. (the
1.5.0 "tahoe backup" command could only re-use directories that were in the
same place as they were in the immediately previous backup). With this change,
the backup process no longer needs to read the previous snapshot out of the
Tahoe-LAFS grid, reducing the network load considerably.
A "null backup" (in which nothing has changed since the previous backup) will
require only two Tahoe-side operations: one to add an Archives/$TIMESTAMP
@@ -59,7 +59,7 @@ had to be uploaded too): it will require time proportional to the number and
size of your directories. After this initial pass, all subsequent passes
should take a tiny fraction of the time.
As noted above, Tahoe-LAFS versions earlier than 1.5.0 cannot read immutable
directories.
The "tahoe backup" command has been improved to skip over unreadable objects
@@ -67,36 +67,58 @@ The "tahoe backup" command has been improved to skip over unreadable objects
command from reading their contents), instead of throwing an exception and
terminating the backup process. It also skips over symlinks, because these
cannot be represented faithfully in the Tahoe-side filesystem. A warning
message will be emitted each time something is skipped. (#729, #850, #641) XXX
*** "create-node" command added, "create-client" now implies --no-storage
The basic idea behind Tahoe-LAFS's client+server and client-only processes is
that you are creating a general-purpose Tahoe-LAFS "node" process, which has
several components that can be activated. Storage service is one of these
optional components, as is the Helper, FTP server, and SFTP server. Web gateway
functionality is nominally on this list, but it is always active: a future
release will make it optional. There are three special-purpose servers that
can't currently be run as a component in a node: the introducer, the
key-generator, and the stats-gatherer.
So now "tahoe create-node" will create a Tahoe node process, and after
So now "tahoe create-node" will create a Tahoe-LAFS node process, and after
creation you can edit its tahoe.cfg to enable or disable the desired
services. It is a more general-purpose replacement for "tahoe create-client".
The default configuration has storage service enabled. For convenience, the
"--no-storage" argument makes a tahoe.cfg file that disables storage service.
"--no-storage" argument makes a tahoe.cfg file that disables storage
service. (#XXX)
"tahoe create-client" has been changed to create a Tahoe node without a
"tahoe create-client" has been changed to create a Tahoe-LAFS node without a
storage service. It is equivalent to "tahoe create-node --no-storage". This
helps to reduce the confusion surrounding the use of a command with "client" in
its name to create a storage *server*. Use "tahoe create-client" to create a
purely client-side node. If you want to offer storage to the grid, use "tahoe
create-node" instead.
In the future, other services will be added to the node, and they will be
controlled through options in tahoe.cfg. The most important of these services
may get additional --enable-XYZ or --disable-XYZ arguments to "tahoe
create-node".
** Performance Improvements
Download of immutable files begins as soon as the downloader has located the K
necessary shares (#XXX). In both the previous and current releases, a
downloader will first issue queries to all storage servers on the grid to
locate shares before it begins downloading the shares. In previous releases of
Tahoe-LAFS, download would not begin until all storage servers on the grid had
replied to the query, at which point K shares would be chosen for download from
among the shares that were located. In this release, download begins as soon as
any K shares are located. This means that downloads start sooner, which is
particularly important if there is a server on the grid that is extremely slow
or even hung in such a way that it will never respond. In previous releases
such a server would have a negative impact on all downloads from that grid. In
this release, such a server will have no impact on downloads (as long as K
shares can be found on other, quicker servers). This also means that
downloads now use the "best-alacrity" servers that they talk to, as measured by
how quickly the servers reply to the initial query. This might cause downloads
to go faster, especially on grids with heterogeneous servers or geographical
dispersion.
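The "start as soon as any K shares are located" behavior described above can be
sketched as follows. This is a simplified model, not Tahoe-LAFS's actual
downloader; the server names and latencies are invented for illustration:

```python
import heapq

def first_k_responders(query_latencies, k):
    """Given a mapping of server -> query-response latency in seconds
    (None meaning the server never replies), return the k servers whose
    replies arrive first.

    A hung server (latency None) is simply never chosen, so it cannot
    stall the download as long as k other servers respond."""
    responders = [(lat, srv) for srv, lat in query_latencies.items()
                  if lat is not None]
    # heapq.nsmallest sorts by latency, so this picks the quickest k
    return [srv for _, srv in heapq.nsmallest(k, responders)]

# The hung server "s4" no longer blocks the start of the download:
latencies = {"s1": 0.02, "s2": 0.30, "s3": 0.05, "s4": None, "s5": 0.07}
print(first_k_responders(latencies, 3))  # → ['s1', 's3', 's5']
```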
** Minor Changes
The webapi acquired a new "t=mkdir-with-children" command, to create and
@@ -127,10 +149,9 @@ target filename, such as when you copy from a bare filecap. (#761)
halting traversal. (#874, #786)
Many small packaging improvements were made to facilitate the "tahoe-lafs"
package being included in Ubuntu. Several mac/win32 binary libraries were
removed, some figleaf code-coverage files were removed, a bundled copy of
darcsver-1.2.1 was removed, and additional licensing text was added.
Several DeprecationWarnings for python2.6 were silenced. (#859)


@@ -5,123 +5,104 @@
OVERVIEW
There are three layers: the key-value store, the filesystem, and the
application.
The lowest layer is the key-value store. The keys are "capabilities" -- short
ascii strings -- and the values are sequences of data bytes. This data is
encrypted and distributed across a number of nodes, such that it will survive
the loss of most of the nodes. There are no hard limits on the size of the
values, but there may be performance issues with extremely large values (just
due to the limitation of network bandwidth). In practice, values as small as a
few bytes and as large as tens of gigabytes are in common use.
The middle layer is the decentralized filesystem: a directed graph in which
the intermediate nodes are directories and the leaf nodes are files. The leaf
nodes contain only the data -- they contain no metadata other than the length
in bytes. The edges leading to leaf nodes have metadata attached to them
about the file they point to. Therefore, the same file may be associated with
different metadata if it is referred to through different edges.
The top layer consists of the applications using the filesystem.
Allmydata.com uses it for a backup service: the application periodically
copies files from the local disk onto the decentralized filesystem. We later
provide read-only access to those files, allowing users to recover them.
There are several other applications built on top of the Tahoe-LAFS filesystem
(see the RelatedProjects page of the wiki for a list).
THE KEY-VALUE STORE
The key-value store is implemented by a grid of Tahoe-LAFS storage servers --
user-space processes. Tahoe-LAFS storage clients communicate with the storage
servers over TCP.
Storage servers hold data in the form of "shares". Shares are encoded pieces
of files. There are a configurable number of shares for each file, 10 by
default. Normally, each share is stored on a separate server, but in some
cases a single server can hold multiple shares of a file.
Nodes learn about each other through an "introducer". Each node connects to a
central introducer at startup, and receives a list of all other nodes from
it. Each node then connects to all other nodes, creating a fully-connected
topology. In the current release, nodes behind NAT boxes will connect to all
nodes that they can open connections to, but they cannot open connections to
other nodes behind NAT boxes. Therefore, the more nodes behind NAT boxes, the
less the topology resembles the intended fully-connected topology.
Nodes learn about each other through an "introducer". Each server connects to
the introducer at startup and announces its presence. Each client connects to
the introducer at startup, and receives a list of all servers from it. Each
client then connects to every server, creating a "bi-clique" topology. In the
current release, nodes behind NAT boxes will connect to all nodes that they
can open connections to, but they cannot open connections to other nodes
behind NAT boxes. Therefore, the more nodes behind NAT boxes, the less the
topology resembles the intended bi-clique topology.
The introducer is a Single Point of Failure ("SPoF"), in that clients who
never connect to the introducer will be unable to connect to any storage
servers, but once a client has been introduced to everybody, it does not need
the introducer again until it is restarted. The danger of a SPoF is further
reduced in two ways. First, the introducer is defined by a hostname and a
private key, which are easy to move to a new host in case the original one
suffers an unrecoverable hardware problem. Second, even if the private key is
lost, clients can be reconfigured to use a new introducer.
For future releases, we have plans to decentralize introduction, allowing any
server to tell a new client about all the others.
FILE ENCODING
When a client stores a file on the grid, it first encrypts the file. It then
breaks the encrypted file into small segments, in order to reduce the memory
footprint, and to decrease the lag between initiating a download and receiving
the first part of the file; for example the lag between hitting "play" and a
movie actually starting.
The client then erasure-codes each segment, producing blocks of which only a
subset are needed to reconstruct the segment (3 out of 10, with the default
settings).
It sends one block from each segment to a given server. The set of blocks on a
given server constitutes a "share". Therefore a subset of the shares (3 out of
10, by default) are needed to reconstruct the file.
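A toy illustration of the k-of-N idea, here 2-of-3 using XOR parity. (Tahoe-LAFS
actually uses Reed-Solomon coding via the zfec library; this tiny scheme only
shows why any k blocks suffice to rebuild the segment.)

```python
def encode_2_of_3(segment: bytes) -> list:
    """Split a segment into two data blocks plus one XOR-parity block.
    Any 2 of the 3 blocks suffice to rebuild the segment."""
    if len(segment) % 2:
        segment += b"\x00"          # pad to even length (toy scheme only)
    half = len(segment) // 2
    a, b = segment[:half], segment[half:]
    parity = bytes(x ^ y for x, y in zip(a, b))
    return [a, b, parity]

def decode_2_of_3(blocks: dict, length: int) -> bytes:
    """Rebuild the segment from any two blocks, keyed 0 (a), 1 (b), 2 (parity)."""
    if 0 in blocks and 1 in blocks:
        a, b = blocks[0], blocks[1]
    elif 0 in blocks:               # recover b = a XOR parity
        a = blocks[0]
        b = bytes(x ^ y for x, y in zip(a, blocks[2]))
    else:                           # recover a = b XOR parity
        b = blocks[1]
        a = bytes(x ^ y for x, y in zip(b, blocks[2]))
    return (a + b)[:length]

seg = b"attack at dawn"
blocks = encode_2_of_3(seg)
# lose block 1; recover from block 0 and the parity block
print(decode_2_of_3({0: blocks[0], 2: blocks[2]}, len(seg)))  # → b'attack at dawn'
```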
A hash of the encryption key is used to form the "storage index", which is used
for both server selection (described below) and to index shares within the
Storage Servers on the selected nodes.
The client computes secure hashes of the ciphertext and of the shares. It uses
Merkle Trees so that it is possible to verify the correctness of a subset of
the data without requiring all of the data. For example, this allows you to
verify the correctness of the first segment of a movie file and then begin
playing the movie file in your movie viewer before the entire movie file has
been downloaded.
These hashes are stored in a small datastructure named the Capability
Extension Block which is stored on the storage servers alongside each share.
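The Merkle Tree mechanism can be sketched like this. It is a simplified model:
Tahoe-LAFS's real trees use tagged hashes and ship the needed sibling hashes
alongside each share, but the verify-one-segment-without-the-rest property is
the same:

```python
from hashlib import sha256

def merkle_root(leaves):
    """Compute the root of a binary Merkle tree over the given leaves."""
    level = [sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])        # duplicate last node on odd levels
        level = [sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

segments = [b"seg0", b"seg1", b"seg2", b"seg3"]
root = merkle_root(segments)

# To verify seg0 alone, a client needs only hash(seg1) and the hash of the
# right subtree -- not the other segments themselves:
h = lambda x: sha256(x).digest()
path_check = h(h(h(b"seg0") + h(b"seg1")) + h(h(b"seg2") + h(b"seg3")))
assert path_check == root
```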
The capability contains the encryption key, the hash of the Capability
Extension Block, and any encoding parameters necessary to perform the eventual
decoding process. For convenience, it also contains the size of the file
being stored.
All hashes use SHA-256, and a different tag is used for each purpose.
Netstrings are used where necessary to ensure these tags cannot be confused
with the data to be hashed. All encryption uses AES in CTR mode. The erasure
coding is performed with zfec.
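A sketch of the tagged-hash idea described above. The tag strings used here are
invented for illustration; Tahoe-LAFS's actual tags are internal to its share
format:

```python
from hashlib import sha256

def netstring(s: bytes) -> bytes:
    """Encode s as a netstring: b"<length>:<bytes>,"."""
    return b"%d:%s," % (len(s), s)

def tagged_hash(tag: bytes, data: bytes) -> bytes:
    """SHA-256 over the netstring-encoded tag followed by the data.
    The length prefix guarantees a tag byte can never be confused with
    a data byte, so different purposes yield independent hash values."""
    return sha256(netstring(tag) + data).digest()

# Same input, different (hypothetical) tags -> unrelated digests:
assert tagged_hash(b"storage-index", b"key") != tagged_hash(b"plaintext", b"key")
# The length prefix prevents tag/data boundary ambiguity:
assert tagged_hash(b"a", b"bc") != tagged_hash(b"ab", b"c")
```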
A Merkle Hash Tree is used to validate the encoded blocks before they are fed
into the decode process, and a transverse tree is used to validate the shares
as they are retrieved. A third merkle tree is constructed over the plaintext
segments, and a fourth is constructed over the ciphertext segments. All
necessary hashes are stored with the shares, and the hash tree roots are put
in the Capability Extension Block. The final hash of the extension block goes
into the capability itself.
Note that the number of shares created is fixed at the time the file is
uploaded: it is not possible to create additional shares later. The use of a
top-level hash tree also requires that nodes create all shares at once, even
if they don't intend to upload some of them, otherwise the hashroot cannot be
calculated correctly.
To download, the client that wishes to turn a capability into a sequence of
bytes will obtain the blocks from storage servers, use erasure-decoding to
turn them into segments of ciphertext, use the decryption key to convert that
into plaintext, then emit the plaintext bytes to the output target.
CAPABILITIES
@@ -148,11 +129,12 @@ The capability provides both "location" and "identification": you can use it
to retrieve a set of bytes, and then you can use it to validate ("identify")
that these potential bytes are indeed the ones that you were looking for.
The "key-value store" layer is insufficient to provide a usable filesystem,
which requires human-meaningful names. Capabilities sit on the
"global+secure" edge of Zooko's Triangle[1]. They are self-authenticating,
meaning that nobody can trick you into using a file that doesn't match the
capability you used to refer to that file.
The "key-value store" layer doesn't include human-meaningful
names. Capabilities sit on the "global+secure" edge of Zooko's
Triangle[1]. They are self-authenticating, meaning that nobody can trick you
into accepting a file that doesn't match the capability you used to refer to
that file. The filesystem layer (described below) adds human-meaningful names
atop the key-value layer.
SERVER SELECTION
@@ -204,13 +186,15 @@ get back any 3 to recover the file. This results in a 3.3x expansion
factor. In general, you should set N about equal to the number of nodes in
your grid, then set N/k to achieve your desired availability goals.
When downloading a file, the current version just asks all known servers for
any shares they might have, and then downloads the shares from the first
servers that reply, choosing the minimal necessary subset. A future release
will use the server selection algorithm to reduce the number of queries that
must be sent out. This algorithm uses the same consistent-hashing permutation
as on upload, but stops after it has located k shares (instead of all N). This
reduces the number of queries that must be sent before downloading can begin.
The actual number of queries is directly related to the availability of the
nodes and the degree of overlap between the node list used at upload and at


@@ -67,11 +67,8 @@ What sorts of machines are good candidates for running a helper?
To turn a Tahoe-LAFS node into a helper (i.e. to run a helper service in
addition to whatever else that node is doing), edit the tahoe.cfg file in your
node's base directory and set "enabled = true" in the section named
"[helper]".
Then restart the node. This will signal the node to create a Helper service
and listen for incoming requests. Once the node has started, there will be a


@@ -5,15 +5,13 @@ manage them. The current version of this file can be found at
http://allmydata.org/source/tahoe/trunk/docs/historical/historical_known_issues.txt
Issues in newer releases of Tahoe-LAFS can be found at:
http://allmydata.org/source/tahoe/trunk/docs/known_issues.txt
== issues in Tahoe v1.1.0, released 2008-06-11 ==
(Tahoe v1.1.0 was superseded by v1.2.0, which was released 2008-07-21.)
=== more than one file can match an immutable file cap ===


@@ -1,34 +1,34 @@
<!DOCtype HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html lang="en">
<head>
<title>Installing Tahoe-LAFS</title>
<link rev="made" class="mailto" href="mailto:zooko[at]zooko[dot]com">
<meta name="description" content="how to install Tahoe">
<meta name="description" content="how to install Tahoe-LAFS">
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<meta name="keywords" content="tahoe secure decentralized filesystem installation">
<meta name="keywords" content="tahoe-lafs secure decentralized filesystem installation">
</head>
<body>
<h1>About Tahoe-LAFS</h1>
<p>Welcome to <a href="http://allmydata.org/trac/tahoe-lafs">the Tahoe-LAFS project</a>, a secure, decentralized, fault-tolerant filesystem. <a href="about.html">About Tahoe-LAFS.</a>
<h1>How To Install Tahoe-LAFS</h1>
<p>This procedure has been verified to work on Windows, Cygwin, Mac, many flavors of Linux, Solaris, FreeBSD, OpenBSD, and NetBSD. It's likely to work on other platforms. If you have trouble with this install process, please write to <a href="http://allmydata.org/cgi-bin/mailman/listinfo/tahoe-dev">the tahoe-dev mailing list</a>, where friendly hackers will help you out.</p>
<h2>Install Python</h2>
<p>Check if you already have an adequate version of Python installed by running <cite>python -V</cite>. Python v2.4 (v2.4.2 or greater), Python v2.5 or Python v2.6 will work. Python v3 does not work. If you don't have one of these versions of Python installed, then follow the instructions on <a href="http://python.org/download/">the Python download page</a> to download and install Python v2.6.</p>
<p>(If installing on Windows, you now need to manually install the <cite>pywin32</cite> package -- see "More Details" below.)</p>
<h2>Get Tahoe-LAFS</h2>
<p>Download the 1.6.0 release zip file:</p>
<pre><a
href="http://allmydata.org/source/tahoe/releases/allmydata-tahoe-1.5.0.zip">http://allmydata.org/source/tahoe/releases/allmydata-tahoe-1.5.0.zip</a></pre>
href="http://allmydata.org/source/tahoe/releases/allmydata-tahoe-1.6.0.zip">http://allmydata.org/source/tahoe/releases/allmydata-tahoe-1.6.0.zip</a></pre>
<h2>Build Tahoe-LAFS</h2>
<p>Unpack the zip file and cd into the top-level directory.</p>
@@ -38,14 +38,14 @@
<p>Run <cite>bin/tahoe --version</cite> to verify that the executable tool prints out the right version number.</p>
<h2>Run Tahoe-LAFS</h2>
<p>Now you have the Tahoe-LAFS source code installed and are ready to use it to form a decentralized filesystem. The <cite>tahoe</cite> executable in the <cite>bin</cite> directory can configure and launch your Tahoe-LAFS nodes. See <a href="running.html">running.html</a> for instructions on how to do that.</p>
<h2>More Details</h2>
<p>For more details, including platform-specific hints for Debian, Windows, and Mac systems, please see the <a href="http://allmydata.org/trac/tahoe/wiki/InstallDetails">InstallDetails</a> wiki page. If you are running on Windows, you need to manually install "pywin32", as described on that page. Debian/Ubuntu users: use the instructions written above! Do not try to install Tahoe-LAFS using apt-get.
<p>For more details, including platform-specific hints for Debian, Windows, and Mac systems, please see the <a href="http://allmydata.org/trac/tahoe/wiki/InstallDetails">InstallDetails</a> wiki page. If you are running on Windows, you need to manually install "pywin32", as described on that page.</p>
</body>
</html>


@@ -13,12 +13,6 @@ The foolscap distribution includes a utility named "flogtool" (usually at
/usr/bin/flogtool) which is used to get access to many foolscap logging
features.
== Realtime Logging ==
When you are working on Tahoe code, and want to see what the node is doing,


@@ -1,15 +1,15 @@
<!DOCtype HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html lang="en">
<head>
<title>Running Tahoe-LAFS</title>
<link rev="made" class="mailto" href="mailto:zooko[at]zooko[dot]com">
<meta name="description" content="how to run Tahoe">
<meta name="description" content="how to run Tahoe-LAFS">
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<meta name="keywords" content="tahoe secure decentralized filesystem operation">
<meta name="keywords" content="tahoe-lafs secure decentralized filesystem operation">
</head>
<body>
<h1>How To Start Tahoe-LAFS</h1>
<p>This is how to run a Tahoe client or a complete Tahoe grid. First you
have to install the Tahoe software, as documented in <a


@@ -1,28 +1,37 @@
ANNOUNCING v1.6 of Tahoe-LAFS
The Tahoe-LAFS team is pleased to announce the immediate
availability of version 1.6 of Tahoe-LAFS, an extremely
reliable distributed key-value store and cloud filesystem.
Tahoe-LAFS is the first cloud storage system which offers
"provider-independent security" -- meaning that not even your
cloud service provider can read or alter your data without your
consent. Here is the one-page explanation of its unique
security and fault-tolerance properties:
http://allmydata.org/source/tahoe/trunk/docs/about.html
Tahoe-LAFS v1.6.0 is the successor to v1.5.0, which was
released August 1, 2009. In this major new release, we've added
deep-immutable directories (cryptographically unalterable
permanent snapshots), greatly increased performance for some
common operations, and improved the help text, documentation,
command-line options, and web user interface. The FUSE plugin
has been fixed. We also fixed a few bugs. See the release notes
for details:
http://allmydata.org/source/tahoe/trunk/relnotes.txt
In addition to the functionality of Tahoe-LAFS itself, a crop of related
projects have sprung up to extend it and to integrate it into other tools.
These include frontends for Windows, Macintosh, JavaScript, and iPhone, and
plugins for duplicity, bzr, Hadoop, and TiddlyWiki, and more. See the Related
Projects page on the wiki:
In addition to the core storage system itself, a crop of
related projects have sprung up to extend it and to integrate
it into operating systems and applications. These include
frontends for Windows, Macintosh, JavaScript, and iPhone, and
plugins for Hadoop, bzr, duplicity, TiddlyWiki, and more. See
the Related Projects page:
http://allmydata.org/trac/tahoe/wiki/RelatedProjects
Tahoe is the basis of the consumer backup product from Allmydata, Inc. --
http://allmydata.com .
We believe that erasure coding, strong encryption, Free/Open Source Software
and good engineering make Tahoe-LAFS safer than other storage technologies.
We believe that erasure coding, strong encryption, Free/Open
Source Software and careful engineering make Tahoe-LAFS safer
than other storage technologies.


@@ -2,41 +2,41 @@ ANNOUNCING Tahoe, the Least-Authority File System, v1.6
The Tahoe-LAFS team is pleased to announce the immediate
availability of version 1.6 of Tahoe-LAFS, an extremely
reliable distributed key-value store and cloud storage system.
reliable distributed key-value store and cloud filesystem.
Tahoe-LAFS is the first cloud storage system which offers
"provider-independent security" -- meaning the privacy and
security of your data is not dependent on the behavior of your
cloud service provider. Here is the one-page explanation of its
unique security and fault-tolerance properties:
"provider-independent security" -- meaning that not even your
cloud service provider can read or alter your data without your
consent. Here is the one-page explanation of its unique
security and fault-tolerance properties:
http://allmydata.org/source/tahoe/trunk/docs/about.html
Tahoe-LAFS v1.6.0 is the successor to v1.5.0, which was
released August 1, 2009 [1]. In this major new release, we've
added deep-immutable directories (i.e. permanent snapshots),
greatly increased performance for some common operations, and
improved the help text, documentation, command-line options,
and web user interface. The FUSE plugin has been fixed. We also
fixed a few bugs. See the NEWS file [2] for details.
In addition to the core storage system itself, a crop of
related projects have sprung up to extend it and to integrate
it into operating systems and applications. These include
frontends for Windows, Macintosh, JavaScript, and iPhone, and
plugins for Hadoop, bzr, duplicity, TiddlyWiki, and more. See
the Related Projects page on the wiki [3].
released August 1, 2009 [1]. This release offers major
performance improvements, usability improvements, and one major
new feature: deep-immutable directories (cryptographically
unalterable permanent snapshots). See the NEWS file [2] for
details.
WHAT IS IT GOOD FOR?
With Tahoe-LAFS, you distribute your filesystem across multiple
With Tahoe-LAFS, you spread your filesystem across multiple
servers, and even if some of the servers fail or are taken over
by an attacker, the entire filesystem continues to work
correctly, and continues to preserve your privacy and
security. You can easily and securely share chosen files and
directories with others.
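The fault tolerance described above comes from erasure coding: a file is
split into N shares such that any K of them suffice to rebuild it, so the
loss of up to N-K servers costs no data. Tahoe-LAFS uses a real
Reed-Solomon code (the zfec library) with configurable K-of-N; the toy
sketch below is not Tahoe-LAFS code, just an illustration of the
principle using a 2-of-3 XOR code:

```python
# Toy 2-of-3 erasure code: any two of the three shares are enough
# to rebuild the original data. This is only an illustration; the
# real system uses Reed-Solomon coding via zfec.

def encode(data: bytes) -> list[bytes]:
    if len(data) % 2:
        data += b"\x00"              # pad to an even length
    half = len(data) // 2
    a, b = data[:half], data[half:]
    parity = bytes(x ^ y for x, y in zip(a, b))
    return [a, b, parity]

def decode(shares: dict[int, bytes]) -> bytes:
    # shares maps share index (0 = first half, 1 = second half,
    # 2 = parity) to share bytes; any two entries suffice.
    if 0 in shares and 1 in shares:
        a, b = shares[0], shares[1]
    elif 0 in shares:                # recover b from a XOR parity
        a = shares[0]
        b = bytes(x ^ y for x, y in zip(a, shares[2]))
    else:                            # recover a from b XOR parity
        b = shares[1]
        a = bytes(x ^ y for x, y in zip(b, shares[2]))
    return a + b
```

Losing any single share leaves the data fully recoverable, at a storage
cost of 1.5x; real K-of-N parameters trade expansion against resilience.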
In addition to the core storage system itself, volunteers have
developed related projects to integrate it with other
tools. These include frontends for Windows, Macintosh,
JavaScript, and iPhone, and plugins for Hadoop, bzr, duplicity,
TiddlyWiki, and more. As of this release, contributors have
added an Android frontend and a working read-only FUSE
frontend. See the Related Projects page on the wiki [3].
We believe that the combination of erasure coding, strong
encryption, Free/Open Source Software and careful engineering
make Tahoe-LAFS safer than RAID, removable drive, tape, on-line
@@ -112,17 +112,18 @@ SPONSORSHIP
Tahoe-LAFS was originally developed thanks to the sponsorship
of Allmydata, Inc. [12], a provider of commercial backup
services. Allmydata, Inc. created the Tahoe-LAFS project and
services. Allmydata founded the Tahoe-LAFS project and
contributed hardware, software, ideas, bug reports,
suggestions, demands, and money (employing several Tahoe-LAFS
hackers and instructing them to spend part of their work time
on this Free Software project). Also they awarded customized
suggestions, demands, and they employed several Tahoe-LAFS
hackers and instructed them to spend part of their work time on
this Free Software project. Also they awarded customized
t-shirts to hackers who found security flaws in Tahoe-LAFS (see
http://hacktahoe.org ). After discontinuing funding of
the Hack Tahoe-LAFS Hall Of Fame [13]). After discontinuing funding of
Tahoe-LAFS R&D in early 2009, Allmydata, Inc. has continued to
provide servers, co-lo space, bandwidth, and thank-you gifts to
the open source project. Thank you to Allmydata, Inc. for their
generous and public-spirited support.
provide servers, co-lo space, bandwidth, and small personal
gifts as tokens of appreciation. (Also they continue to provide
bug reports.) Thank you to Allmydata, Inc. for their generous
and public-spirited support.
This is the third release of Tahoe-LAFS to be created solely as
a labor of love by volunteers. Thank you very much to the
@@ -132,11 +133,11 @@ Tahoe-LAFS possible.
Zooko Wilcox-O'Hearn
on behalf of the Tahoe-LAFS team
January 31 2010
February 1, 2010
Boulder, Colorado, USA
[1] http://allmydata.org/trac/tahoe/browser/relnotes.txt?rev=4042
[2] http://allmydata.org/trac/tahoe/browser/NEWS?rev=4033
[2] http://allmydata.org/trac/tahoe/browser/NEWS?rev=4189
[3] http://allmydata.org/trac/tahoe/wiki/RelatedProjects
[4] http://allmydata.org/trac/tahoe/browser/docs/known_issues.txt
[5] http://allmydata.org/trac/tahoe/browser/COPYING.GPL
@@ -144,6 +145,7 @@ Boulder, Colorado, USA
[7] http://allmydata.org/source/tahoe/trunk/docs/install.html
[8] http://allmydata.org/cgi-bin/mailman/listinfo/tahoe-dev
[9] http://allmydata.org/trac/tahoe/roadmap
[10] http://allmydata.org/trac/tahoe/browser/CREDITS?rev=4035
[10] http://allmydata.org/trac/tahoe/browser/CREDITS?rev=4186
[11] http://allmydata.org/trac/tahoe/wiki/Dev
[12] http://allmydata.com
[13] http://hacktahoe.org