Welcome to the AllMyData "tahoe" project. This project implements a secure, distributed, fault-tolerant storage mesh. The basic idea is that the data in this storage mesh is spread over all participating nodes, using an algorithm that can recover the data even if a majority of the nodes are no longer available. The interface to the storage mesh allows you to store and fetch files, either by self-authenticating cryptographic identifier or by filename and path.

GETTING THE SOURCE CODE:

The code is available via darcs by running the following command:

 darcs get http://allmydata.org/source/tahoe/trunk

See http://allmydata.org for all kinds of information, news, and community contributions.

LICENCE:

Tahoe is offered under the GNU General Public License (v2 or later), with the added permission that, if you become obligated to release a derived work under this licence (as per section 2.b), you may delay the fulfillment of this obligation for up to 12 months. See the COPYING file for details.

DEPENDENCIES:

Note: all of the following dependencies can probably be installed through your standard package management tool if you are running a modern Unix operating system. If you are running any modern Linux or *BSD distribution then you can almost certainly get them through your standard package manager. If you are running Mac OS X then the "fink" package management tool does not have most of these packages, but the "darwinports" package management tool appears to have them. If you are running on Windows then I'm afraid you'll have to install them by hand (although the "cygwin" package management tool does have some of them). If you are running on Solaris, I would like to hear from you -- I have no idea how it is done on Solaris nowadays.

 * a C compiler (language)
 * GNU make (build tool)
 * Python 2.4 or newer (tested against 2.4 and 2.5.1; note that v2.5 or higher is required on Windows-native), including development headers (language)
   http://python.org/
 * Python Twisted (tested against both 2.4 and 2.5) (network and operating system integration library)
   http://twistedmatrix.com/
   You need the following subpackages, which are included in the default Twisted distribution:
    * core (the standard Twisted package)
    * web, trial, conch
   Twisted requires zope.interface, a copy of which is included in the Twisted distribution.
 * Python Nevow (probably 0.9.0 or later) (web presentation language)
   http://divmod.org/trac/wiki/DivmodNevow
 * Python setuptools (build and distribution tool)
   http://peak.telecommunity.com/DevCenter/EasyInstall#installation-instructions
 * Python PyOpenSSL (0.6 or later) (secure transport layer)
   http://pyopenssl.sourceforge.net
   To install PyOpenSSL on Windows-native, download this:
   http://allmydata.org/source/pyOpenSSL-0.6.win32-py2.5.exe
 * to build the debian packages you will need all the usual debian-packaging tools, which means the 'build-essential' metapackage and all of the packages listed as "Build-Depends" in DIST/debian/control for your distribution. You will also want the 'fakeroot' package to allow the top-level 'make deb-DIST' targets to work.
 * on Windows, the pywin32 package
   http://sourceforge.net/projects/pywin32/

BUILDING:

Just type 'make'. This works on Windows too, provided that you have the dependencies mentioned above (either a normal cygwin build or a mingw-style native build is supported by the makefile -- the cygwin build is the default). If the desired version of 'python' is not already on your PATH, then type 'make PYTHON=/path/to/your/preferred/python'.
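As a concrete sketch, here is one way to get from a bare Debian-style system to a built source tree. The apt package names below are assumptions about how your distribution labels the dependencies listed above, not something this project ships; check your own package manager's catalogue.

 # Install build dependencies (package names are guesses for a
 # Debian-style system; adjust for your distribution):
 sudo apt-get install build-essential python-dev python-twisted \
     python-nevow python-setuptools python-pyopenssl darcs

 # Fetch the source and build it:
 darcs get http://allmydata.org/source/tahoe/trunk tahoe
 cd tahoe
 make

 # Or, to build against a specific interpreter (example path):
 make PYTHON=/usr/bin/python2.5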
'make test' runs the unit test suite.

INSTALLING:

The Debian Way: If you're running on a debian system, use 'make deb-dapper', 'make deb-sid', 'make deb-edgy', or 'make deb-feisty' to construct a debian package named 'allmydata-tahoe', which you can then install.

The Python Way: You'll need to run four separate install steps, one for each of the four subpackages (allmydata, allmydata.Crypto, foolscap, and zfec). If you use GNU stow, add the options "--prefix=." and "--root=/usr/local/stow/${PACKAGE}" to the "setup.py install" command.

 for PACKAGE in zfec Crypto foolscap ; do
   cd src/${PACKAGE} && python setup.py install && cd ../..
 done
 # the tahoe subpackage's setup.py script is in the root directory
 PACKAGE=tahoe python setup.py install

The Running-In-Place Way: To run from a source tree (without installing first), type 'make', which will put all the necessary libraries into a local directory named "./instdir/", which you can then add to your PYTHONPATH.

To Test That It Is Properly Installed: To test that all the modules got installed properly, start a python interpreter and import modules as follows:

 % python
 Python 2.4.4 (#2, Jan 13 2007, 17:50:26)
 [GCC 4.1.2 20061115 (prerelease) (Debian 4.1.1-21)] on linux2
 Type "help", "copyright", "credits" or "license" for more information.
 >>> import zfec
 >>> import allmydata.Crypto
 >>> import foolscap
 >>> import allmydata.interfaces

RUNNING:

If you installed one of the debian packages constructed by "make deb-*" then it creates an 'allmydata-tahoe' executable, usually in /usr/bin . If you didn't install a package, you can find allmydata-tahoe in bin/ . This tool is used to create, start, and stop nodes. Each node lives in a separate base directory, inside of which you can add files to configure and control the node. Nodes also read and write files within that directory.

A mesh consists of a single central 'introducer and vdrive' node and a large number of 'client' nodes. If you are joining an existing mesh, the introducer-and-vdrive node will already be running, and you'll just need to create a client node. If you're creating a brand new mesh, you'll need to create both an introducer-and-vdrive node and a client (and then invite other people to create their own client nodes and join your mesh).

The introducer(-and-vdrive) node is constructed by running 'allmydata-tahoe create-introducer --basedir $HERE'. Once constructed, you can start the introducer by running 'allmydata-tahoe start --basedir $HERE' (or, if you are already in the introducer's base directory, just type 'allmydata-tahoe start'). Inside that base directory, there will be a pair of files 'introducer.furl' and 'vdrive.furl'. Make a copy of these, as they'll be needed on the client nodes.

To construct a client node, pick a new working directory for it, then run 'allmydata-tahoe create-client --basedir $HERE'. Copy the two .furl files from the introducer into this new directory, then run 'allmydata-tahoe start --basedir $HERE'. After that, the client node should be off and running. The first thing it will do is connect to the introducer and introduce itself to all other nodes on the mesh. You can follow its progress by looking at the $HERE/twistd.log file.

To actually use the client, enable the web interface by writing a port number (like "8080") into a file named $HERE/webport and then restarting the node with 'allmydata-tahoe restart --basedir $HERE'. This will prompt the client node to run a webserver on the desired port, through which you can view, upload, download, and delete files.
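Putting those steps together, a minimal from-scratch mesh might be set up roughly as follows. The directory names (~/introducer, ~/client1) are arbitrary examples, and this assumes the allmydata-tahoe executable is already on your PATH:

 # Create and start the central introducer-and-vdrive node:
 allmydata-tahoe create-introducer --basedir ~/introducer
 allmydata-tahoe start --basedir ~/introducer

 # Create a client node, give it the introducer's contact info, start it:
 allmydata-tahoe create-client --basedir ~/client1
 cp ~/introducer/introducer.furl ~/introducer/vdrive.furl ~/client1/
 allmydata-tahoe start --basedir ~/client1

 # Enable the web interface on port 8080 and restart so it takes effect:
 echo 8080 > ~/client1/webport
 allmydata-tahoe restart --basedir ~/client1

 # Watch the node connect to the introducer and the rest of the mesh:
 tail -f ~/client1/twistd.log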
A client node directory can also be created without installing the code first. Just use 'make create-client', and a new directory named 'CLIENTDIR' will be created inside the top of the source tree. Copy the relevant .furl files in, set the webport, then start the node by using 'make start-client'. To stop it again, use 'make stop-client'. Similar makefile targets exist for making and running an introducer node.

There is a public mesh available for testing. Look at the wiki page (http://allmydata.org) for the necessary .furl data.
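For the running-in-place case, the same workflow looks roughly like this; the introducer's .furl files are assumed to come from an existing mesh (such as the public test mesh) or from an introducer node of your own:

 make                          # build into ./instdir/
 make create-client            # creates ./CLIENTDIR/ at the top of the tree
 cp /path/to/introducer.furl /path/to/vdrive.furl CLIENTDIR/
 echo 8080 > CLIENTDIR/webport
 make start-client             # start the node
 make stop-client              # ...and stop it again when you are done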