format docs for Sphinx

Added indexes, fixed cross-references.

Also a few pip-related cleanups I noticed along the way.
Brian Warner 2016-03-30 00:55:21 -07:00
parent 142185bb86
commit f81900ee35
30 changed files with 284 additions and 318 deletions


@ -1,11 +1,11 @@
.. -*- coding: utf-8-with-signature-unix; fill-column: 77 -*-
******************
Getting Tahoe-LAFS
******************
*********************
Installing Tahoe-LAFS
*********************
Welcome to `the Tahoe-LAFS project`_, a secure, decentralized, fault-tolerant
storage system. See `<about.rst>`_ for an overview of the architecture and
storage system. See :doc:`about` for an overview of the architecture and
security properties of the system.
This procedure should work on Windows, Mac, OpenSolaris, and too many flavors
@ -30,11 +30,11 @@ Pre-Packaged Versions
You may not need to build Tahoe at all.
If you are on Windows, please see `<windows.rst>`_ for platform-specific
If you are on Windows, please see :doc:`windows` for platform-specific
instructions.
If you are on a Mac, you can either follow these instructions, or use the
pre-packaged bundle described in `<OS-X.rst>`_. The Tahoe project hosts
pre-packaged bundle described in :doc:`OS-X`. The Tahoe project hosts
pre-compiled "wheels" for all dependencies, so use the ``--find-links=``
option described below to avoid needing a compiler.
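For instance, a hedged sketch of what that looks like (``WHEEL_HOST_URL`` is
only a placeholder for the wheel-host address described below)::

    pip install --find-links=WHEEL_HOST_URL tahoe-lafs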
@ -299,4 +299,4 @@ Using Tahoe-LAFS
Now you are ready to deploy a decentralized filesystem. You will use the
``tahoe`` executable to create, configure, and launch your Tahoe-LAFS nodes.
See `<running.rst>`_ for instructions on how to do that.
See :doc:`running` for instructions on how to do that.


@ -16,8 +16,8 @@ Installers are available from this directory:
Download the latest .pkg file to your computer and double-click on it. This
will install to /Applications/tahoe.app, however the app icon there is not
how you use Tahoe (launching it will get you a dialog box with a reminder to
use Terminal). `/Applications/tahoe.app/bin/tahoe` is the executable. The
use Terminal). ``/Applications/tahoe.app/bin/tahoe`` is the executable. The
next shell you start ought to have that directory in your $PATH (thanks to a
file in `/etc/paths.d/`), unless your `.profile` overrides it.
file in ``/etc/paths.d/``), unless your ``.profile`` overrides it.
Tahoe-LAFS is also easy to install with pip, as described in the README.


@ -127,9 +127,7 @@ For more technical detail, please see the `the doc page`_ on the Wiki.
Get Started
===========
To use Tahoe-LAFS, please see INSTALL.rst_.
.. _INSTALL.rst: INSTALL.rst
To use Tahoe-LAFS, please see :doc:`INSTALL`.
License
=======


@ -47,7 +47,7 @@ provide read-only access to those files, allowing users to recover them.
There are several other applications built on top of the Tahoe-LAFS
filesystem (see the RelatedProjects_ page of the wiki for a list).
.. _docs/specifications directory: specifications
.. _docs/specifications directory: https://github.com/tahoe-lafs/tahoe-lafs/tree/master/docs/specifications
.. _RelatedProjects: https://tahoe-lafs.org/trac/tahoe-lafs/wiki/RelatedProjects
The Key-Value Store
@ -316,9 +316,7 @@ commercially-run grid for which all of the storage servers are in a colo
facility with high interconnect bandwidth. In this case, the helper is placed
in the same facility, so the helper-to-storage-server bandwidth is huge.
See helper.rst_ for details about the upload helper.
.. _helper.rst: helper.rst
See :doc:`helper` for details about the upload helper.
The Filesystem Layer
@ -370,11 +368,8 @@ clients are responsible for renewing their leases on a periodic basis at
least frequently enough to prevent any of the leases from expiring before the
next renewal pass.
See garbage-collection.rst_ for further information, and for how to configure
garbage collection.
.. _garbage-collection.rst: garbage-collection.rst
See :doc:`garbage-collection` for further information, and for how to
configure garbage collection.
File Repairer
=============


@ -4,9 +4,7 @@
Things To Be Careful About As We Venture Boldly Forth
=======================================================
See also known_issues.rst_.
.. _known_issues.rst: known_issues.rst
See also :doc:`known_issues`.
Timing Attacks
==============


@ -104,7 +104,7 @@ set the ``tub.location`` option described below.
This controls where the node's web server should listen, providing node
status and, if the node is a client/server, providing web-API service as
defined in webapi.rst_.
defined in :doc:`frontends/webapi`.
This file contains a Twisted "strports" specification such as "``3456``"
or "``tcp:3456:interface=127.0.0.1``". The "``tahoe create-node``" or
@ -297,8 +297,6 @@ set the ``tub.location`` option described below.
used for files that usually (on a Unix system) go into ``/tmp``. The
string will be interpreted relative to the node's base directory.
.. _webapi.rst: frontends/webapi.rst
Client Configuration
====================
@ -316,7 +314,7 @@ Client Configuration
``helper.furl = (FURL string, optional)``
If provided, the node will attempt to connect to and use the given helper
for uploads. See helper.rst_ for details.
for uploads. See :doc:`helper` for details.
``key_generator.furl = (FURL string, optional)``
@ -352,7 +350,7 @@ Client Configuration
ratios are more reliable, and small ``N``/``k`` ratios use less disk
space. ``N`` cannot be larger than 256, because of the 8-bit
erasure-coding algorithm that Tahoe-LAFS uses. ``k`` cannot be greater
than ``N``. See performance.rst_ for more details.
than ``N``. See :doc:`performance` for more details.
``shares.happy`` allows you control over how well to "spread out" the
shares of an immutable file. For a successful upload, shares are
@ -390,11 +388,7 @@ Client Configuration
controlled by this parameter and will always use SDMF. We may revisit
this decision in future versions of Tahoe-LAFS.
See mutable.rst_ for details about mutable file formats.
.. _helper.rst: helper.rst
.. _performance.rst: performance.rst
.. _mutable.rst: specifications/mutable.rst
See :doc:`specifications/mutable` for details about mutable file formats.
``peers.preferred = (string, optional)``
@ -436,33 +430,28 @@ HTTP
directories and files, as well as a number of pages to check on the
status of your Tahoe node. It also provides a machine-oriented "WAPI",
with a REST-ful HTTP interface that can be used by other programs
(including the CLI tools). Please see webapi.rst_ for full details, and
the ``web.port`` and ``web.static`` config variables above. The
`download-status.rst`_ document also describes a few WUI status pages.
(including the CLI tools). Please see :doc:`frontends/webapi` for full
details, and the ``web.port`` and ``web.static`` config variables above.
:doc:`frontends/download-status` also describes a few WUI status pages.
CLI
The main "bin/tahoe" executable includes subcommands for manipulating the
The main ``tahoe`` executable includes subcommands for manipulating the
filesystem, uploading/downloading files, and creating/running Tahoe
nodes. See CLI.rst_ for details.
nodes. See :doc:`frontends/CLI` for details.
SFTP, FTP
Tahoe can also run both SFTP and FTP servers, and map a username/password
pair to a top-level Tahoe directory. See FTP-and-SFTP.rst_ for
instructions on configuring these services, and the ``[sftpd]`` and
pair to a top-level Tahoe directory. See :doc:`frontends/FTP-and-SFTP`
for instructions on configuring these services, and the ``[sftpd]`` and
``[ftpd]`` sections of ``tahoe.cfg``.
Drop-Upload
As of Tahoe-LAFS v1.9.0, a node running on Linux can be configured to
automatically upload files that are created or changed in a specified
local directory. See drop-upload.rst_ for details.
.. _download-status.rst: frontends/download-status.rst
.. _CLI.rst: frontends/CLI.rst
.. _FTP-and-SFTP.rst: frontends/FTP-and-SFTP.rst
.. _drop-upload.rst: frontends/drop-upload.rst
local directory. See :doc:`frontends/drop-upload` for details.
Storage Server Configuration
@ -522,10 +511,9 @@ Storage Server Configuration
These settings control garbage collection, in which the server will
delete shares that no longer have an up-to-date lease on them. Please see
garbage-collection.rst_ for full details.
:doc:`garbage-collection` for full details.
.. _#390: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/390
.. _garbage-collection.rst: garbage-collection.rst
Running A Helper
@ -538,12 +526,12 @@ service.
``enabled = (boolean, optional)``
If ``True``, the node will run a helper (see helper.rst_ for details).
If ``True``, the node will run a helper (see :doc:`helper` for details).
The helper's contact FURL will be placed in ``private/helper.furl``, from
which it can be copied to any clients that wish to use it. Clearly nodes
should not both run a helper and attempt to use one: do not create
``helper.furl`` and also define ``[helper]enabled`` in the same node.
The default is ``False``.
``helper.furl`` and also define ``[helper]enabled`` in the same node. The
default is ``False``.
Running An Introducer
@ -634,7 +622,7 @@ This section describes these other files.
``private/helper.furl``
If the node is running a helper (for use by other clients), its contact
FURL will be placed here. See helper.rst_ for more details.
FURL will be placed here. See :doc:`helper` for more details.
``private/root_dir.cap`` (optional)
@ -696,7 +684,7 @@ Other files
files. The web-API has a facility to block access to filecaps by their
storage index, returning a 403 "Forbidden" error instead of the original
file. For more details, see the "Access Blacklist" section of
webapi.rst_.
:doc:`frontends/webapi`.
Example
@ -739,6 +727,4 @@ Old Configuration Files
Tahoe-LAFS releases before v1.3.0 had no ``tahoe.cfg`` file, and used
distinct files for each item. This is no longer supported and if you have
configuration in the old format you must manually convert it to the new
format for Tahoe-LAFS to detect it. See `historical/configuration.rst`_.
.. _historical/configuration.rst: historical/configuration.rst
format for Tahoe-LAFS to detect it. See :doc:`historical/configuration`.
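Pulling several of the options discussed above into one place, a minimal
illustrative ``tahoe.cfg`` sketch might look like this (the values are
examples rather than recommendations, and the helper FURL is a placeholder)::

    [node]
    web.port = tcp:3456:interface=127.0.0.1

    [client]
    # optional: paste the FURL handed out by a helper operator
    #helper.furl = (FURL string)
    shares.needed = 3
    shares.happy = 7
    shares.total = 10

    [helper]
    enabled = false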


@ -35,8 +35,8 @@ Dependency Packages
Tahoe depends upon a number of additional libraries. When building Tahoe from
source, any dependencies that are not already present in the environment will
be downloaded (via ``easy_install``) and stored in the ``support/lib``
directory.
be downloaded (via ``pip`` and ``easy_install``) and installed in the
virtualenv.
The ``.deb`` packages, of course, rely solely upon other ``.deb`` packages.
For reference, here is a list of the debian package names that provide Tahoe's


@ -13,7 +13,7 @@ https://pip.pypa.io/en/stable/user_guide/#installing-from-local-packages .
Before you get shipwrecked (or leave the internet for a while), do this from
your tahoe source tree:
* `pip download --dest tahoe-deps .`
* ``pip download --dest tahoe-deps .``
That will create a directory named "tahoe-deps", and download everything that
the current project (".", i.e. tahoe) needs. It will fetch wheels if
@ -21,10 +21,10 @@ available, otherwise it will fetch tarballs. It will not compile anything.
Later, on the plane, do this (in an active virtualenv):
* `pip install --no-index --find-links=tahoe-deps --editable .`
* ``pip install --no-index --find-links=tahoe-deps --editable .``
That tells pip to not try to contact PyPI (--no-index) and to use the
tarballs and wheels in `tahoe-deps/` instead. That will compile anything
tarballs and wheels in ``tahoe-deps/`` instead. That will compile anything
necessary, create (and cache) wheels, and install them.
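Putting those two commands together, the whole offline workflow is a short
sketch (run from a tahoe source tree, with a virtualenv active for the
install step)::

    # while online: fetch wheels (or tarballs) for tahoe and its dependencies
    pip download --dest tahoe-deps .

    # later, offline: install from that directory instead of PyPI
    pip install --no-index --find-links=tahoe-deps --editable .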
If you need to rebuild the virtualenv for whatever reason, run the "pip
@ -37,7 +37,7 @@ Compiling Ahead Of Time
If you want to save some battery on the flight, you can compile the wheels
ahead of time. Just do the install step before you go offline. The wheels
will be cached as a side-effect. Later, on the plane, you can populate a new
virtualenv with the same `pip install` command above, and it will use the
virtualenv with the same ``pip install`` command above, and it will use the
cached wheels instead of recompiling them.
The pip wheel cache
@ -51,21 +51,21 @@ HTTP cache, it will not actually download anything. Then it tries to build a
wheel: if it already has one in the wheel cache (downloaded or built
earlier), it will not actually build anything.
If it cannot contact PyPI, it will fail. The `--no-index` above is to tell it
to skip the PyPI step, but that leaves it with no source of packages. The
`--find-links=` argument is what provides an alternate source of packages.
If it cannot contact PyPI, it will fail. The ``--no-index`` above is to tell
it to skip the PyPI step, but that leaves it with no source of packages. The
``--find-links=`` argument is what provides an alternate source of packages.
The HTTP and wheel caches are not single flat directories: they use a
hierarchy of subdirectories, named after a hash of the URL or name of the
object being stored (this is to avoid filesystem limitations on the size of a
directory). As a result, the wheel cache is not suitable for use as a
`--find-links=` target (but see below).
``--find-links=`` target (but see below).
There is a command named `pip wheel` which only creates wheels (and stores
them in `--wheel-dir=`, which defaults to the current directory). This
There is a command named ``pip wheel`` which only creates wheels (and stores
them in ``--wheel-dir=``, which defaults to the current directory). This
command does not populate the wheel cache: it reads from (and writes to) the
HTTP cache, and reads from the wheel cache, but will only save the generated
wheels into the directory you specify with `--wheel-dir=`. It does not also
wheels into the directory you specify with ``--wheel-dir=``. It does not also
write them to the cache.
Where Does The Cache Live?
@ -75,53 +75,53 @@ Pip's cache location depends upon the platform. On linux, it defaults to
~/.cache/pip/ (both http/ and wheels/). On OS-X (homebrew), it uses
~/Library/Caches/pip/ . On Windows, try ~\AppData\Local\pip\cache .
The location can be overridden by `pip.conf`. Look for the "wheel-dir",
The location can be overridden by ``pip.conf``. Look for the "wheel-dir",
"cache-dir", and "find-links" options.
How Can I Tell If It's Using The Cache?
---------------------------------------
When "pip install" has to download a source tarball (and build a wheel), it
will say things like:
will say things like::
* Collecting zfec
* Downloading zfec-1.4.24.tar.gz (175kB)
* Building wheels for collected packages: zfec
* Running setup.py bdist_wheel for zfec ... done
* Stored in directory: $CACHEDIR
* Successfully built zfec
* Installing collected packages: zfec
* Successfully installed zfec-1.4.24
Collecting zfec
Downloading zfec-1.4.24.tar.gz (175kB)
Building wheels for collected packages: zfec
Running setup.py bdist_wheel for zfec ... done
Stored in directory: $CACHEDIR
Successfully built zfec
Installing collected packages: zfec
Successfully installed zfec-1.4.24
When "pip install" can use a cached downloaded tarball, but does not have a
cached wheel, it will say:
cached wheel, it will say::
* Collecting zfec
* Using cached zfec-1.4.24.tar.gz
* Building wheels for collected packages: zfec
* Running setup.py bdist_wheel for zfec ... done
* Stored in directory: $CACHEDIR
* Successfully built zfec
* Installing collected packages: zfec
* Successfully installed zfec-1.4.24
Collecting zfec
Using cached zfec-1.4.24.tar.gz
Building wheels for collected packages: zfec
Running setup.py bdist_wheel for zfec ... done
Stored in directory: $CACHEDIR
Successfully built zfec
Installing collected packages: zfec
Successfully installed zfec-1.4.24
When "pip install" can use a cached wheel, it will just say:
When "pip install" can use a cached wheel, it will just say::
* Collecting zfec
* Installed collected packages: zfec
* Successfully installed zfec-1.4.24
Collecting zfec
Installed collected packages: zfec
Successfully installed zfec-1.4.24
Many packages publish pre-built wheels next to their source tarballs. This is
common for non-platform-specific (pure-python) packages. It is also common
for them to provide pre-compiled Windows and OS-X wheels, so users do not have
to have a compiler installed (pre-compiled Linux wheels are not common,
because there are too many platform variations). When "pip install" can use a
downloaded wheel like this, it will say:
downloaded wheel like this, it will say::
* Collecting six
* Downloading six-1.10.0-py2.py3-none-any.whl
* Installing collected packages: six
* Successfully installed six-1.10.0
Collecting six
Downloading six-1.10.0-py2.py3-none-any.whl
Installing collected packages: six
Successfully installed six-1.10.0
Note that older versions of pip do not always use wheels, or the cache. Pip
8.0.0 or newer should be ok. The version of setuptools may also be
@ -130,39 +130,39 @@ significant.
Another Approach
----------------
An alternate approach is to set your `pip.conf` to install wheels into the
same directory that it will search for links, and use `pip wheel` to add
wheels to the cache. The `pip.conf` will look like:
An alternate approach is to set your ``pip.conf`` to install wheels into the
same directory that it will search for links, and use ``pip wheel`` to add
wheels to the cache. The ``pip.conf`` will look like::
[global]
wheel-dir = ~/.pip/wheels
find-links = ~/.pip/wheels
(see https://pip.pypa.io/en/stable/user_guide/#configuration to find out
where your `pip.conf` lives, but `~/.pip/pip.conf` probably works)
where your ``pip.conf`` lives, but ``~/.pip/pip.conf`` probably works)
While online, you populate the wheel-dir (from a tahoe source tree) with:
* `pip wheel .`
* ``pip wheel .``
That compiles everything, so it may take a little while. Note that you can
also add specific packages (and their dependencies) any time you like, with
something like `pip wheel zfec`.
something like ``pip wheel zfec``.
Later, you do the offline install (in a virtualenv) with just:
* `pip install --no-index --editable .`
* ``pip install --no-index --editable .``
If/when you have network access, omit the `--no-index` and it will check with
PyPI for the most recent versions (and still use the stashed wheels if
If/when you have network access, omit the ``--no-index`` and it will check
with PyPI for the most recent versions (and still use the stashed wheels if
appropriate).
The upside is that the only extra `pip install` argument is `--no-index`, and
you don't need to remember the `--find-links` or `--dest` arguments.
The upside is that the only extra ``pip install`` argument is ``--no-index``,
and you don't need to remember the ``--find-links`` or ``--dest`` arguments.
The downside of this approach is that `pip install` does not populate the
The downside of this approach is that ``pip install`` does not populate the
wheel-dir (it populates the normal wheel cache, but not ~/.pip/wheels). Only
an explicit `pip wheel` will populate ~/.pip/wheels. So if you do a `pip
install` (but not a `pip wheel`), then go offline, a second `pip install
--no-index` may fail: the wheels it needs may be somewhere in the
wheel-cache, but not in the `--find-links=` directory.
an explicit ``pip wheel`` will populate ~/.pip/wheels. So if you do a ``pip
install`` (but not a ``pip wheel``), then go offline, a second ``pip install
--no-index`` may fail: the wheels it needs may be somewhere in the
wheel-cache, but not in the ``--find-links=`` directory.
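To summarize this approach, a sketch of the two halves (assuming the
``pip.conf`` shown above)::

    # while online, from a tahoe source tree: build wheels into ~/.pip/wheels
    pip wheel .

    # later, offline, in a fresh virtualenv: install using only those wheels
    pip install --no-index --editable .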


@ -23,21 +23,10 @@ The Tahoe-LAFS CLI commands
Overview
========
Tahoe-LAFS provides a single executable named "``tahoe``", which can be used to
create and manage client/server nodes, manipulate the filesystem, and perform
several debugging/maintenance tasks.
This executable lives in the source tree at "``bin/tahoe``". Once you've done a
build (by running "``make``" or "``python setup.py build``"), ``bin/tahoe`` can
be run in-place: if it discovers that it is being run from within a Tahoe-LAFS
source tree, it will modify ``sys.path`` as necessary to use all the source code
and dependent libraries contained in that tree.
If you've installed Tahoe-LAFS (using "``make install``" or
"``python setup.py install``", or by installing a binary package), then the
``tahoe`` executable will be available somewhere else, perhaps in
``/usr/bin/tahoe``. In this case, it will use your platform's normal
PYTHONPATH search path to find the Tahoe-LAFS code and other libraries.
Tahoe-LAFS provides a single executable named "``tahoe``", which can be used
to create and manage client/server nodes, manipulate the filesystem, and
perform several debugging/maintenance tasks. This executable is installed
into your virtualenv when you run ``pip install tahoe-lafs``.
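For example, a minimal sketch (inside an activated virtualenv)::

    pip install tahoe-lafs
    tahoe --help    # summarizes the available subcommands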
CLI Command Overview
@ -145,7 +134,7 @@ Filesystem Manipulation
These commands let you examine a Tahoe-LAFS filesystem, providing basic
list/upload/download/unlink/rename/mkdir functionality. They can be used as
primitives by other scripts. Most of these commands are fairly thin wrappers
around web-API calls, which are described in `<webapi.rst>`__.
around web-API calls, which are described in :doc:`webapi`.
By default, all filesystem-manipulation commands look in ``~/.tahoe/`` to
figure out which Tahoe-LAFS node they should use. When the CLI command makes
@ -161,13 +150,12 @@ they ought to use a starting point. This is explained in more detail below.
Starting Directories
--------------------
As described in `docs/architecture.rst <../architecture.rst>`__, the
Tahoe-LAFS distributed filesystem consists of a collection of directories
and files, each of which has a "read-cap" or a "write-cap" (also known as
a URI). Each directory is simply a table that maps a name to a child file
or directory, and this table is turned into a string and stored in a
mutable file. The whole set of directory and file "nodes" are connected
together into a directed graph.
As described in :doc:`../architecture`, the Tahoe-LAFS distributed filesystem
consists of a collection of directories and files, each of which has a
"read-cap" or a "write-cap" (also known as a URI). Each directory is simply a
table that maps a name to a child file or directory, and this table is turned
into a string and stored in a mutable file. The whole set of directory and
file "nodes" are connected together into a directed graph.
To use this collection of files and directories, you need to choose a
starting point: some specific directory that we will refer to as a


@ -229,8 +229,8 @@ be relinked to a different file. Normally, when the path of an immutable file
is opened for writing by SFTP, the directory entry is relinked to another
file with the newly written contents when the file handle is closed. The old
file is still present on the grid, and any other caps to it will remain
valid. (See `docs/garbage-collection.rst`_ for how to reclaim the space used
by files that are no longer needed.)
valid. (See :doc:`../garbage-collection` for how to reclaim the space used by
files that are no longer needed.)
The 'no-write' metadata field of a directory entry can override this
behaviour. If the 'no-write' field holds a true value, then a permission
@ -247,8 +247,6 @@ directory, that link will become read-only.
If SFTP is used to write to an existing mutable file, it will publish a new
version when the file handle is closed.
.. _docs/garbage-collection.rst: file:../garbage-collection.rst
Known Issues
============


@ -66,12 +66,11 @@ in the upload directory with the same filename. A large file may take some
time to appear, since it is only linked into the directory after the upload
has completed.
The 'Operational Statistics' page linked from the Welcome page shows
counts of the number of files uploaded, the number of change events currently
The 'Operational Statistics' page linked from the Welcome page shows counts
of the number of files uploaded, the number of change events currently
queued, and the number of failed uploads. The 'Recent Uploads and Downloads'
page and the node log_ may be helpful to determine the cause of any failures.
.. _log: ../logging.rst
page and the node :doc:`log<../logging>` may be helpful to determine the
cause of any failures.
Known Issues and Limitations
@ -134,7 +133,7 @@ with an immutable file. (`#1712`_)
If a file in the upload directory is changed (actually relinked to a new
file), then the old file is still present on the grid, and any other caps to
it will remain valid. See `docs/garbage-collection.rst`_ for how to reclaim
it will remain valid. See :doc:`../garbage-collection` for how to reclaim
the space used by files that are no longer needed.
Unicode names are supported, but the local name of a file must be encoded
@ -153,6 +152,3 @@ printed by ``python -c "import sys; print sys.getfilesystemencoding()"``.
.. _`#1710`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1710
.. _`#1711`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1711
.. _`#1712`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1712
.. _docs/garbage-collection.rst: ../garbage-collection.rst


@ -71,7 +71,7 @@ port 3456, on the loopback (127.0.0.1) interface.
Basic Concepts: GET, PUT, DELETE, POST
======================================
As described in `docs/architecture.rst`_, each file and directory in a Tahoe
As described in :doc:`../architecture`, each file and directory in a Tahoe
virtual filesystem is referenced by an identifier that combines the
designation of the object with the authority to do something with it (such as
read or modify the contents). This identifier is called a "read-cap" or
@ -128,7 +128,6 @@ a plain text stack trace instead. If the Accept header contains ``*/*``, or
be generated.
.. _RFC3986: https://tools.ietf.org/html/rfc3986
.. _docs/architecture.rst: ../architecture.rst
URLs
@ -193,12 +192,12 @@ servers is required, /uri should be used.
Child Lookup
------------
Tahoe directories contain named child entries, just like directories in a regular
local filesystem. These child entries, called "dirnodes", consist of a name,
metadata, a write slot, and a read slot. The write and read slots normally contain
a write-cap and read-cap referring to the same object, which can be either a file
or a subdirectory. The write slot may be empty (actually, both may be empty,
but that is unusual).
Tahoe directories contain named child entries, just like directories in a
regular local filesystem. These child entries, called "dirnodes", consist of
a name, metadata, a write slot, and a read slot. The write and read slots
normally contain a write-cap and read-cap referring to the same object, which
can be either a file or a subdirectory. The write slot may be empty
(actually, both may be empty, but that is unusual).
If you have a Tahoe URL that refers to a directory, and want to reference a
named child inside it, just append the child name to the URL. For example, if
@ -352,11 +351,11 @@ Reading a File
The "Range:" header can be used to restrict which portions of the file are
returned (see RFC 2616 section 14.35.1 "Byte Ranges"), however Tahoe only
supports a single "bytes" range and never provides a `multipart/byteranges`
response. An attempt to begin a read past the end of the file will provoke a
416 Requested Range Not Satisfiable error, but normal overruns (reads which
start at the beginning or middle and go beyond the end) are simply
truncated.
supports a single "bytes" range and never provides a
``multipart/byteranges`` response. An attempt to begin a read past the end
of the file will provoke a 416 Requested Range Not Satisfiable error, but
normal overruns (reads which start at the beginning or middle and go beyond
the end) are simply truncated.
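As an illustrative sketch (assuming a node whose web-API listens on the
default ``127.0.0.1:3456``, with ``$FILECAP`` standing in for a real file
cap)::

    # fetch the whole file
    curl "http://127.0.0.1:3456/uri/$FILECAP"

    # fetch only the first kilobyte; Tahoe honours a single "bytes" range
    curl -H "Range: bytes=0-1023" "http://127.0.0.1:3456/uri/$FILECAP"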
To view files in a web browser, you may want more control over the
Content-Type and Content-Disposition headers. Please see the next section
@ -512,7 +511,7 @@ Creating a New Directory
The metadata may have a "no-write" field. If this is set to true in the
metadata of a link, it will not be possible to open that link for writing
via the SFTP frontend; see FTP-and-SFTP.rst_ for details. Also, if the
via the SFTP frontend; see :doc:`FTP-and-SFTP` for details. Also, if the
"no-write" field is set to true in the metadata of a link to a mutable
child, it will cause the link to be diminished to read-only.
@ -671,8 +670,6 @@ Creating a New Directory
This operation will return an error if the parent directory is immutable,
or already has a child named NAME.
.. _FTP-and-SFTP.rst: FTP-and-SFTP.rst
Getting Information About a File Or Directory (as JSON)
-------------------------------------------------------
@ -1060,10 +1057,10 @@ Viewing/Downloading a File
most browsers will refuse to display it inline). "true", "t", "1", and other
case-insensitive equivalents are all treated the same.
Character-set handling in URLs and HTTP headers is a dubious art [1]_. For
maximum compatibility, Tahoe simply copies the bytes from the filename=
argument into the Content-Disposition header's filename= parameter, without
trying to interpret them in any particular way.
Character-set handling in URLs and HTTP headers is a :ref:`dubious
art<urls-and-utf8>`. For maximum compatibility, Tahoe simply copies the
bytes from the filename= argument into the Content-Disposition header's
filename= parameter, without trying to interpret them in any particular way.
``GET /named/$FILECAP/FILENAME``
@ -1313,9 +1310,9 @@ Relinking ("Moving") a Child
is still reachable through any path from which it was formerly reachable,
and the storage space occupied by its ciphertext is not affected.
The source and destination directories must be writeable. If {{{to_dir}}} is
The source and destination directories must be writeable. If ``to_dir`` is
not present, the child link is renamed within the same directory. If
{{{to_name}}} is not present then it defaults to {{{from_name}}}. If the
``to_name`` is not present then it defaults to ``from_name``. If the
destination link (directory and name) is the same as the source link, the
operation has no effect.
@ -1424,7 +1421,7 @@ mainly intended for developers.
True. For distributed files, this dictionary has the following
keys:
count-happiness: the servers-of-happiness level of the file, as
defined in `docs/specifications/servers-of-happiness.rst`_.
defined in docs/specifications/servers-of-happiness.
count-shares-good: the number of good shares that were found
count-shares-needed: 'k', the number of shares required for recovery
count-shares-expected: 'N', the number of total shares generated
@ -1470,10 +1467,9 @@ mainly intended for developers.
'seq%d-%s-sh%d', containing the sequence number, the
roothash, and the share number.
Before Tahoe-LAFS v1.11, the `results` dictionary also had a `needs-rebalancing`
field, but that has been removed since it was computed incorrectly.
.. _`docs/specifications/servers-of-happiness.rst`: ../specifications/servers-of-happiness.rst
Before Tahoe-LAFS v1.11, the ``results`` dictionary also had a
``needs-rebalancing`` field, but that has been removed since it was computed
incorrectly.
``POST $URL?t=start-deep-check`` (must add &ophandle=XYZ)
@ -2085,9 +2081,7 @@ Tahoe-1.1; back with Tahoe-1.0 the web client was responsible for serializing
web requests themselves).
For more details, please see the "Consistency vs Availability" and "The Prime
Coordination Directive" sections of mutable.rst_.
.. _mutable.rst: ../specifications/mutable.rst
Coordination Directive" sections of :doc:`../specifications/mutable`.
Access Blacklist
@ -2134,8 +2128,10 @@ file/dir will bypass the blacklist.
The node will log the SI of the file being blocked, and the reason code, into
the ``logs/twistd.log`` file.
URLs and HTTP and UTF-8
=======================
.. [1] URLs and HTTP and UTF-8, Oh My
.. _urls-and-utf8:
HTTP does not provide a mechanism to specify the character set used to
encode non-ASCII names in URLs (`RFC3986#2.1`_). We prefer the convention


@ -30,7 +30,11 @@ next renewal pass.
There are several tradeoffs to be considered when choosing the renewal timer
and the lease duration, and there is no single optimal pair of values. See
the lease-tradeoffs.svg_ diagram to get an idea for the tradeoffs involved.
the following diagram to get an idea of the tradeoffs involved:
.. image:: lease-tradeoffs.svg
If lease renewal occurs quickly and with 100% reliability, than any renewal
time that is shorter than the lease duration will suffice, but a larger ratio
of duration-over-renewal-time will be more robust in the face of occasional
@ -46,7 +50,6 @@ processed) to something other than 31 days.
Renewing leases can be expected to take about one second per file/directory,
depending upon the number of servers and the network speeds involved.
.. _lease-tradeoffs.svg: lease-tradeoffs.svg
Client-side Renewal


@ -13,14 +13,12 @@ Overview
========
As described in the "Swarming Download, Trickling Upload" section of
`architecture.rst`_, Tahoe uploads require more bandwidth than downloads: you
:doc:`architecture`, Tahoe uploads require more bandwidth than downloads: you
must push the redundant shares during upload, but you do not need to retrieve
them during download. With the default 3-of-10 encoding parameters, this
means that an upload will require about 3.3x the traffic as a download of the
same file.
.. _architecture.rst: architecture.rst
Unfortunately, this "expansion penalty" occurs in the same upstream direction
that most consumer DSL lines are slow anyways. Typical ADSL lines get 8 times
as much download capacity as upload capacity. When the ADSL upstream penalty
@ -132,9 +130,7 @@ Who should consider using a Helper?
To take advantage of somebody else's Helper, take the helper furl that they
give you, and edit your tahoe.cfg file. Enter the helper's furl into the
value of the key "helper.furl" in the "[client]" section of tahoe.cfg, as
described in the "Client Configuration" section of configuration.rst_.
.. _configuration.rst: configuration.rst
described in the "Client Configuration" section of :doc:`configuration`.
Then restart the node. This will signal the client to try and connect to the
helper. Subsequent uploads will use the helper rather than using direct


@ -46,6 +46,12 @@ Contents:
OS-X
build/build-pyOpenSSL
specifications/index
proposed/index
filesystem-notes
historical/configuration
key-value-store
Indices and tables
==================


@ -1,5 +1,9 @@
.. -*- coding: utf-8-with-signature-unix; fill-column: 77 -*-
********************************
Using Tahoe as a key-value store
********************************
There are several ways you could use Tahoe-LAFS as a key-value store.
Looking only at things that are *already implemented*, there are three


@ -1,8 +1,6 @@
.. -*- coding: utf-8-with-signature -*-
See also `cautions.rst`_.
.. _cautions.rst: cautions.rst
See also :doc:`cautions.rst<cautions>`.
============
Known Issues
@ -10,13 +8,12 @@ Known Issues
Below is a list of known issues in recent releases of Tahoe-LAFS, and how to
manage them. The current version of this file can be found at
https://tahoe-lafs.org/source/tahoe-lafs/trunk/docs/known_issues.rst .
https://github.com/tahoe-lafs/tahoe-lafs/blob/master/docs/known_issues.rst .
If you've been using Tahoe-LAFS since v1.1 (released 2008-06-11) or if you're
just curious about what sort of mistakes we've made in the past, then you might
want to read `the "historical known issues" document`_.
.. _the "historical known issues" document: historical/historical_known_issues.txt
just curious about what sort of mistakes we've made in the past, then you
might want to read the "historical known issues" document in
``docs/historical/historical_known_issues.txt``.
Known Issues in Tahoe-LAFS v1.10.2, released 30-Jul-2015
@ -219,9 +216,9 @@ To disable the filter in Chrome:
Known issues in the FTP and SFTP frontends
------------------------------------------
These are documented in `docs/frontends/FTP-and-SFTP.rst`_ and on `the SftpFrontend page`_ on the wiki.
These are documented in :doc:`frontends/FTP-and-SFTP` and on `the
SftpFrontend page`_ on the wiki.
.. _docs/frontends/FTP-and-SFTP.rst: frontends/FTP-and-SFTP.rst
.. _the SftpFrontend page: https://tahoe-lafs.org/trac/tahoe-lafs/wiki/SftpFrontend


@ -13,9 +13,8 @@ Tahoe Logging
1. `Incident Gatherer`_
2. `Log Gatherer`_
6. `Local twistd.log files`_
7. `Adding log messages`_
8. `Log Messages During Unit Tests`_
6. `Adding log messages`_
7. `Log Messages During Unit Tests`_
Overview
========
@ -26,18 +25,11 @@ primarily for use by programmers and grid operators who want to find out what
went wrong.
The Foolscap logging system is documented at
`<http://foolscap.lothar.com/docs/logging.html>`__.
`<https://github.com/warner/foolscap/blob/latest-release/doc/logging.rst>`__.
The Foolscap distribution includes a utility named "``flogtool``" that is
used to get access to many Foolscap logging features. This command only
works when foolscap and its dependencies are installed correctly.
Tahoe-LAFS v1.10.0 and later include a ``tahoe debug flogtool`` command
that can be used even when foolscap is not installed; to use this, prefix
all of the example commands below with ``tahoe debug``.
For earlier versions since Tahoe-LAFS v1.8.2, installing Foolscap v0.6.1
or later and then running ``bin/tahoe @flogtool`` from the root of a
Tahoe-LAFS source distribution may work (but only on Unix, not Windows).
used to get access to many Foolscap logging features. ``flogtool`` should get
installed into the same virtualenv as the ``tahoe`` command.
Realtime Logging
@ -186,7 +178,7 @@ Create the Log Gatherer with the "``flogtool create-gatherer WORKDIR``"
command, and start it with "``tahoe start``". Then copy the contents of the
``log_gatherer.furl`` file it creates into the ``BASEDIR/tahoe.cfg`` file
(under the key ``log_gatherer.furl`` of the section ``[node]``) of all nodes
that should be sending it log events. (See `<configuration.rst>`__.)
that should be sending it log events. (See :doc:`configuration`.)
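A sketch of that setup (``WORKDIR`` is whatever directory you choose for the
gatherer)::

    flogtool create-gatherer WORKDIR
    cd WORKDIR
    tahoe start

    # then, in each node's tahoe.cfg that should publish to the gatherer:
    #   [node]
    #   log_gatherer.furl = <contents of WORKDIR/log_gatherer.furl>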
The "``flogtool filter``" command, described above, is useful to cut down the
potentially large flogfiles into a more focussed form.
@ -198,7 +190,6 @@ the outbound TCP queue), publishing nodes will start dropping log events when
the outbound queue grows too large. When this occurs, there will be gaps
(non-sequential event numbers) in the log-gatherer's flogfiles.
Adding log messages
===================

docs/proposed/index.rst

@ -0,0 +1,19 @@
Proposed Specifications
=======================
This directory is where we hold design notes about upcoming/proposed
features. Usually this is kept in tickets on the `bug tracker`_, but
sometimes we put this directly into the source tree.
.. _bug tracker: https://tahoe-lafs.org/trac
Most of these files are plain text and should be read from a source tree. This
index only lists the files that are in .rst format.
.. toctree::
:maxdepth: 2
leasedb
magic-folder/filesystem-integration
magic-folder/remote-to-local-sync
magic-folder/user-interface-design


@ -132,8 +132,7 @@ shnum) pair. This entry stores the times when the lease was last renewed and
when it is set to expire (if the expiration policy does not force it to
expire earlier), represented as Unix UTC-seconds-since-epoch timestamps.
For more on expiration policy, see `docs/garbage-collection.rst
<../garbage-collection.rst>`__.
For more on expiration policy, see :doc:`../garbage-collection`.
Share states


@ -18,6 +18,8 @@ Objective 2.
.. _otf-magic-folder-objective2: https://tahoe-lafs.org/trac/tahoe-lafs/query?status=!closed&keywords=~otf-magic-folder-objective2
.. _filesystem_integration-local-scanning-and-database:
*Local scanning and database*
When a Magic-Folder-enabled node starts up, it scans all directories


@ -66,9 +66,9 @@ detect filesystem changes, we have no mechanism to register a monitor for
changes to a Tahoe-LAFS directory. Therefore, we must periodically poll
for changes.
An important constraint on the solution is Tahoe-LAFS' "`write
coordination directive`_", which prohibits concurrent writes by different
storage clients to the same mutable object:
An important constraint on the solution is Tahoe-LAFS' ":doc:`write
coordination directive<../../write_coordination>`", which prohibits
concurrent writes by different storage clients to the same mutable object:
Tahoe does not provide locking of mutable files and directories. If
there is more than one simultaneous attempt to change a mutable file
@ -79,8 +79,6 @@ storage clients to the same mutable object:
a time. One convenient way to accomplish this is to make a different
file or directory for each person or process that wants to write.
.. _`write coordination directive`: ../../write_coordination.rst
Since it is a goal to allow multiple users to write to a Magic Folder,
if the write coordination directive remains the same as above, then we
will not be able to implement the Magic Folder as a single Tahoe-LAFS
@ -141,14 +139,12 @@ Here is a summary of advantages and disadvantages of each design:
+-------+--------------------+
123456+: All designs have the property that a recursive add-lease
operation starting from a *collective directory* containing all of
the client DMDs, will find all of the files and directories used in
the Magic Folder representation. Therefore the representation is
compatible with `garbage collection`_, even when a pre-Magic-Folder
client does the lease marking.
.. _`garbage collection`: https://tahoe-lafs.org/trac/tahoe-lafs/browser/trunk/docs/garbage-collection.rst
123456+: All designs have the property that a recursive add-lease operation
starting from a *collective directory* containing all of the client DMDs,
will find all of the files and directories used in the Magic Folder
representation. Therefore the representation is compatible with :doc:`garbage
collection <../../garbage-collection>`, even when a pre-Magic-Folder client
does the lease marking.
123456+: All designs avoid "breaking" pre-Magic-Folder clients that read
a directory or file that is part of the representation.
@ -350,8 +346,9 @@ remote change has been initially classified as an overwrite.
.. _`Fire Dragons`: #fire-dragons-distinguishing-conflicts-from-overwrites
Note that writing a file that does not already have an entry in
the `magic folder db`_ is initially classed as an overwrite.
Note that writing a file that does not already have an entry in the
:ref:`magic folder db<filesystem_integration-local-scanning-and-database>` is
initially classed as an overwrite.
A *write/download collision* occurs when another program writes
to ``foo`` in the local filesystem, concurrently with the new
@ -408,19 +405,18 @@ and Windows. On Unix, it can be implemented as follows:
suppressing errors.
Note that, if there is no conflict, the entry for ``foo``
recorded in the `magic folder db`_ will reflect the ``mtime``
set in step 3. The move operation in step 4b will cause a
``MOVED_FROM`` event for ``foo``, and the link operation in
step 4c will cause an ``IN_CREATE`` event for ``foo``.
However, these events will not trigger an upload, because they
are guaranteed to be processed only after the file replacement
has finished, at which point the last-seen statinfo recorded
in the database entry will exactly match the metadata for the
file's inode on disk. (The two hard links — ``foo`` and, while
it still exists, ``.foo.tmp`` — share the same inode and
therefore the same metadata.)
.. _`magic folder db`: filesystem_integration.rst#local-scanning-and-database
recorded in the :ref:`magic folder
db<filesystem_integration-local-scanning-and-database>` will
reflect the ``mtime`` set in step 3. The move operation in step
4b will cause a ``MOVED_FROM`` event for ``foo``, and the link
operation in step 4c will cause an ``IN_CREATE`` event for
``foo``. However, these events will not trigger an upload,
because they are guaranteed to be processed only after the file
replacement has finished, at which point the last-seen statinfo
recorded in the database entry will exactly match the metadata
for the file's inode on disk. (The two hard links — ``foo``
and, while it still exists, ``.foo.tmp`` — share the same inode
and therefore the same metadata.)
On Windows, file replacement can be implemented by a call to
the `ReplaceFileW`_ API (with the
@ -722,7 +718,9 @@ In order to implement this policy, we need to specify how the
We propose to record this information:
* in the `magic folder db`_, for local files;
* in the :ref:`magic folder
db<filesystem_integration-local-scanning-and-database>`, for
local files;
* in the Tahoe-LAFS directory metadata, for files stored in the
Magic Folder.
@ -733,7 +731,7 @@ client downloads a file, it stores the downloaded version's URI and
the current local timestamp in this record. Since only immutable
files are used, the URI will be an immutable file URI, which is
deterministically and uniquely derived from the file contents and
the Tahoe-LAFS node's `convergence secret`_.
the Tahoe-LAFS node's :doc:`convergence secret<../../convergence-secret>`.
(Note that the last-downloaded record is updated regardless of
whether the download is an overwrite or a conflict. The rationale
@ -741,8 +739,6 @@ for this to avoid "conflict loops" between clients, where every
new version after the first conflict would be considered as another
conflict.)
.. _`convergence secret`: https://tahoe-lafs.org/trac/tahoe-lafs/browser/docs/convergence-secret.rst
Later, in response to a local filesystem change at a given path, the
Magic Folder client reads the last-downloaded record associated with
that path (if any) from the database and then uploads the current


@ -39,11 +39,11 @@ read cap from a read/write cap).
Design Constraints
------------------
The design of the Tahoe-side representation of a Magic Folder, and the polling
mechanism that the Magic Folder clients will use to detect remote changes was
discussed in `<remote-to-local-sync.rst>`_, and we will not revisit that here.
The assumption made by that design was that each client would be configured with
the following information:
The design of the Tahoe-side representation of a Magic Folder, and the
polling mechanism that the Magic Folder clients will use to detect remote
changes was discussed in :doc:`remote-to-local-sync<remote-to-local-sync>`,
and we will not revisit that here. The assumption made by that design was
that each client would be configured with the following information:
* a write cap to its own *client DMD*.
* a read cap to a *collective directory*.
@ -63,12 +63,10 @@ modifying the collective directory in a way that would lose data. This motivates
ensuring that each client only has access to the caps above, rather than, say,
every client having a write cap to the collective directory.
Another important design constraint is that we cannot violate the
`write coordination directive`_; that is, we cannot write to the same mutable
directory from multiple clients, even during the setup phase when adding a
client.
.. _`write coordination directive`: ../../write_coordination.rst
Another important design constraint is that we cannot violate the :doc:`write
coordination directive<../../write_coordination>`; that is, we cannot write to
the same mutable directory from multiple clients, even during the setup phase
when adding a client.
Within these constraints, for usability we want to minimize the number of steps
required to configure a Magic Folder collective.
@ -177,14 +175,14 @@ to add, modify or delete any object within the Magic Folder,
we considered the potential security/reliability improvement
here not to be worth the loss of usability.
We also considered a design where each client had write access
to the collective directory. This would arguably be a more
serious violation of the Principle of Least Authority than the
one above (because all clients would have excess authority rather
than just the inviter). In any case, it was not clear how to make
such a design satisfy the `write coordination directive`_,
because the collective directory would have needed to be written
to by multiple clients.
We also considered a design where each client had write access to
the collective directory. This would arguably be a more serious
violation of the Principle of Least Authority than the one above
(because all clients would have excess authority rather than just
the inviter). In any case, it was not clear how to make such a
design satisfy the :doc:`write coordination
directive<../../write_coordination>`, because the collective
directory would have needed to be written to by multiple clients.
The reliance on a secure channel to send the invitation to its
intended recipient is not ideal, since it may involve additional


@ -10,7 +10,7 @@ Introduction
This is how to run a Tahoe-LAFS client or a complete Tahoe-LAFS grid.
First you have to install the Tahoe-LAFS software, as documented in
INSTALL.rst_.
:doc:`INSTALL`.
The ``tahoe`` program in your virtualenv's ``bin`` directory is used to
create, start, and stop nodes. Each node lives in a separate base
@ -37,7 +37,7 @@ nodes on the grid.
By default, “``tahoe create-client``” creates a client-only node, that
does not offer its disk space to other nodes. To configure other behavior,
use “``tahoe create-node``” or see configuration.rst_.
use “``tahoe create-node``” or see :doc:`configuration`.
To construct an introducer, create a new base directory for it (the
name of the directory is up to you), ``cd`` into it, and run
@ -53,14 +53,12 @@ On Unix, you can run it in the background instead by using the
``tahoe start``” command. To stop a node started in this way, use
``tahoe stop``”. ``tahoe --help`` gives a summary of all commands.
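A quick sketch of those lifecycle commands::

    tahoe create-client    # make a new client-only node
    tahoe start            # run it in the background (Unix)
    tahoe stop             # stop a node started this way
    tahoe --help           # summary of all commands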
See configuration.rst_ for more details about how to configure Tahoe-LAFS,
including how to get other clients to connect to your node if it is behind a
firewall or NAT device.
See :doc:`configuration` for more details about how to configure
Tahoe-LAFS, including how to get other clients to connect to your node if
it is behind a firewall or NAT device.
.. _INSTALL.rst: INSTALL.rst
.. _public test grid: https://tahoe-lafs.org/trac/tahoe-lafs/wiki/TestGrid
.. _TestGrid page: https://tahoe-lafs.org/trac/tahoe-lafs/wiki/TestGrid
.. _configuration.rst: configuration.rst
.. _#937: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/937
@ -68,10 +66,10 @@ A note about small grids
------------------------
By default, Tahoe-LAFS ships with the configuration parameter
``shares.happy`` set to 7. If you are using Tahoe-LAFS on a grid with fewer
than 7 storage nodes, this won't work well for you — none of your uploads
will succeed. To fix this, see configuration.rst_ to learn how to set
``shares.happy`` to a more suitable value for your grid.
``shares.happy`` set to 7. If you are using Tahoe-LAFS on a grid with
fewer than 7 storage nodes, this won't work well for you — none of your
uploads will succeed. To fix this, see :doc:`configuration` to learn how
to set ``shares.happy`` to a more suitable value for your grid.
Do Stuff With It
@ -97,14 +95,15 @@ then you can never again come back to this directory.
The CLI
-------
Prefer the command-line? Run “``tahoe --help``” (the same command-line tool
that is used to start and stop nodes serves to navigate and use the
decentralized filesystem). To get started, create a new directory and mark it
as the 'tahoe:' alias by running “``tahoe create-alias tahoe``”. Once you've
done that, you can do “``tahoe ls tahoe:``” and “``tahoe cp LOCALFILE
tahoe:foo.txt``” to work with your filesystem. The Tahoe-LAFS CLI uses
similar syntax to the well-known scp and rsync tools. See CLI.rst_ for more
details.
Prefer the command-line? Run “``tahoe --help``” (the same command-line
tool that is used to start and stop nodes serves to navigate and use the
decentralized filesystem). To get started, create a new directory and
mark it as the 'tahoe:' alias by running “``tahoe create-alias tahoe``”.
Once you've done that, you can do “``tahoe ls tahoe:``” and “``tahoe cp
LOCALFILE tahoe:foo.txt``” to work with your filesystem. The Tahoe-LAFS
CLI uses similar syntax to the well-known scp and rsync tools. See
:doc:`frontends/CLI` for more details.
To backup a directory full of files and subdirectories, run “``tahoe backup
LOCALDIRECTORY tahoe:``”. This will create a new LAFS subdirectory inside the
@ -124,7 +123,7 @@ and if it gets interrupted (for example by a network outage, or by you
rebooting your computer during the backup, or so on), it will resume right
where it left off the next time you run ``tahoe backup``.
See `<frontends/CLI.rst>`__ for more information about the ``tahoe backup``
See :doc:`frontends/CLI` for more information about the ``tahoe backup``
command, as well as other commands.
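A compact sketch of those commands (``LOCALFILE`` and ``LOCALDIRECTORY`` are
placeholders)::

    tahoe create-alias tahoe              # create a directory, name it 'tahoe:'
    tahoe ls tahoe:                       # list its contents
    tahoe cp LOCALFILE tahoe:foo.txt      # copy a local file into it
    tahoe backup LOCALDIRECTORY tahoe:    # back up a whole local directory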
As with the WUI (and with all current interfaces to Tahoe-LAFS), you
@ -132,24 +131,21 @@ are responsible for remembering directory capabilities yourself. If you
create a new directory and lose the capability to it, then you cannot
access that directory ever again.
.. _CLI.rst: frontends/CLI.rst
The SFTP and FTP frontends
--------------------------
You can access your Tahoe-LAFS grid via any SFTP_ or FTP_ client.
See `FTP-and-SFTP.rst`_ for how to set
this up. On most Unix platforms, you can also use SFTP to plug
Tahoe-LAFS into your computer's local filesystem via ``sshfs``, but see
the `FAQ about performance problems`_.
You can access your Tahoe-LAFS grid via any SFTP_ or FTP_ client. See
:doc:`frontends/FTP-and-SFTP` for how to set this up. On most Unix
platforms, you can also use SFTP to plug Tahoe-LAFS into your computer's
local filesystem via ``sshfs``, but see the `FAQ about performance
problems`_.
The SftpFrontend_ page on the wiki has more information about using SFTP with
Tahoe-LAFS.
.. _SFTP: https://en.wikipedia.org/wiki/SSH_file_transfer_protocol
.. _FTP: https://en.wikipedia.org/wiki/File_Transfer_Protocol
.. _FTP-and-SFTP.rst: frontends/FTP-and-SFTP.rst
.. _FAQ about performance problems: https://tahoe-lafs.org/trac/tahoe-lafs/wiki/FAQ#Q23_FUSE
.. _SftpFrontend: https://tahoe-lafs.org/trac/tahoe-lafs/wiki/SftpFrontend
@ -158,9 +154,7 @@ The WAPI
--------
Want to program your Tahoe-LAFS node to do your bidding? Easy! See
webapi.rst_.
.. _webapi.rst: frontends/webapi.rst
:doc:`frontends/webapi`.
Socialize


@ -102,8 +102,7 @@ just a special way of interpreting the contents of a specific mutable file.
Earlier releases used a "vdrive server": this server was abolished in the
v0.7.0 release.
For details of how mutable files work, please see mutable.rst_ in this
directory.
For details of how mutable files work, please see :doc:`mutable`.
For releases since v0.7.0, we achieve most of our desired properties. The
integrity and availability of dirnodes is equivalent to that of regular
@ -124,15 +123,13 @@ multiple versions of each mutable file, and you might have some shares of
version 1 and other shares of version 2). In extreme cases of simultaneous
update, mutable files might suffer from non-monotonicity.
.. _mutable.rst: mutable.rst
Dirnode secret values
=====================
As mentioned before, dirnodes are simply a special way to interpret the
contents of a mutable file, so the secret keys and capability strings
described in mutable.rst_ are all the same. Each dirnode contains an RSA
described in :doc:`mutable` are all the same. Each dirnode contains an RSA
public/private keypair, and the holder of the "write capability" will be able
to retrieve the private key (as well as the AES encryption key used for the
data itself). The holder of the "read capability" will be able to obtain the


@ -0,0 +1,17 @@
Specifications
==============
This section contains various attempts at writing detailed specifications of
the data formats used by Tahoe.
.. toctree::
:maxdepth: 2
outline
uri
file-encoding
URI-extension
mutable
dirnodes
servers-of-happiness
backends/raic


@ -94,7 +94,7 @@ the same rate.
We've decided to make it opt-in for now: mutable files default to
SDMF format unless explicitly configured to use MDMF, either in ``tahoe.cfg``
(see `<configuration.rst>`__) or in the WUI or CLI command that created a
(see :doc:`../configuration`) or in the WUI or CLI command that created a
new mutable file.
The code can read and modify existing files of either format without user


@ -85,7 +85,7 @@ representation of the size of the data represented by this URI. All base32
encodings are expressed in lower-case, with the trailing '=' signs removed.
For example, the following is a CHK URI, generated from a previous version of
the contents of architecture.rst_::
the contents of :doc:`architecture.rst<../architecture>`::
URI:CHK:ihrbeov7lbvoduupd4qblysj7a:bg5agsdt62jb34hxvxmdsbza6do64f4fg5anxxod2buttbo6udzq:3:10:28733
@ -100,8 +100,6 @@ about the file's contents (except filesize), which improves privacy. The
URI:CHK: prefix really indicates that an immutable file is in use, without
saying anything about how the key was derived.
.. _architecture.rst: ../architecture.rst
LIT URIs
--------
@ -146,7 +144,7 @@ The format of the write-cap for mutable files is::
Where (writekey) is the base32 encoding of the 16-byte AES encryption key
that is used to encrypt the RSA private key, and (fingerprint) is the base32
encoded 32-byte SHA-256 hash of the RSA public key. For more details about
the way these keys are used, please see mutable.rst_.
the way these keys are used, please see :doc:`mutable`.
The format for mutable read-caps is::
@ -162,8 +160,6 @@ Historical note: the "SSK" prefix is a perhaps-inaccurate reference to
"Sub-Space Keys" from the Freenet project, which uses a vaguely similar
structure to provide mutable file access.
.. _mutable.rst: mutable.rst
Directory URIs
==============
@ -171,7 +167,7 @@ Directory URIs
The grid layer provides a mapping from URI to data. To turn this into a graph
of directories and files, the "vdrive" layer (which sits on top of the grid
layer) needs to keep track of "directory nodes", or "dirnodes" for short.
dirnodes.rst_ describes how these work.
:doc:`dirnodes` describes how these work.
Dirnodes are contained inside mutable files, and are thus simply a particular
way to interpret the contents of these files. As a result, a directory
@ -187,8 +183,6 @@ directory) look much like mutable-file read-caps::
Historical note: the "DIR2" prefix is used because the non-distributed
dirnodes in earlier Tahoe releases had already claimed the "DIR" prefix.
.. _dirnodes.rst: dirnodes.rst
Internal Usage of URIs
======================


@ -275,7 +275,7 @@ to the gatherer and offer it a second FURL which points back to the node's
The initial connection is flipped to allow the nodes to live behind NAT
boxes, as long as the stats-gatherer has a reachable IP address.)
.. _Foolscap: http://foolscap.lothar.com/trac
.. _Foolscap: https://foolscap.lothar.com/trac
The stats-gatherer is created in the same fashion as regular tahoe client
nodes and introducer nodes. Choose a base directory for the gatherer to live
@ -307,7 +307,7 @@ keep its FURL consistent). To explicitly control which port it uses, write
the desired portnumber into a file named "portnum" (i.e. $BASEDIR/portnum),
and the next time the gatherer is started, it will start listening on the
given port. The portnum file is actually a "strports specification string",
as described in configuration.rst_.
as described in :doc:`configuration`.
Once running, the stats gatherer will create a standard python "pickle" file
in $BASEDIR/stats.pickle . Once a minute, the gatherer will pull stats
@ -324,8 +324,6 @@ something useful. For example, a tool could sum the
total-disk-available number for the entire grid (however, the "disk watcher"
daemon, in misc/operations_helpers/spacetime/, is better suited for this specific task).
.. _configuration.rst: configuration.rst
Using Munin To Graph Stats Values
=================================


@ -56,7 +56,7 @@ of ``venv\Scripts\tahoe``, you can either "`activate`_" the virtualenv (by
running ``venv\Scripts\activate``), or you can add the Scripts directory to
your ``%PATH%`` environment variable.
Now use the docs in `<running.rst>`_ to learn how to configure your first
Now use the docs in :doc:`running` to learn how to configure your first
Tahoe node.
.. _activate: https://virtualenv.pypa.io/en/latest/userguide.html#activate-script