Kevan Carstensen e29323f68f mutable publish: track multiple servers per share. Fixes some of #1628.
The remaining work is to write additional tests.

src/allmydata/test/no_network.py:

 This supports tests in which servers leave the grid only to return with
 their shares intact at a later time.
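
 A minimal sketch of the behavior this enables, using an invented
 ToyGrid class (the names here are illustrative assumptions, not the
 real no_network API):

    # Toy model of a test grid whose servers can leave and later return
    # with their on-disk shares intact. All names are illustrative.
    class ToyGrid:
        def __init__(self):
            self.servers = {}   # serverid -> server (holding share files)
            self.parked = {}    # removed servers, their shares preserved

        def remove_server(self, serverid):
            # Detach the server from the grid, but keep its share
            # storage around so it can come back later.
            self.parked[serverid] = self.servers.pop(serverid)

        def restore_server(self, serverid):
            # Reattach a previously removed server; its shares are
            # exactly as they were when it left.
            self.servers[serverid] = self.parked.pop(serverid)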

src/allmydata/test/test_mutable.py:

 The UCWEs in the incident reports associated with #1628 all seem to
 involve shares that the servermap knows about, but which aren't
 accounted for during the publish process for some reason. Specifically,
 it looks like the publisher can keep track of only a single storage
 server for a given share. This makes the repair process worse than it
 was pre-MDMF at updating all copies of a particular file's shares to
 the newest version, and can also cause spurious UCWEs. This test
 simulates such a layout and fails if a UCWE is thrown. We still need
 another test (or an extension of this one) to ensure that all copies of
 a share are updated to the latest version, so that the test suite
 doesn't pass unless both regressions are fixed.

 We want the publisher to follow the existing share placement when uploading
 a new version of a mutable file, and we don't want this test to pass unless
 it does.
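
 A runnable toy version of what the test checks, with an invented
 ToyPublisher standing in for the real publisher and grid (only the
 shape of the check mirrors the real test):

    import unittest

    class UncoordinatedWriteError(Exception):
        pass

    class ToyPublisher:
        """Updates every (server, shnum) pair the servermap reports."""
        def __init__(self, servermap):
            self.servermap = servermap   # shnum -> set of serverids
            self.updated = set()         # (serverid, shnum) pairs written

        def publish(self):
            for shnum, servers in self.servermap.items():
                for server in servers:
                    self.updated.add((server, shnum))

    class MultipleServersPerShareTest(unittest.TestCase):
        def test_all_copies_updated_without_ucwe(self):
            # Share 0 lives on three servers; share 1 on one server.
            servermap = {0: {"A", "B", "C"}, 1: {"B"}}
            pub = ToyPublisher(servermap)
            pub.publish()   # must not raise UncoordinatedWriteError
            for shnum, servers in servermap.items():
                for server in servers:
                    self.assertIn((server, shnum), pub.updated)

    if __name__ == "__main__":
        unittest.main()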

src/allmydata/mutable/publish.py:

 Before this commit, the publisher kept track of only a single writer
 for each share. That is insufficient to handle updates in which a
 single share may live on multiple servers. At best, such an update
 refreshes only one of the existing copies of a share instead of all of
 them. In other cases, the update encounters the copies it overlooked
 while publishing some other share, interprets them as a sign of an
 uncoordinated write, and fails. Keeping track of all of the writers
 helps ensure that every existing copy of a share is updated, and helps
 avoid spurious uncoordinated write errors.
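
 As a sketch of the bookkeeping change (the names below are
 illustrative, not the actual publish.py attributes), the writer map
 goes from one writer per share number to a list of writers per share
 number:

    from collections import defaultdict

    # Before: one writer per share number; a second server holding the
    # same share silently replaced the first in the map:
    #     writers[shnum] = writer
    #
    # After: shnum -> all writers, one per (server, share) pair, so a
    # new version is pushed to every existing copy of each share.
    writers = defaultdict(list)

    def add_writer(shnum, writer):
        writers[shnum].append(writer)

    def all_writers():
        # Flatten to the full set of (shnum, writer) pairs to write to.
        return [(shnum, w) for shnum, ws in writers.items() for w in ws]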

==========
Tahoe-LAFS
==========

Tahoe-LAFS is a Free Software/Open Source decentralized data store. It
distributes your filesystem across multiple servers, and even if some of the
servers fail or are taken over by an attacker, the entire filesystem continues
to work correctly and to preserve your privacy and security.

To get started, please see `quickstart.rst`_ in the docs directory.

LICENCE
=======

You may use this package under the GNU General Public License, version 2 or, at
your option, any later version.  You may use this package under the Transitive
Grace Period Public Licence, version 1.0 or, at your option, any later
version.  (You may choose to use this package under the terms of either
licence, at your option.)  See the file `COPYING.GPL`_ for the terms of the
GNU General Public License, version 2.  See the file `COPYING.TGPPL.rst`_ for
the terms of the Transitive Grace Period Public Licence, version 1.0.

See `TGPPL.PDF`_ for why the TGPPL exists, graphically illustrated on three slides.

.. _quickstart.rst: https://tahoe-lafs.org/source/tahoe-lafs/trunk/docs/quickstart.rst
.. _COPYING.GPL: https://tahoe-lafs.org/trac/tahoe-lafs/browser/COPYING.GPL
.. _COPYING.TGPPL.rst: https://tahoe-lafs.org/trac/tahoe-lafs/browser/COPYING.TGPPL.rst
.. _TGPPL.PDF: https://tahoe-lafs.org/~zooko/tgppl.pdf