The removed assertions are appropriate for a download that seeks to
return plaintext to a caller; if we don't have at least k active remote
shares, then we can't hope to do that. They're not appropriate for a
verification operation; a user can try to verify a file that has fewer
than k shares available, so that shouldn't be treated as an error.
Instead, we proceed with fewer than k shares, and ensure that we
terminate the download if we have no shares at all and we're verifying.
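A minimal sketch of the distinction, with hypothetical names standing
in for the downloader's real attributes:

    def can_continue(active_shares, k, verifying):
        # Hypothetical sketch: a plaintext download needs at least k
        # active shares, while a verify operation proceeds with fewer
        # and gives up only when no shares remain at all.
        if verifying:
            return len(active_shares) > 0
        return len(active_shares) >= k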
test_verify_mdmf_all_bad_sharedata tests for the regression described
in ticket #1648. In particular, it will trigger the misplaced assertion
in the share activation code. It also checks that verification
continues with fewer than k shares.
This uses explicitly enumerated packages= and package_data= arguments to
setup(), rather than relying upon the convenient (but darcs-specific)
functions which would determine these values by asking the revision-control
system.
Note that darcsver is still used, when building from a darcs tree.
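A hedged sketch of the explicit style; the package names below are
illustrative, not the project's actual list:

    from setuptools import setup

    setup(
        name="allmydata-tahoe",
        # enumerated by hand instead of asking the revision-control
        # system at build time
        packages=["allmydata", "allmydata.mutable", "allmydata.test"],
        package_data={"allmydata": ["web/*.xhtml"]},
    )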
* replace DeferredList with gatherResults, simplify result handling
* use BadShareError to signal recoverable problems in either fetch or
validate, catch after _validate_block (see the sketch below)
* _validate_block is thus not responsible for noticing fetch problems
* rename _validation_or_decoding_failed() to _handle_bad_share()
* _get_needed_hashes() returns two Deferreds, instead of a hard-to-unpack
DeferredList
The remaining work is to write additional tests.
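A hedged sketch of the gatherResults/BadShareError pattern; the
function names are stand-ins for the downloader's real methods:

    from twisted.internet import defer

    class BadShareError(Exception):
        # recoverable: this share is bad, but others may still work
        pass

    def validate_block(results):
        block, blockhashes, sharehashes = results
        # hash checks would go here; raise BadShareError on a mismatch
        return block

    def handle_bad_share(f):
        # gatherResults wraps an input failure in FirstError; unwrap
        # it before checking for the recoverable case
        if f.check(defer.FirstError):
            f = f.value.subFailure
        f.trap(BadShareError)
        return None   # drop this share; the download tries another

    def get_validated_block(fetch_d, blockhash_d, sharehash_d):
        # _get_needed_hashes() now returns two Deferreds, so they can
        # be gathered alongside the block fetch with no DeferredList
        # unpacking
        d = defer.gatherResults([fetch_d, blockhash_d, sharehash_d])
        d.addCallback(validate_block)
        # catching after validate_block lets one errback cover
        # recoverable problems from both fetch and validation
        d.addErrback(handle_bad_share)
        return d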
src/allmydata/test/no_network.py:
This supports tests in which servers leave the grid only to return with
their shares intact at a later time.
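A hypothetical usage sketch; the grid-manipulation method names here
are assumptions for illustration, not necessarily the real no_network
API:

    # server leaves the grid, holding its shares
    server = self.g.remove_server(server_id)
    # ... exercise publish/repair while the server is gone ...
    # the same server returns later, shares intact
    self.g.add_server(server_id, server)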
src/allmydata/test/test_mutable.py:
The UCWEs in the incident reports associated with #1628 all seem to
involve shares that the servermap knows about, but which aren't
accounted for during the publish process for whatever reason.
Specifically, it looks like the publisher is only capable of keeping
track of a single storage server for a given share. This makes the
repair process worse than it was pre-MDMF at updating all of the
shares of a particular file to the newest version, and can also cause
spurious UCWEs. This test simulates such a layout (sketched below) and
fails if a UCWE is thrown. We need to write another test that ensures
all copies of a share are updated to the latest version (or alter this
test to do that), so that the test suite doesn't pass unless both
regressions are fixed.
We want the publisher to follow the existing share placement when uploading
a new version of a mutable file, and we don't want this test to pass unless
it does.
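A hypothetical picture of such a layout; the server names are
illustrative:

    # The servermap knows share 0 lives on two servers, but a
    # publisher that remembers only one writer per share number will
    # update just one copy, leaving a stale share 0 behind.
    layout = {
        0: ["server_A", "server_B"],   # doubled share
        1: ["server_C"],
        2: ["server_D"],
    }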
src/allmydata/mutable/publish.py:
Before this commit, the publisher only kept track of a single writer for
each share. This is insufficient to handle updates in which a single share
may live on multiple servers. In the best case, an update will refresh
only one of the existing copies of a share instead of all of them. In
other cases, the update will encounter the existing copies while
publishing some other share, interpret them as a sign of an
uncoordinated update, and fail. Keeping track of all of the writers
helps ensure that every existing copy is updated, and helps avoid
spurious uncoordinated write errors.
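A hedged sketch of the data-structure change; make_writer stands in
for constructing the real write proxy:

    def make_writer(shnum, server):
        return (shnum, server)   # hypothetical stand-in

    shares = [(0, "server_A"), (0, "server_B"), (1, "server_C")]

    # Before: one writer per share number, so server_A's copy of
    # share 0 is silently dropped.
    writers = {}
    for shnum, server in shares:
        writers[shnum] = make_writer(shnum, server)

    # After: shnum -> list of writers, one per server holding a copy,
    # so every copy is updated and sibling copies aren't mistaken for
    # an uncoordinated write.
    writers = {}
    for shnum, server in shares:
        writers.setdefault(shnum, []).append(make_writer(shnum, server))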
Embedded URIs, although documented at
http://docutils.sourceforge.net/docs/ref/rst/restructuredtext.html#embedded-uris
generate messages like this from rst2html --verbose:
quickstart.rst:3: (INFO/1) Duplicate explicit target name: "the tahoe-dev mailing list".
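For reference, the warning comes from repeating a *named* embedded
URI: docutils registers an explicit target per link text, so a second
occurrence collides. An anonymous hyperlink (double trailing
underscore) registers no named target and is safe to repeat; the URL
below is a placeholder:

    `the tahoe-dev mailing list <URL>`_    (named; repeating it
                                            triggers the warning)
    `the tahoe-dev mailing list <URL>`__   (anonymous; safe to repeat)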
This patch also prepends a UTF-8 BOM to the file.
allmydata.__version__ can just be a string; it doesn't need to be an
instance of some fancy NormalizedVersion class. Everything inside
Tahoe uses str(__version__) anyway.
Also add .dev0 when a git tree is dirty.
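A hedged sketch of the dirty-tree suffix idea; the tag format is an
assumption and the real logic lives in the build tooling:

    import subprocess

    def make_version():
        # "git describe --dirty" appends "-dirty" when the working
        # tree has uncommitted changes; translate that into ".dev0"
        desc = subprocess.check_output(
            ["git", "describe", "--tags", "--dirty"]).decode().strip()
        if desc.endswith("-dirty"):
            desc = desc[:-len("-dirty")] + ".dev0"
        return desc   # a plain str; no NormalizedVersion wrapper

    __version__ = make_version()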
Closes #1466