allmydata.util.log.err() either takes a Failure as the first positional
argument, or takes no positional arguments and must be invoked in an
exception handler. Fixed its signature to match both foolscap.logging.log.err
and twisted.python.log.err. Included a brief unit test.
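For reference, a minimal sketch of the two call patterns (not the shipped
unit test):

    from twisted.python.failure import Failure
    from allmydata.util import log

    try:
        raise ValueError("example")
    except ValueError:
        log.err(Failure())  # pattern 1: pass a Failure explicitly

    try:
        raise ValueError("example")
    except ValueError:
        log.err()  # pattern 2: no positional arguments; must be inside
                   # an exception handler, which supplies the current
                   # exception implicitly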
Stop checking separately for ConnectionDone/ConnectionLost, since those have
been folded into DeadReferenceError since foolscap-0.3.1. Write
rrefutil.trap_deadref() in terms of rrefutil.trap_and_discard() to improve
code coverage.
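A minimal sketch of the relationship (the real helpers live in
allmydata.util.rrefutil and may differ in detail):

    from foolscap import DeadReferenceError

    def trap_and_discard(f, *errorTypes):
        # if the Failure matches one of errorTypes, swallow it and
        # return None; otherwise f.trap() re-raises it
        f.trap(*errorTypes)

    def trap_deadref(f):
        # expressed in terms of trap_and_discard, so a single helper
        # carries the trapping logic (and the test coverage)
        return trap_and_discard(f, DeadReferenceError)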
Verifier misses
The results (described in #819) match our expectations: it misses corruption
in unused share fields and in most container fields (which are only visible
to the storage server, not the client). 1265 bytes of a 2753 byte
share (hosting a 56-byte file with an artificially small segment size) are
unused, mostly in the unused tail of the overallocated UEB space (765 bytes),
and the allocated-but-unwritten plaintext_hash_tree (480 bytes).
Depending on the versions of external libraries such as Twisted or Foolscap,
the tahoe CLI can display deprecation warnings on stdout. The tests should
not interpret those warnings as a failure if the node is in fact correctly
started.
See http://allmydata.org/trac/tahoe/ticket/859 for an example of deprecation
warnings.
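One tolerant check might look like this (a sketch; the exact success
message the tests look for is not shown here):

    def node_started(cli_output, success_message):
        # deprecation warnings from Twisted or Foolscap may precede or
        # follow the real message, so search the output for it instead
        # of requiring an exact match
        return success_message in cli_output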
fixes #876
The bug was that a disconnected server could cause us to re-enter the initial
loop() call, sending multiple queries to a single server and provoking an
incorrect UCWE or other weird errors. To fix it, stall the loop() with an
eventual.fireEventually(), deferring re-entrant calls to a later reactor
turn. Closes #874 and #786.
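A sketch of the stalling pattern, using Foolscap's eventual-send helper
(the class and method names here are illustrative, not the real ones):

    from foolscap.eventual import fireEventually

    class Publisher(object):
        def loop(self):
            pass  # send queries, examine responses, maybe loop again

        def _server_disconnected(self, failure):
            # Do not call self.loop() directly: a loop() frame may
            # already be on the stack. Queue the next pass for a later
            # turn of the reactor instead.
            d = fireEventually()
            d.addCallback(lambda ignored: self.loop())
            return d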
Previously, if the file had 0 shares, this would raise TypeError as it tried
to call download_version(None). If the file had some shares but fewer than
'k', it would incorrectly raise MustForceRepairError.
Added get_successful() to the IRepairResults API, to give repair() a place to
report non-code-bug problems like this.
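A sketch of how a caller might consume the new accessor (only
get_successful() comes from the text above; the rest is illustrative):

    def _repair_finished(repair_results):
        # get_successful() reports problems like "no shares at all" or
        # "some shares, but fewer than k" without raising an exception
        if not repair_results.get_successful():
            raise RuntimeError("repair failed: file may be unrecoverable")
        return repair_results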
Mutable servermap updates and the immutable checker, when run with
add_lease=True, send both the do-you-have-block and add-lease commands in
parallel, to avoid an extra round trip. Many older servers have problems
with add-lease and raise various exceptions, which don't generally matter.
The client-side code was catching+ignoring some of them, but unrecognized
exceptions were passed through to the DYHB code, concealing the DYHB results
from the checker, making it think the server had no shares.
The fix is to separate the code paths. Both commands are sent at the same
time, but the errback path from add-lease is handled separately. Known
exceptions are ignored, the others (both unknown-remote and all-local) are
logged (log.WEIRD, which will trigger an Incident), but neither will affect
the DYHB results.
The add-lease message is sent first, and we know that the server handles them
synchronously. So when the checker is done, we can be sure that all the
add-lease messages have been retired. This makes life easier for unit tests.
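A sketch of the separated code paths (the remote method names and the
known-exception list here are assumptions, not the exact Tahoe-LAFS code):

    from allmydata.util import log

    def check_server(rref, storage_index, renew_secret, cancel_secret):
        # send add-lease first; its errback chain is private, so a
        # failure can never contaminate the DYHB result
        d_lease = rref.callRemote("add_lease", storage_index,
                                  renew_secret, cancel_secret)
        def _lease_failed(f):
            if f.check(IndexError):
                return None  # known, harmless complaint from older servers
            # unknown remote errors and all local errors: log loudly
            # (triggering an Incident), but swallow the Failure
            log.msg("error during add_lease", failure=f, level=log.WEIRD)
            return None
        d_lease.addErrback(_lease_failed)

        # the DYHB query's result is the only thing the checker uses
        return rref.callRemote("get_buckets", storage_index)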
web/filenode.py: also serve edge metadata when using t=json on a
DIRCAP/childname object.
tahoe_ls.py: list file objects as if we were listing one-entry directories.
Show edge metadata if we have it, which will be true when doing
'tahoe ls DIRCAP/filename' and false when doing 'tahoe ls FILECAP'.
This forbids operations that would implicitly create a directory with a
zero-length (empty string) name, like what you'd get if you did "tahoe put
local /oops/blah" (#358) or "POST /uri/CAP//?t=mkdir" (#676). The error
message is fairly friendly too.
Also added code to "tahoe put" to catch this error beforehand and suggest the
correct syntax (i.e. without the leading slash).
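An illustrative version of the check (not the exact webapi/CLI code):

    def validate_path_segments(path):
        # "/oops/blah".split("/") yields a leading empty string, and
        # "CAP//child" yields an embedded one; both are forbidden
        segments = path.split("/")
        for seg in segments:
            if seg == "":
                raise ValueError("cannot create a directory entry with "
                                 "an empty-string name; remove the "
                                 "extra '/'")
        return segments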
The webapi has been looking for an Accept header since 1.4.0, but it treats a
missing header as equal to */* (to honor RFC2616). This change finally
modifies our CLI tools to ask for "text/plain, application/octet-stream",
which seems roughly correct (we either want a plain-text traceback or error
message, or an uninterpreted chunk of binary data to save to disk). Some day
we'll figure out how JSON fits into this scheme.
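A sketch of the client-side request, assuming httplib-era Python 2 (the
helper name is illustrative):

    import httplib

    def do_http(method, host, port, path, body=""):
        c = httplib.HTTPConnection(host, port)
        # ask for a plain-text traceback/error message, or an
        # uninterpreted chunk of binary data to save to disk
        headers = {"Accept": "text/plain, application/octet-stream"}
        c.request(method, path, body, headers)
        return c.getresponse()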
I've also set up a new flappserver on source@allmydata.org to receive the
tarballs. We still need to replace the gutsy buildslave (which is where the
tarballs used to be generated+uploaded) and give it the new FURL.
I started to update this to reflect the current codebase, but then I thought: (a) nobody seems to have noticed that it hasn't been updated since December 2007, and (b) it will just bit-rot again. So I'm removing it.
Under normal conditions, this wouldn't cause any problems, but if the shares
are really sparse (perhaps because new servers were added), then
file-modifying operations might stop looking too early and leave old shares
in place.
* remove Downloader.download_to_data/download_to_filename/download_to_filehandle
* remove download.Data/FileName/FileHandle targets
* remove filenode.download/download_to_data/download_to_filename methods
* leave Downloader.download (the whole Downloader will go away eventually)
* add util.consumer.MemoryConsumer/download_to_data, for convenience
  (this is mostly used by unit tests, but it gets used by enough non-test
  code to warrant putting it in allmydata.util; see the sketch below)
* update tests
* removes about 180 lines of code. Yay negative code days!
Overall plan is to rewrite immutable/download.py and leave filenode.read() as
the sole read-side API.
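A sketch of the MemoryConsumer/download_to_data pair mentioned in the list
above (the real versions live in allmydata.util.consumer and may differ in
detail):

    from zope.interface import implements
    from twisted.internet.interfaces import IConsumer

    class MemoryConsumer:
        implements(IConsumer)

        def __init__(self):
            self.chunks = []
            self.done = False

        def registerProducer(self, producer, streaming):
            self.producer = producer
            if streaming:
                producer.resumeProducing()  # push producer: it drives us
            else:
                while not self.done:        # pull producer: we drive it
                    producer.resumeProducing()

        def write(self, data):
            self.chunks.append(data)

        def unregisterProducer(self):
            self.done = True

    def download_to_data(filenode):
        # filenode.read() is the sole read-side API mentioned above; it
        # fires with the consumer once the download is complete
        d = filenode.read(MemoryConsumer())
        d.addCallback(lambda mc: "".join(mc.chunks))
        return d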
* backups now share dirnodes with any previous backup, in any location,
so renames and moves are handled very efficiently
* "tahoe backup" no longer bothers reading the previous snapshot
* if you switch grids, you should delete ~/.tahoe/private/backupdb.sqlite,
to force new uploads of all files and directories