* storage server ignores requests to extend shares by sending a new_length
* storage server fills exposed holes (created by sending a write vector whose offset begins after the end of the current data) with 0 to avoid "palimpsest" exposure of previous contents
* storage server zeroes out lease info at the old location when moving it to a new location
ref. #1528
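As a sketch of the hole-filling behavior (the file handling below is illustrative,
not the server's actual share container format): when a write vector starts past
the current end of the data, the gap is filled with zero bytes instead of being
left to expose whatever previously occupied that region.

    def apply_write_vector(f, data_length, offset, data):
        """Apply one write vector to the data region of an open share
        container. `data_length` is the current length of the data region;
        if `offset` starts past it, zero-fill the exposed hole so that
        whatever bytes already sit there (old data, lease info that used to
        live at that location) are never exposed to readers."""
        if offset > data_length:
            f.seek(data_length)
            f.write(b"\x00" * (offset - data_length))
        f.seek(offset)
        f.write(data)
        return max(data_length, offset + len(data))

    # Example: the data region is currently 10 bytes, but the container file
    # still holds stale bytes beyond that; a write at offset 50 zeroes 10..49.
    with open("share.tmp", "w+b") as f:
        f.write(b"0123456789" + b"STALESTALE" * 9)
        new_length = apply_write_vector(f, 10, 50, b"new data")
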
Declare explicitly, in the server's version dict, that we prevent this problem.
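Something like the following in the storage server's version dict (the flag name
follows the style of the actual fix, but treat the exact keys here as assumptions
rather than a spec):

    VERSION = {
        "http://allmydata.org/tahoe/protocols/storage/v1": {
            "maximum-immutable-share-size": 2**32,
            # advertised so clients can tell that this server will never
            # return stale bytes from beyond the end of the written data
            "prevents-read-past-end-of-share-data": True,
        },
        "application-version": "allmydata-tahoe/1.9",
    }
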
fixes #1528 (there are two patches, each of which is a sufficient fix for #1528; this is one of them)
We're removing this function because it is currently unused, because it is dangerous, and because the bug described in #1528 leaks the cancellation secret, which allows anyone who knows a file's storage index to abuse this function to delete shares of that file.
ref. #1528
This ought to close the potential for dropped errors and hanging downloads.
Verify needs to be examined; I may have broken it, although all tests pass.
This check needs to be done with each fetch from the storage server, to
detect when someone has changed the share (i.e. our servermap goes stale).
Doing it just once at the beginning of retrieve isn't enough: a write might
occur after the first segment but before the second, etc.
_try_to_validate_prefix() was not removed: it will be used by the future
check-with-each-fetch code.
test_mutable.Roundtrip.test_corrupt_all_seqnum_late was disabled, since it
fails until this check is brought back. (The corruption it applies touches
only the prefix, not the block data, so the check-less retrieve tolerates
it.) Don't forget to re-enable it once the check is brought back.
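Roughly what the check-with-each-fetch could look like (class and method names
here are illustrative, not the actual Retrieve code): compare the prefix returned
with every block fetch against the version we committed to when the servermap was
built, and abort the retrieve as soon as they diverge.

    class UncoordinatedWriteDetected(Exception):
        pass

    class SegmentFetcher(object):
        def __init__(self, expected_prefix):
            # the prefix (seqnum, root hash, salt, ...) recorded in the
            # servermap for the version we decided to download
            self.expected_prefix = expected_prefix

        def got_block(self, prefix, block):
            # validate on *every* fetch, not just the first: a writer may
            # replace the share between segment N and segment N+1
            if prefix != self.expected_prefix:
                raise UncoordinatedWriteDetected(
                    "share changed underneath us; servermap is stale")
            return block
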
This is a neat trick to reduce Foolscap overhead, but the need for an
explicit flush() complicates the Retrieve path and makes it prone to
lost-progress bugs.
Also change test_mutable.FakeStorageServer to tolerate multiple reads of the
same share in a row, a limitation exposed by turning off the queue.
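For context, a toy model (not Foolscap's API) of why the removed trick was
fragile: reads were only queued until an explicit flush(), so any code path that
forgot to flush left the download waiting on requests that had never been sent.

    class BatchedReader(object):
        """Toy model of the removed optimization: remote read requests are
        queued and only sent when flush() is called, to amortize the
        per-message overhead."""
        def __init__(self, send):
            self._send = send       # callable that actually issues a request
            self._pending = []

        def read(self, offset, length):
            self._pending.append((offset, length))

        def flush(self):
            for req in self._pending:
                self._send(req)
            self._pending = []

    # The hazard: any path that calls read() and then returns (say, after an
    # error) without calling flush() never makes progress again, which is
    # exactly the lost-progress hang described above.
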
Note that the downloader will still fetch a segment for a zero-length
read, which is wasteful. Fixing that isn't specifically required to fix
#1512, but it should probably be fixed before 1.9.
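The obvious shape of that shortcut (hypothetical wrapper, not the actual
downloader code) would be to return immediately when size == 0 instead of
fetching and decoding a segment that will be thrown away:

    from twisted.internet import defer

    class MutableFileReader(object):   # hypothetical wrapper, for illustration
        def __init__(self, start_download):
            self._start_download = start_download

        def read(self, consumer, offset=0, size=None):
            if size == 0:
                # nothing to deliver: skip the segment fetch entirely
                return defer.succeed(consumer)
            return self._start_download(consumer, offset, size)
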
This first step shaves 15% off the runtime: from 139s to 119s on my laptop.
It also fixes a couple of places where a Deferred was being dropped, which
would cause two tests to run in parallel and also confuse error reporting.
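The dropped-Deferred problem is the usual trial pitfall: if a test method builds
a Deferred chain but does not return it, trial considers the test finished
immediately, the next test starts while the first is still running, and failures
get reported against the wrong test (or not at all). A minimal illustration:

    from twisted.trial import unittest
    from twisted.internet import defer

    class Example(unittest.TestCase):
        def test_upload(self):
            d = defer.succeed(None)
            d.addCallback(lambda ign: self._do_upload())
            d.addCallback(self._check_results)
            return d   # without this return, the Deferred is dropped and
                       # trial moves on before the upload has finished

        def _do_upload(self):
            return defer.succeed("fake results")

        def _check_results(self, res):
            self.failUnlessEqual(res, "fake results")
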
This consistently records all immutable uploads in the Recent Uploads And
Downloads page, regardless of code path. Previously, certain webapi upload
operations (like PUT /uri/$DIRCAP/newchildname) failed to pass the History
object and were left out.
publish:
* encrypt and encode times are cumulative, not just current-segment
retrieve:
* same for decrypt and decode times
* update "current status" to include segment number
* set status to Finished/Failed when download is complete
* set progress to 1.0 when complete
More improvements to consider:
* progress is currently 0% or 100%: should calculate how many segments are
involved (remembering that a retrieve can cover less than the whole file)
and set it to a fraction (see the sketch after this list)
* "fetch" time is fuzzy: what we want is to know how much of the delay is not
our own fault, but since we do decode/decrypt work while waiting for more
shares, it's not straightforward
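A sketch of the fractional-progress idea from the first item above (class and
attribute names are illustrative, not the actual RetrieveStatus API): count only
the segments this particular read needs, and report the ratio decoded so far.

    class RetrieveStatus(object):       # illustrative stand-in
        def __init__(self):
            self.progress = 0.0

        def set_progress(self, value):
            self.progress = value

    class SegmentProgress(object):
        def __init__(self, status, first_segment, last_segment):
            # a retrieve may cover only part of the file, so count just the
            # segments this read actually touches
            self._status = status
            self._total = last_segment - first_segment + 1
            self._done = 0

        def segment_decoded(self):
            self._done += 1
            self._status.set_progress(float(self._done) / self._total)
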
The old code was calculating the "extension parameters" (a list) from the
downloader hints (a dictionary) with hints.values(), which is not stable, and
would result in corrupted filecaps (with the 'k' and 'segsize' hints
occasionally swapped). The new code always uses [k,segsize].
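In concrete terms (simplified; the real parameters are serialized into the
filecap string):

    hints = {"segsize": 131072, "k": 3}

    # old, broken: the iteration order of a plain dict is nothing the code
    # can rely on, so the two values could come out in either order,
    # silently corrupting the cap's extension fields
    extension_params = list(hints.values())

    # new, correct: always emit k first, then segsize
    extension_params = [hints["k"], hints["segsize"]]
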
Without this, we get a regression when modifying a mutable file that was
created with more shares (a larger N) than our current tahoe.cfg specifies.
The modification attempt creates new versions of the (0,1,..,newN-1) shares,
but leaves the old versions of the (newN,..,oldN-1) shares alone (and throws
an assertion error in SDMFSlotWriteProxy.finish_publishing in the process).
The mixed versions that result (some shares with e.g. N=10, some with N=20,
such that both versions are recoverable) cause problems for the Publish code,
even before MDMF landed. Might be related to refs #1390 and refs #1042.
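One way to avoid creating that mixed state is to reuse the encoding parameters
the file already has on the grid when modifying it, and only fall back to
tahoe.cfg for brand-new files. A rough sketch (function and argument names are
assumptions, not the actual Publish code):

    def choose_encoding_params(existing_params, configured_params):
        """existing_params is (k, N) taken from the servermap of the file
        being modified, or None for a brand-new file; configured_params is
        (k, N) from tahoe.cfg."""
        if existing_params is not None:
            # republish with the file's original parameters (e.g. N=20), so
            # every old share gets a new version and none are left stale
            return existing_params
        return configured_params

    # e.g. choose_encoding_params((3, 20), (3, 10)) -> (3, 20)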