This is all minor stuff: unreachable debug code (which should be commented
out rather than left in an 'if False:' block), unnecessary 'pass' and 'global'
statements, and redundantly-initialized variables. No behavior changes. Nothing
here was actually broken; it just looked suspicious to the static analysis at
https://lgtm.com/projects/g/tahoe-lafs/tahoe-lafs/alerts/?mode=list .
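As a rough before/after illustration of the patterns involved (the function
below is invented for illustration, not taken from the Tahoe-LAFS codebase):

    def lookup_before(table, key):
        global _unused              # unnecessary 'global': nothing is assigned
        result = None               # redundantly initialized; overwritten below
        result = table.get(key)
        if False:
            print("DEBUG: looked up", key)   # unreachable debug code
        pass                        # unnecessary 'pass'
        return result

    def lookup_after(table, key):
        # same behavior; the debug line is commented out instead of dead
        # print("DEBUG: looked up", key)
        return table.get(key)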
This should tolerate offset/size combinations that read the last byte of
the file, which was previously broken. It quits early in the case
of zero-byte reads, to simplify the resulting "which segments do I need"
logic. Probably addresses ticket:2459.
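A small sketch of the boundary arithmetic, assuming fixed-size segments; the
function and parameter names here are invented for illustration:

    def needed_segments(offset, size, segment_size, file_size):
        # Zero-byte reads need no segments at all; bailing out here keeps
        # the arithmetic below from having to special-case them.
        if size == 0:
            return range(0)
        assert 0 <= offset and offset + size <= file_size
        first = offset // segment_size
        # The last byte touched is offset + size - 1.  Using offset + size
        # instead would demand a segment past the end of the file whenever
        # the read ends exactly at the last byte.
        last = (offset + size - 1) // segment_size
        return range(first, last + 1)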
When calculating the query boundary for updates to mutable files,
instead of using servers that used to have shares, use servers we
have added to the servermap. This way the querying process won't finish
until we have finished interacting with the servers that have shares.
This fixes the race condition which sometimes caused the querying process
to finish before the updater was done talking to servers with shares.
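A rough sketch of the changed completion test, with invented names (the real
logic lives in the mutable-file update path):

    def update_is_done(servers_added_to_servermap, servers_responded):
        # The update finishes only when every server we actually added to
        # the servermap has answered, not merely the servers that used to
        # hold shares; this closes the race described above.
        outstanding = set(servers_added_to_servermap) - set(servers_responded)
        return not outstanding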
We also change the order of setting up attributes in Retrieve so that _raise_notenoughshareserror() can be called from _setup_download.
Signed-off-by: Daira Hopwood <david-sarah@jacaranda.org>
A simple refactoring. Doesn't even require a new or updated unit test.
Author: Zooko O'Whielacronx <zooko@zooko.com>
Signed-off-by: Daira Hopwood <david-sarah@jacaranda.org>
This contains several merged patches. Individual messages follow, latest first:
* Fix a warning from check-miscaptures.
* In retrieve.py, explicitly test whether a key is in self.servermap.proxies
rather than catching KeyError.
* Added a new comment to the MDMF version of the test I removed, explaining
the removal of the SDMF version.
* Removed test_corrupt_all_block_hash_tree_late, since the entire block_hash_tree
is cached in the servermap for an SDMF file.
* Fixed several tests that require files larger than the servermap cache.
* Remove unused test_response_cache_memory_leak().
* Exercise the cache.
* Test infrastructure for counting cache misses on MDMF files.
* Removed the ResponseCache. Instead, the MDMFSlotReadProxy initialized
by ServerMap is kept around so Retrieve can access it. The ReadProxy
has a cache of the first 1000 bytes initially read from each share by
the ServerMap. We're able to satisfy a number of requests out of this
cache, so roundtrips are reduced from 84 to 60 in test_deepcheck_mdmf.
  It is still unclear under what conditions the cache ends up with fewer
  than 1000 bytes. This also breaks some existing unit tests that depend
  on the inner behavior of ResponseCache.
* The servermap.proxies (a cache of SlotReadProxies) is now keyed
  by (verinfo, serverid, shnum) rather than just (serverid, shnum);
  see the sketch after this list.
* Minor cosmetic changes
* Added a test failure if the number of cache misses is too high.
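As referenced above, a minimal sketch of the caching idea; the class and
function names are illustrative, not the actual MDMFSlotReadProxy API:

    class CachingReadProxy(object):
        # stand-in for a per-share read proxy that remembers the prefix
        # fetched during the servermap update (roughly the first 1000 bytes)
        def __init__(self, prefix_data):
            self._cache = prefix_data

        def read(self, offset, size, fetch_from_server):
            # Serve small reads from the cached prefix; only fall back to a
            # remote round trip when the request extends past what we cached.
            if offset + size <= len(self._cache):
                return self._cache[offset:offset + size]
            return fetch_from_server(offset, size)

    # Proxies are keyed by (verinfo, serverid, shnum) so that a cached prefix
    # from one version of a share can never satisfy a read for another version.
    proxies = {}
    def get_proxy(verinfo, serverid, shnum, prefix_data):
        key = (verinfo, serverid, shnum)
        if key not in proxies:
            proxies[key] = CachingReadProxy(prefix_data)
        return proxies[key]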
Author: Andrew Miller <amiller@dappervision.com>
Signed-off-by: David-Sarah Hopwood <davidsarah@jacaranda.org>
This changes all code which feeds CheckResults(sharemap=) to provide
IServer instances, but CheckResults converts these to old-style
serverids during output, so downstream code doesn't have to change yet.
It adds a temporary get_new_sharemap(), which *does* return IServer
instances, so the immutable repairer can build new CheckResults from an
old one. This will go away when get_sharemap() is updated to return
IServer (and downstream code is updated too).
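A hypothetical sketch of the transitional accessors (the method names follow
the description above; everything else is illustrative):

    class SketchCheckResults(object):
        def __init__(self, sharemap):
            # sharemap maps shnum -> set of IServer-like objects
            self._sharemap = sharemap

        def get_sharemap(self):
            # legacy view: convert each IServer to its old-style serverid,
            # so downstream code doesn't have to change yet
            return dict((shnum, set(s.get_serverid() for s in servers))
                        for (shnum, servers) in self._sharemap.items())

        def get_new_sharemap(self):
            # temporary accessor returning the IServer instances directly,
            # used by the immutable repairer to build a new CheckResults
            return self._sharemap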
i.e. change set_data() to accept lots of parameters, instead of taking
a single dictionary with lots of keys. Also convert all CheckResults
creators to use it.
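As a rough illustration of the shape of that change (the field names below
are examples, not the full signature):

    class SketchResults(object):
        def set_data(self, count_shares_good=None, count_shares_needed=None):
            # each field is an explicit keyword parameter instead of a dict
            # key, so a misspelled name now fails loudly at the call site
            self.count_shares_good = count_shares_good
            self.count_shares_needed = count_shares_needed

    r = SketchResults()
    r.set_data(count_shares_good=9, count_shares_needed=3)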
This fixes bug #1689. Repair was using MODE_READ to build the servermap,
which doesn't try hard enough to grab the privkey, and also doesn't guarantee
sending queries to all servers. This patch adds a new MODE_REPAIR which does
both, and does a separate, distinct mapupdate to start the repair cycle,
instead of relying upon the (MODE_CHECK) mapupdate leftover from the
filecheck that triggered the repair.
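An illustrative sketch of how MODE_REPAIR differs from the existing modes
(the constants' values and these helper functions are invented; only the mode
names come from the description above):

    MODE_CHECK, MODE_READ = "CHECK", "READ"
    MODE_WRITE, MODE_REPAIR = "WRITE", "REPAIR"

    def mapupdate_wants_privkey(mode):
        # MODE_READ doesn't try hard enough to grab the privkey; repair
        # needs it in order to produce new signed shares.
        return mode in (MODE_WRITE, MODE_REPAIR)

    def mapupdate_queries_all_servers(mode):
        # Repair must not miss a server quietly holding an old share.
        return mode in (MODE_CHECK, MODE_REPAIR)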
This still leaves immutable-publish results incorrectly using tubids instead
of serverids. That will need some more work, since it might change the Helper
interface.
The removed assertions are appropriate for a download that seeks to
return plaintext to a caller; if we don't have at least k active remote
shares, then we can't hope to do that. They're not appropriate for a
verification operation; a user can try to verify a file that has fewer
than k shares available, so that shouldn't be treated as an error.
Instead, we proceed with fewer than k shares, and ensure that we
terminate the download if we have no shares at all and we're verifying.
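A minimal sketch of the relaxed precondition, with invented names (the real
check lives inside the Retrieve machinery):

    def should_abort(num_active_shares, k, verifying):
        # A plaintext download genuinely needs at least k distinct shares,
        # but verification can usefully examine however many shares exist;
        # it only gives up once no shares are left at all.
        if verifying:
            return num_active_shares == 0
        return num_active_shares < k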