immutable/downloader/fetcher.py: fix diversity bug in server-response handling
When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the _shares_from_server dict was being popped incorrectly (using shnum as the key instead of serverid). I'm still thinking through the consequences of this bug. It was probably benign and really hard to detect: I think it would cause us to incorrectly believe that we're pulling too many shares from a server, and thus prefer a different server rather than asking for a second share from the first one.

The diversity code is intended to spread out the number of shares being requested from each server at the same time, but with this bug it might have been spreading out the total number of shares ever requested from each server, not just the simultaneous ones. (Note that SegmentFetcher is scoped to a single segment, so the effect doesn't last very long.)
commit a4068dd1e0
parent 9ae026d9f4
@@ -236,7 +236,7 @@ class SegmentFetcher:
         # from all our tracking lists.
         if state in (COMPLETE, CORRUPT, DEAD, BADSEGNUM):
             self._share_observers.pop(share, None)
-            self._shares_from_server.discard(shnum, share)
+            self._shares_from_server.discard(share._server.get_serverid(), share)
             if self._active_share_map.get(shnum) is share:
                 del self._active_share_map[shnum]
             self._overdue_share_map.discard(shnum, share)
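To make the effect concrete, here is a minimal, self-contained sketch of the failure mode. The DictOfSets class, the serverid/shnum values, and the share placeholder are hypothetical stand-ins (not Tahoe's actual dictutil helpers or Share objects); the point is only that discarding with shnum as the key is a silent no-op, so a finished share keeps inflating the per-server count that the diversity heuristic consults.

class DictOfSets(dict):
    """Hypothetical stand-in: maps key -> set of values, with a forgiving discard()."""
    def add(self, key, value):
        self.setdefault(key, set()).add(value)

    def discard(self, key, value):
        # An absent key is silently ignored -- which is why passing shnum
        # (an int that is never a key here) instead of the serverid was
        # benign but left the finished share behind.
        if key in self:
            self[key].discard(value)
            if not self[key]:
                del self[key]


shares_from_server = DictOfSets()
serverid = b"server-1"   # hypothetical server identifier
shnum = 0                # share number: the wrong key used before the fix
share = object()         # stand-in for a Share instance

shares_from_server.add(serverid, share)

# Buggy cleanup: shnum is not a key, so nothing is removed and the server
# still appears to be serving one share.
shares_from_server.discard(shnum, share)
assert len(shares_from_server.get(serverid, ())) == 1

# Fixed cleanup (what the patch does): keyed by serverid, the finished
# share is dropped and the per-server count goes back to zero.
shares_from_server.discard(serverid, share)
assert serverid not in shares_from_server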