'serverid' is the pubkey (for V2 clients), falling back to the tubid (for V1
clients). This also required cleaning up the way the index is created for the
old V1 introducer.
This significantly cleans up the IntroducerServer web-status renderers.
Instead of poking around in the introducer's internals, now the web-status
renderers get clean AnnouncementDescriptor and SubscriberDescriptor
objects. They are still somewhat foolscap-centric, but will provide a clean
abstraction boundary for future improvements.
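As a rough illustration of the new boundary (the attribute names below are
assumptions, not the exact fields of the real classes), the renderers now
consume plain value objects instead of poking at live introducer state:

    # minimal sketch; the real AnnouncementDescriptor carries more fields
    class AnnouncementDescriptor(object):
        def __init__(self, serverid, nickname, service_name, connection_hint):
            self.serverid = serverid         # pubkey (V2) or tubid (V1)
            self.nickname = nickname
            self.service_name = service_name
            self.connection_hint = connection_hint

    def render_announcement_row(ad):
        # the renderer reads attributes only, never introducer internals
        return "<tr><td>%s</td><td>%s</td></tr>" % (ad.nickname, ad.serverid)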
The specific #1721 bug was that old (V1) subscribers were handled by
wrapping their RemoteReference in a special WrapV1SubscriberInV2Interface
object, but the web-status display was trying to peek inside the object to
learn what host+port it was associated with, and the wrapper did not proxy
those extra attributes.
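In miniature (the real wrapper implements more of the V2 interface than
shown), the failure and one generic way to avoid it:

    class WrapV1SubscriberInV2Interface(object):
        def __init__(self, original):
            self.original = original  # the V1 subscriber's RemoteReference
        # ... the V2 methods are forwarded explicitly, but the host+port
        # attributes the web-status peeked at were not, so the lookup
        # raised AttributeError. Proxying unknown attribute lookups
        # through to the wrapped object is one generic fix:
        def __getattr__(self, name):
            return getattr(self.original, name)

The changeset instead takes the cleaner route described above: the renderers
are handed SubscriberDescriptor objects, so they never touch the wrapper at
all.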
A test was added to test_introducer to make sure the introweb page renders
properly and at least contains the nicknames of both the V1 and V2 clients.
This was a premature feature addition to the mock filenode, and gets in the
way of the IServer refactoring I'm trying to do. Best to remove it now and
re-introduce it in a better form later when it's actually needed.
This avoids the name collision between the actual results
objects (defined in allmydata.check_results) and the code that renders
these objects into HTML (defined in allmydata.web.check_results). Only
the web-side objects were renamed.
This fixes bug #1689. Repair was using MODE_READ to build the servermap,
which doesn't try hard enough to grab the privkey, and also doesn't guarantee
sending queries to all servers. This patch adds a new MODE_REPAIR which does
both, and does a separate, distinct mapupdate to start the repair cycle,
instead of relying upon the (MODE_CHECK) mapupdate leftover from the
filecheck that triggered the repair.
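In outline (the import path and helper names are assumptions; only the
MODE_* constants are named by this patch):

    from allmydata.mutable.common import MODE_REPAIR

    def start_repair(node):
        # a fresh mapupdate in MODE_REPAIR queries all servers and tries
        # hard to fetch the privkey, rather than reusing the MODE_CHECK
        # servermap left over from the triggering filecheck
        d = node._update_servermap(mode=MODE_REPAIR)
        d.addCallback(lambda servermap: _do_repair(node, servermap))
        return d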
SystemTest has a couple of different phases, separated by a poller which
waits for everything to be idle (all messages delivered, none in flight). It
does this by watching some internal "_debug_outstanding" counters in the
server and in each client, and waiting for them to hit zero.
Just before the last phase, we replace the server with a new one (to make
sure clients re-send their messages properly). Unfortunately, the polling
function closed over the variable holding the original server, and didn't see
the replacement. It kept polling the old server, and failed to notice the
outstanding messages for the new server. The last phase of the test (check3)
was started too early, which failed (since some messages had not yet been
delivered), and then exploded in a flurry of dirty-reactor errors (because
some messages were delivered after test shutdown).
This replaces the closed-over variable with a "self.the_introducer"
attribute, which seems to fix the race.
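The bug, reduced to its essentials (names simplified; make_introducer is a
hypothetical stand-in):

    from twisted.trial import unittest

    class SystemTest(unittest.TestCase):
        def setup_pollers(self):
            # before: the poll function closed over the original server,
            # so it kept polling the old object after the swap below
            introducer = self.make_introducer()
            self.poll_old = lambda: introducer._debug_outstanding == 0

            # after: indirect through an attribute, so replacement is seen
            self.the_introducer = self.make_introducer()
            self.poll_new = lambda: self.the_introducer._debug_outstanding == 0

        def replace_server(self):
            self.the_introducer = self.make_introducer()  # poll_new sees this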
One additional place to look at in the future: the client
announcement-receive path (remote_announce) uses an eventually(). If the
message has been received and the eventual-send posted (but not yet executed)
when the poller sees it, the poller might erroneously conclude that the
client is idle and cause the same problem as above. To fix this, the poller
(probably all pollers) could be enhanced to do a flushEventualQueue before
querying the are-we-done-yet predicate function.
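A sketch of that enhancement, using foolscap's real flushEventualQueue (the
idleness predicate is hypothetical):

    from foolscap.eventual import flushEventualQueue

    def check_idle():
        # drain posted-but-not-yet-executed eventual-sends first, so a
        # remote_announce waiting in the queue cannot masquerade as idleness
        d = flushEventualQueue()
        d.addCallback(lambda ign: all_outstanding_counters_are_zero())
        return d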
This still leaves immutable-publish results incorrectly using tubids instead
of serverids. That will need some more work, since it might change the Helper
interface.
This introduces new client and server halves to the Introducer (renaming the
old one with a _V1 suffix). Both have fallbacks to accommodate talking to a
different version: the publishing client switches on whether the server's
.get_version() advertises V2 support, the server switches on which
subscription method was invoked by the subscribing client.
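Sketched below; the version-string key is an assumption about the advertised
version dictionary, and the _publish_* helpers are hypothetical:

    V2 = "http://allmydata.org/tahoe/protocols/introducer/v2"  # assumed key

    def publish(self, announcement):
        d = self._introducer_rref.callRemote("get_version")
        def _dispatch(version):
            if V2 in version:
                return self._publish_v2(announcement)  # signed three-tuple
            return self._publish_v1(announcement)      # legacy unsigned form
        d.addCallback(_dispatch)
        return d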
The V2 protocol sends a three-tuple of (serialized announcement dictionary,
signature, pubkey) for each announcement. The V2 server dispatches messages
to subscribers according to the service-name, and throws errors for invalid
signatures, but does not otherwise examine the messages. The V2 receiver's
subscription callback will receive a (serverid, ann_dict) pair. The
'serverid' will be equal to the pubkey if all of the following are true:
* the originating client is V2, and was told a privkey to use
* the announcement went through a V2 server
* the signature is valid
If not, 'serverid' will be equal to the tubid portion of the announced FURL,
as was the case for V1 receivers.
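The receiver-side rule, sketched (signature_is_valid and get_tubid_from_furl
are hypothetical stand-ins for the real helpers):

    import json

    def decide_serverid(ann_t):
        (msg, sig, pubkey) = ann_t               # the V2 three-tuple
        ann = json.loads(msg)
        if pubkey and sig and signature_is_valid(pubkey, sig, msg):
            return pubkey                         # end-to-end V2: use pubkey
        # otherwise behave like a V1 receiver: tubid from the announced FURL
        return get_tubid_from_furl(ann["anonymous-storage-FURL"])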
Servers will create a keypair if one does not exist yet, stored in
private/server.privkey.
The signed announcement dictionary puts the server FURL in a key named
"anonymous-storage-FURL", which anticipates upcoming Accounting-related
changes in the server advertisements. It also provides a key named
"permutation-seed-base32" to tell clients what permutation seed to use. This
is computed at startup, using the tubid if there are existing shares,
otherwise the pubkey, to retain share-order compatibility for existing
servers.
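Roughly, with the share-detection and exact seed bytes simplified (base32 is
the real allmydata.util.base32 module, the rest is a sketch):

    import os
    from allmydata.util import base32

    def compute_permutation_seed(sharedir, tubid, pubkey):
        if os.path.isdir(sharedir) and os.listdir(sharedir):
            # existing shares: keep the old tubid-based order so clients
            # continue to find their shares where they expect them
            return base32.b2a(tubid)
        return base32.b2a(pubkey)  # fresh server: key order off the pubkey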
The "#section" declaration (which matches id="section") should have been
".section" (which matches class="section").
The welcome page has a feature that I actually liked: the little "This
Client" sidebar sits just to the right of the start of the Controls block.
Fixing .section broke that (the clear:both introduces a gap, forcing the
Controls block to start strictly below the bottom of the This Client block).
So I also removed class="section" from the Controls block to allow them to
share the horizontal space again.
The removed assertions are appropriate for a download that seeks to
return plaintext to a caller; if we don't have at least k active remote
shares, then we can't hope to do that. They're not appropriate for a
verification operation; a user can try to verify a file that has fewer
than k shares available, so that shouldn't be treated as an error.
Instead, we proceed with fewer than k shares, and ensure that we
terminate the download if we have no shares at all and we're verifying.
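The resulting guard, in sketch form (the attribute names approximate the
downloader's state rather than quoting it; NotEnoughSharesError is the real
exception from allmydata.interfaces):

    def _check_termination(self):
        if self._verify:
            # verification tolerates fewer than k shares: keep going with
            # whatever remains, but stop once nothing at all is left
            if not self._active_shares and not self._pending_shares:
                self._done()
            return
        # a plaintext-returning download genuinely needs k active shares
        if len(self._active_shares) < self._required_shares:
            self._fail(NotEnoughSharesError("ran out of shares"))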
test_verify_mdmf_all_bad_sharedata tests for the regression described
in ticket #1648. In particular, it will trigger the misplaced assertion
in the share activation code. It also tests to make sure that
verification continues with fewer than k shares.
* replace DeferredList with gatherResults, simplify result handling (see
  the sketch after this list)
* use BadShareError to signal recoverable problems in either fetch or
validate, catch after _validate_block
* _validate_block is thus not responsible for noticing fetch problems
* rename _validation_or_decoding_failed() to _handle_bad_share()
* _get_needed_hashes() returns two Deferreds, instead of a hard-to-unpack
DeferredList
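For the DeferredList-to-gatherResults change above, a minimal before/after
(the two Deferreds are stand-ins for the real block and hash fetches):

    from twisted.internet import defer

    d_block = defer.succeed("blockdata")    # stand-in fetch results
    d_hashes = defer.succeed(["h1", "h2"])

    # DeferredList fires with [(success, value), ...] pairs that every
    # callback must unpack and error-check by hand; gatherResults fires
    # with a plain [block, hashes] list, and a failure in either Deferred
    # flows straight into the errback chain
    d = defer.gatherResults([d_block, d_hashes])
    def _got(results):
        block, hashes = results
        return (block, hashes)
    d.addCallback(_got)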