Squashed all of meejah's commits between
30d68fb499f300a393fa0ced5980229f4bb6efda
and
33c268ed3a8c63a809f4403e307ecc13d848b1ab
on the branch meejah:1382.markberger-rewrite-rebase.6, as per review
* replace "last_details" with "non_connected_statuses" dict
* rename "last_connection_summary" to just "summary"
* for connected servers, show other hints in a tooltip
* for not-yet-connected servers, show all hints in a list
* build the list (in STAN) on the server side, not using IContainer
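A minimal sketch of the idea behind the "non_connected_statuses" dict, assuming it maps each connection hint to a short status string; the helper name and fields here are hypothetical, not the actual web-status code:

    # Hypothetical sketch: build a per-hint status dict for a server we have
    # not connected to yet, so the UI can show every hint in a list.
    def build_non_connected_statuses(connection_hints, last_errors):
        statuses = {}
        for hint in connection_hints:
            statuses[hint] = last_errors.get(hint, "not yet attempted")
        return statuses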
This makes IServer instances responsible for their own network
connections, which will help when we add HTTP-based servers in the
future. The StorageFarmBroker should not care about how the IServer uses
the network; it just provides the announcement (and local config).
As discussed at https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1973 and in
previous pull request #129.
- replace lengthy timestamps with human-readable deltas (e.g. 1h 2m 3s); see the sketch after this list
- replace "announced" column with "Last RX" column
- remove service column (it always said the same thing, "storage")
- fix colspan on 'You are not presently connected' message
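A hedged sketch of the delta formatting (the helper name is hypothetical; the real web-status code may differ):

    # Render an elapsed-seconds value as a compact delta like "1h 2m 3s".
    def abbreviate_delta(seconds):
        seconds = int(seconds)
        hours, seconds = divmod(seconds, 3600)
        minutes, seconds = divmod(seconds, 60)
        parts = []
        if hours:
            parts.append("%dh" % hours)
        if minutes or hours:
            parts.append("%dm" % minutes)
        parts.append("%ds" % seconds)
        return " ".join(parts)

    # abbreviate_delta(3723) -> "1h 2m 3s"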
Previous versions, some with GitHub comments: 3fe9053134, 486dbfc7bd, c89ea62580, 9fabb92486, and bbd8b42a25
Unlike previous attempts, the tests on this one should pass in any timezone.
(But like current master, will fail with Nevow >=0.12...)
Thanks to an anonymous contributor who wrote some of the tests.
DeepResultsBase also has a get_corrupt_shares(), and it is populated
from CheckResults.get_corrupt_shares(). It has been updated too, along
with get_remaining_corrupt_shares().
Remove temporary get_new_corrupt_shares() and
get_new_incompatible_shares().
IDisplayableServer includes just enough functionality to call
.get_name() and friends, which is all that the UploadResults really
need. IServer is a superset that includes actual share-manipulation
methods. StubServer instances provide only IDisplayableServer, while
actual NativeStorageServer instances provide the full IServer interface.
When the Helper sends only a serverid (so we know what to call the server but
nothing else about it, and have no corresponding NativeStorageServer
object to reference) and we still want to store an IDisplayableServer in the
UploadResults, we create a synthetic StubServer "server" and store that
instead.
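A rough sketch of the relationship, assuming zope.interface-style declarations; the real definitions live in allmydata.interfaces and differ in detail:

    from zope.interface import Interface, implementer

    class IDisplayableServer(Interface):
        def get_name():
            """Return a short printable name for this server."""
        def get_longname():
            """Return the full printable name for this server."""

    class IServer(IDisplayableServer):
        def get_rref():
            """Return a remote reference used for actual share manipulation."""

    # StubServer provides only IDisplayableServer: enough for display, no shares.
    @implementer(IDisplayableServer)
    class StubServer(object):
        def __init__(self, serverid):
            self.serverid = serverid
        def get_name(self):
            return self.serverid[:8]
        def get_longname(self):
            return self.serverid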
Complete the getter-based transformation by hiding ".uri" and updating
callers to use get_uri(). Also, don't set a dummy self._uri; leave it
undefined until someone calls set_uri().
This hides attributes behind underscore-prefixed names (e.g. _sharemap) and
creates getters like get_sharemap() to access them, for every field except
.uri. This will make it easier to modify the internal representation of
.sharemap without requiring callers to adjust quite yet.
".uri" has so many users that it seemed better to update it in a
subsequent patch.
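A minimal sketch of the pattern, using field names from the text above (the real UploadResults class is more involved):

    class UploadResults(object):
        def __init__(self):
            self._sharemap = {}
            # note: no dummy self._uri; it stays unset until set_uri() is called

        def get_sharemap(self):
            return self._sharemap

        def set_sharemap(self, sharemap):
            self._sharemap = sharemap

        def set_uri(self, uri):
            self._uri = uri

        def get_uri(self):
            return self._uri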
This measured how long the Helper took to do a filecheck before asking
for ciphertext. The "Contacting Helper" report includes both
existence_check and the client-helper RTT.
For non-overlapping uploads, it was being returned correctly. But when
multiple upload requests overlapped, and the file was not already in the
grid, the filecheck would only run once, and its existence_check time
would be reported for all uploaders (even if they didn't have to wait
for that time). Cleaning that up proved too difficult: the only correct
place to report this time is from the initial remote_upload_chk() call,
but the return value of that is too constrained to accommodate it in the
needs-upload case.
So I'm removing it altogether. Eventually I plan to add a proper
events/times field and record more data, including this check, in a form
that can be drawn on a nice zoomable timeline view.
Old clients talking to a new Helper (which doesn't supply the value)
will tolerate the loss (they'll just display an empty field on the web
view).
This still leaves immutable-publish results incorrectly using tubids instead
of serverids. That will need some more work, since it might change the Helper
interface.
This introduces new client and server halves to the Introducer (renaming the
old one with a _V1 suffix). Both have fallbacks to accommodate talking to a
different version: the publishing client switches on whether the server's
.get_version() advertises V2 support, the server switches on which
subscription method was invoked by the subscribing client.
The V2 protocol sends a three-tuple of (serialized announcement dictionary,
signature, pubkey) for each announcement. The V2 server dispatches messages
to subscribers according to the service-name, and throws errors for invalid
signatures, but does not otherwise examine the messages. The V2 receiver's
subscription callback will receive a (serverid, ann_dict) pair. The
'serverid' will be equal to the pubkey if all of the following are true:
- the originating client is V2, and was told a privkey to use
- the announcement went through a V2 server
- the signature is valid
If not, 'serverid' will be equal to the tubid portion of the announced FURL,
as was the case for V1 receivers.
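A hedged sketch of that receiver-side rule (not the real introducer-client code; the signature check is stubbed out as a boolean, and tubid extraction assumes the usual pb://TUBID@hints/swissnum FURL shape):

    import json

    def choose_serverid(ann_tuple, sig_is_valid):
        ann_json, signature, pubkey = ann_tuple    # the V2 three-tuple
        ann = json.loads(ann_json)
        if pubkey and sig_is_valid:
            serverid = pubkey                      # signed V2 announcement
        else:
            # fall back to the tubid portion of the announced FURL, as V1 did
            furl = ann["anonymous-storage-FURL"]
            serverid = furl.split("://")[1].split("@")[0]
        return (serverid, ann)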
Servers will create a keypair if one does not exist yet, stored in
private/server.privkey.
The signed announcement dictionary puts the server FURL in a key named
"anonymous-storage-FURL", which anticipates upcoming Accounting-related
changes in the server advertisements. It also provides a key named
"permutation-seed-base32" to tell clients what permutation seed to use. This
is computed at startup, using tubid if there are existing shares, otherwise
the pubkey, to retain share-order compatibility for existing servers.
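Illustrative only: a V2 announcement dictionary with the two keys named above (the other fields and values are invented for the example):

    announcement = {
        "service-name": "storage",
        "anonymous-storage-FURL": "pb://TUBID@tcp:example.org:12345/SWISSNUM",
        # tubid-derived if the server already holds shares, else pubkey-derived
        "permutation-seed-base32": "EXAMPLESEEDBASE32",
        "nickname": "example-server",
    }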
* storage server ignores requests to extend shares by sending a new_length
* storage server fills exposed holes (created by sending a write vector whose offset begins after the end of the current data) with zeros, to avoid "palimpsest" exposure of previous contents (see the sketch below)
* storage server zeroes out lease info at the old location when moving it to a new location
ref. #1528
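A sketch of that hole-filling rule (not the real storage-server code): a write whose offset lies past the current end of the data gets the gap padded with zero bytes before the new bytes are spliced in.

    def apply_write_vector(data, offset, new_bytes):
        if offset > len(data):
            data = data + b"\x00" * (offset - len(data))  # fill the exposed hole
        return data[:offset] + new_bytes + data[offset + len(new_bytes):]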
We're removing this function because it is currently unused, because it is dangerous, and because the bug described in #1528 leaks the cancellation secret, which allows anyone who knows a file's storage index to abuse this function to delete shares of that file.
fixes #1528 (there are two patches that are each a sufficient fix for #1528, and this is one of them)
interfaces.py: modified the return type of RIStatsProvider.get_stats to allow for None as a return value
NEWS.rst, stats.py: documentation of change to get_latencies
stats.rst: now documents percentile modification in get_latencies
test_storage.py: test_latencies now expects None in output categories that contain too few samples for the associated percentile to be unambiguously reported.
fixes #1392
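A hedged sketch of the rule described above (not the real stats.py code): a percentile is reported as None when there are too few samples to pick it unambiguously.

    def percentile_or_none(samples, percentile):
        # e.g. the 99th percentile needs >= 100 samples, the 99.9th needs >= 1000
        needed = int(round(1.0 / (1.0 - percentile)))
        if len(samples) < needed:
            return None
        return sorted(samples)[int(percentile * len(samples))]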