A minimally-defined static server specifies only server_id,
anonymous-storage-FURL, and permutation-seed-base32. But the WUI Welcome
page wouldn't render (it raised an exception) unless nickname and
version were also defined. This change allows those values to be missing.
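A minimal sketch of the tolerant behavior, assuming the announcement is
held as a plain dict; the class and accessor names here are illustrative,
not necessarily the real ones:

    class StaticServer(object):
        # hypothetical holder for one static server announcement
        def __init__(self, announcement):
            self.announcement = announcement

        def get_nickname(self):
            # tolerate a static server defined without "nickname"
            return self.announcement.get("nickname", u"")

        def get_version(self):
            # tolerate a static server defined without "version"
            return self.announcement.get("version", None)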
This follows the latest comments in ticket:2788, moving the static
server definitions from "connections.yaml" to "servers.yaml". It removes
the "connections" and "introducers" blocks from that file, leaving it
responsible for just static servers (I think connections and introducers
can be configured from tahoe.cfg).
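For reference, a sketch of what a minimal entry in "servers.yaml" could
look like after this change; the exact nesting is an assumption, but the
key names come from the text above:

    storage:
      v0-EXAMPLESERVERID:            # the server_id
        ann:
          anonymous-storage-FURL: pb://TUBID@tcp:storage.example.org:1234/SWISSNUM
          permutation-seed-base32: EXAMPLESEEDBASE32
          # nickname and version may now be omitted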
This feeds all the static server specs to the StorageFarmBroker in a
single call, rather than delivering them as simulated introducer
announcements. It cleans up the way handlers are specified too (the
handler dictionary is ignored, but that will change soon).
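A sketch of the single-call shape, assuming the broker grew a
set_static_servers-style method (the method name and the YAML layout are
assumptions):

    import yaml

    def load_static_servers(broker, path="private/servers.yaml"):
        # deliver every static server spec to the StorageFarmBroker at
        # once, instead of synthesizing one fake introducer
        # announcement per server
        with open(path) as f:
            servers = yaml.safe_load(f).get("storage", {})
        broker.set_static_servers(servers)  # assumed method name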
The server_id and key_s are the same thing, but knowing that is the
responsibility of the caller, not NativeStorageServer. Try to normalize
on "server_id" as the spelling. Remove support for a missing key_s, now
that we require V2 introductions.
This is a change I've wanted to make for many years, because when we get
to HTTP-based servers, we won't have tubids for them. What held me back
was that there's code all over the place that uses the serverid for
various purposes, so I wasn't sure it was safe. I did a big push a few
years ago to use IServer instances instead of serverids in most
places (in #1363), and to split out the values that actually depend upon
tubid into separate accessors (like get_lease_seed and
get_foolscap_write_enabler_seed), which I think took care of all the
important uses.
There are a number of places that use get_serverid() as a dictionary key
to track shares (Checker results, mutable servermap). I believe these
are happy to use pubkeys instead of tubids: the only thing they do with
get_serverid() is to compare it to other values obtained from
get_serverid(). A few places in the WUI used serverid to compute display
values: these were fixed.
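To make that argument concrete: these maps treat the server id as an
opaque token, so any stable, unique value (tubid or pubkey) works. A
sketch with hypothetical names:

    def build_sharemap(discovered_shares):
        # maps share number -> set of server ids; the ids are opaque:
        # they are only ever compared against other get_serverid()
        # values, never parsed or derived from
        sharemap = {}
        for (server, shnum) in discovered_shares:
            sharemap.setdefault(shnum, set()).add(server.get_serverid())
        return sharemap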
The main trouble was the Helper: it returns a HelperUploadResults (a
Copyable) with a share->server mapping that's keyed by whatever the
Helper's get_serverid() returns. If the uploader and the helper are on
different sides of this change, the Helper could return values that the
uploader won't recognize. This is cosmetic: that mapping is only used to
display the upload results on the "Recent and Active Operations" page.
I've added code to StorageFarmBroker.get_stub_server() to fall back to
tubids when looking up a server, so this should still work correctly
when the uploader is new and the Helper is old. If the Helper is new and
the uploader is old, the upload results will show unusual server ids.
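A sketch of that fallback; get_stub_server() is named above, but the
internals here are assumptions:

    def get_stub_server(self, serverid):
        # normal case: serverid is the pubkey-based server_id
        if serverid in self.servers:
            return self.servers[serverid]
        # compatibility case: an old Helper keyed its share->server
        # mapping by tubid, so try matching on that too
        for s in self.servers.values():
            if s.get_tubid() == serverid:  # assumed accessor
                return s
        # unknown: return a stub that just remembers the id for display
        return StubServer(serverid)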
refs ticket:1363
This re-factors the magic-folder tests to abstract the whole "do a
file operation" sequence, so we can properly send fake (or wait for
real) inotify events to the uploader/downloader. This speeds up the
tests quite a bit and makes test_alice_bob reasonable again (about
1.5s instead of over 30s).
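A sketch of the shape of that abstraction (all names hypothetical):
each test performs its file operation through one helper, which either
injects a fake inotify event immediately or waits for the real one:

    from twisted.internet import defer

    @defer.inlineCallbacks
    def do_file_operation(self, relpath, content):
        # hypothetical helper: every test funnels file operations
        # through here, so event delivery is handled in one place
        path = self.magic_path.child(relpath)
        path.setContent(content)
        if self.use_fake_events:
            # push a synthetic inotify event straight at the uploader
            self.notify_uploader(path)
        else:
            # wait for the real inotify event to be processed
            yield self.wait_for_upload(relpath)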
We no longer need the complexity of choosing the application name at
runtime. This removes the setup.py code which populates the _appname.py
file, and the code in __init__.py which reads it. It does not yet remove
the tests which compare the output of e.g. `tahoe --version` against
`allmydata.__appname__`, which I think could be removed, but that's more
invasive than I want to do right now.
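After this change the name can simply be hard-coded; a sketch of the
resulting __init__.py line (assuming "tahoe-lafs" is the canonical
name):

    # allmydata/__init__.py: a constant, instead of reading a
    # generated _appname.py at import time
    __appname__ = "tahoe-lafs"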
closes ticket:2754
This doesn't reveal very much information, but it does tell you
whether magic-folder is currently working; if not, it indicates when
the last attempt at a remote scan was made.
Improve error-handling for directories: if you ask for JSON from the
/uri endpoint but an error occurs, you now get a proper HTTP status
code and a valid JSON object.
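A sketch of the improved failure path, using Twisted's request API; the
JSON error shape is an assumption:

    import json

    def render_json_error(request, code, message):
        # return a real HTTP error code *and* a valid JSON body,
        # instead of an HTML traceback page
        request.setResponseCode(code)
        request.setHeader("content-type", "application/json")
        return json.dumps({"error": message})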
For 'tahoe magic-folder status' we now retrieve *all* the remote data
required by the CLI before doing anything else, so that errors can be
shown immediately. Use the improved JSON endpoints to print better
errors.
Previously, this file imported "allmydata.immutable" but assumed that
"allmydata.immutable.upload" was available, which only worked if some
other file had already imported upload.py. This didn't affect running
the entire test suite (something imported upload.py before anything
else needed it), but caused errors when running specific tests like
test_repairer.py.
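The fix is the usual one for this class of bug: import the submodule
explicitly instead of relying on somebody else's import as a side
effect. Roughly:

    # before: only works if another module already imported upload.py;
    # allmydata.immutable.upload raises AttributeError when run alone
    import allmydata.immutable

    # after: the dependency is explicit
    from allmydata.immutable import upload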
I can't currently test this (my OS X laptop can't run those tests), but
based on how much time test_magic_folder takes on the buildbots, I
expect oneshare=True to help considerably.
This saves more time (as measured on my laptop):
* test_sftp: 17.7s -> 13s
* test_dirnode: 26.5s -> 20s
* test_ftp, test_configutil, test_web show negligible speedups
As before, some tests care about the number of shares, generally ones
which delete or corrupt shares and then expect to see the errors get
noticed or fixed. Those tests continue to use k=3/N=10.
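A sketch of how a test opts in, assuming the grid-setup helper grew a
oneshare= knob (the flag name comes from the text above; the call site
is an assumption):

    from twisted.trial import unittest
    from allmydata.test.no_network import GridTestMixin

    class Speedy(GridTestMixin, unittest.TestCase):
        def setUp(self):
            GridTestMixin.setUp(self)
            # this test never inspects shares directly, so one share
            # per file is enough (assumed knob)
            self.set_up_grid(num_servers=1, oneshare=True)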
Most of the CLI tests don't care about the actual shares. Configuring
the test client to use k=N=1 reduces the runtime from 180s to 90s on my
laptop.
A few tests *do* care, like test_check (which deletes some shares, then
asserts that 'tahoe check' shows the damage). These still use k=3/N=10.
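For the client-side configuration, the encoding parameters live in
tahoe.cfg; k=N=1 for the tests that don't care looks like this (these
are real tahoe.cfg option names):

    [client]
    shares.needed = 1
    shares.happy = 1
    shares.total = 1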
Many of the test cases would exercise two copies of each file: one with
k=3/N=10, and a second with k=127/N=255 (255 being the maximum supported
by zfec).
A large number of shares increases the overhead of the testing
apparatus, which has to push all of those shares to lots of local
servers.
I don't think the "max_shares" case is necessary, and it takes forever.
Because of it, "mutable.Update" was consuming 15% of the total test
runtime, and a third of that was just a single
function (test_replace_locations_max_shares, now deleted). On a
Raspberry Pi 3 (our "slow computer" benchmark), including branch
coverage, this one class took 42 minutes to complete and required
disabling a bunch of timeouts to finish at all.
The total number of shares in a file ("N") affects one thing: the
width (and thus height) of the share hash tree. This should be exercised
in test_hashtree.
The number of required shares ("k") affects one thing: the segment size
must be a multiple of k. I don't think we need to exercise this, but if
so, it could be exercised by a few small values for k, rather than 127.
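To make both claims concrete, a small sketch of the arithmetic,
assuming the share hash tree pads its leaf count to a power of two (as
allmydata.hashtree does):

    import math

    def share_hashtree_height(N):
        # N shares -> leaf count padded to the next power of two; the
        # height (and thus proof size) grows only with log2(N)
        leaves = 1 if N <= 1 else 1 << (N - 1).bit_length()
        return int(math.log2(leaves))

    def round_segment_size(segsize, k):
        # each segment is split into k blocks, so the segment size is
        # rounded up to the next multiple of k
        return ((segsize + k - 1) // k) * k

    assert share_hashtree_height(10) == 4      # the k=3/N=10 case
    assert share_hashtree_height(255) == 8     # the max_shares case
    assert round_segment_size(131072, 3) == 131073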
Removing the max_shares cases saves 82% of the mutable.update
runtime (on top of the previous three-segment fix), reducing it from 64s
to 11.3s on my laptop.
* uses @inlineCallbacks to turn the _lazy_tail recursion into
a "real"-looking loop (see the sketch below);
* removes the need for "immediate" vs. delayed iteration of said loop;
* makes it easier for the unit-tests to control the behavior of the
uploader/downloader;
* consolidates (some) setup/teardown code into the setUp and tearDown
hooks provided by unittest, so unit-tests aren't doing that themselves;
* re-factors some of the unit-tests to use an @inlineCallbacks style,
so they're easier to follow and debug.
This doesn't tackle the "how to know when our inotify events have arrived"
problem the unit-tests still have, nor does it eliminate the myriad bits
of state that get added to tests via all the MixIns.
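A sketch of the loop shape from the first bullet; aside from
@inlineCallbacks itself, the names here are hypothetical:

    from twisted.internet import defer

    @defer.inlineCallbacks
    def _processing_loop(self):
        # replaces the _lazy_tail style, where each iteration chained
        # a fresh Deferred onto the previous one
        while self._running:
            item = yield self._get_next_item()   # hypothetical helper
            if item is None:
                yield self._wait_for_wakeup()    # hypothetical helper
                continue
            yield self._process(item)            # per-item work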
Adds:
- a JSON endpoint
- CLI to display information
- QueuedItem + IQueuedItem for uploader/downloader
- IProgress interface + PercentProgress implementation (sketched below)
- progress= args to many upload/download APIs
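A sketch of a plausible shape for IProgress and PercentProgress; only
the class names come from the list above, the methods are assumptions:

    from zope.interface import Interface, implementer

    class IProgress(Interface):
        def set_progress_total(size):
            """Record the total number of bytes we expect to move."""
        def set_progress(value):
            """Record progress so far, as a percentage 0.0-100.0."""

    @implementer(IProgress)
    class PercentProgress(object):
        def __init__(self, total=None):
            self.progress = 0.0
            self._total = total

        def set_progress_total(self, size):
            self._total = size

        def set_progress(self, value):
            self.progress = float(value)

A progress= argument then lets a caller hand one of these down into an
upload/download call and poll .progress for display.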