This refactors the magic-folder tests to abstract
the whole "do a file operation" step, so we can properly
send fake inotify events (or wait for real ones) to the
uploader/downloader. This speeds up the tests quite
a bit and makes test_alice_bob reasonable again
(about 1.5s instead of over 30s).
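The shape of the new abstraction, as a rough sketch (the class and
method names here are illustrative, not the actual helper):

    class FileOperationsHelper(object):
        # Hypothetical sketch: perform a file operation, then deliver
        # the matching notification to the uploader, either by
        # injecting a fake inotify event or waiting for the real one.
        def __init__(self, uploader, inject_events=True):
            self._uploader = uploader
            self._inject = inject_events

        def write(self, path, contents):
            with open(path, "wb") as f:
                f.write(contents)
            if self._inject:
                # fast path: hand a synthetic event straight in
                return self._uploader.process_event(path)
            # slow path: wait for the real inotify machinery
            return self._uploader.when_noticed(path)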
We no longer need the complexity of choosing the application name at
runtime. This removes the setup.py code which populates the _appname.py
file, and the code in __init__.py which reads it. It does not yet remove
the tests which compare the output of e.g. `tahoe --version` against
`allmydata.__appname__`, which I think could be removed, but that's more
invasive than I want to do right now.
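The end state is just a constant, roughly:

    # allmydata/__init__.py (simplified): the name is now a plain
    # constant rather than being read from a generated _appname.py
    # at import time.
    __appname__ = "tahoe-lafs"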
closes ticket:2754
This doesn't reveal very much information, but it does tell
you whether magic-folder is currently working and, if not,
when the last attempt at a remote scan was made.
Improve error handling for directories when you ask for JSON from
the /uri endpoint but an error occurs: you now get a proper HTTP
status code and a valid JSON object.
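A hypothetical client-side snippet showing what this buys you
(Python 2, matching the codebase of the time; 'gateway' and 'dircap'
stand in for the node's base URL and a directory cap, and the exact
fields of the error object aren't specified here):

    import json
    import urllib2

    try:
        body = urllib2.urlopen(gateway + "/uri/" + dircap + "?t=json")
        info = json.loads(body.read())
    except urllib2.HTTPError as e:
        # the failure now carries a sensible status code *and* a body
        # that parses as JSON, instead of an HTML traceback page
        err = json.loads(e.read())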
For 'tahoe magic-folder status' we now retrieve *all* the remote data
required by the CLI before doing anything else, so that errors can be
shown immediately. Use the improved JSON endpoints to print better
errors.
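The pattern, sketched with Twisted (the two fetch helpers are
hypothetical):

    from twisted.internet.defer import gatherResults, inlineCallbacks

    @inlineCallbacks
    def status(options):
        # fetch everything up front; any failure surfaces here,
        # before a single line of status output has been printed
        results = yield gatherResults([
            fetch_magic_folder_json(options),   # hypothetical helpers
            fetch_collective_json(options),
        ], consumeErrors=True)
        print_status(results)                   # only runs on success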
This keeps retrying the initial magic-folder scan, alerting
the user (via logs only :/), until it succeeds at least once.
After that happens and the node has started up, it will continue
to retry if enough storage servers later go away that the
remote collection can't be retrieved.
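A minimal sketch of that retry loop (the scan method name is
hypothetical):

    from twisted.internet import reactor
    from twisted.internet.task import deferLater
    from twisted.python import log

    def scan_until_success(scanner, delay=10):
        d = scanner.scan_remote_collection()    # hypothetical method
        def _retry(f):
            # "alerting the user" is just this log line for now
            log.msg("remote scan failed, retrying: %s" % (f,))
            return deferLater(reactor, delay, scan_until_success,
                              scanner, delay)
        d.addErrback(_retry)
        return d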
Previously, this file imported "allmydata.immutable" but assumed that
"allmydata.immutable.upload" was available, which only worked if some
other file had already imported upload.py. This didn't affect running
the entire test suite (something imported upload.py before anything
else needed it), but it caused errors when running specific tests like
test_repairer.py.
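The underlying Python rule: importing a package does not import its
submodules unless the package's __init__.py does so explicitly. Hence:

    # fragile: only works if someone else already imported upload.py
    import allmydata.immutable
    # allmydata.immutable.upload   -> AttributeError otherwise

    # robust: import the submodule explicitly
    from allmydata.immutable import upload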
I can't currently test this (my OS X laptop can't run those tests), but
based on how much time test_magic_folder takes on the buildbots, I
expect oneshare=True to help considerably.
This saves more time (as measured on my laptop):
* test_sftp: 17.7s -> 13s
* test_dirnode: 26.5s -> 20s
* test_ftp, test_configutil, test_web show negligible speedups
As before, some tests care about the number of shares, generally ones
which delete or corrupt shares and then expect to see the errors get
noticed or fixed. Those tests continue to use k=3/N=10.
Most of the CLI tests don't care about the actual shares. Configuring
the test client to use k=N=1 reduces the runtime from 180s to 90s on my
laptop.
A few tests *do* care, like test_check (which deletes some shares, then
asserts that 'tahoe check' shows the damage). These still use k=3/N=10.
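For reference, a sketch of the 1/1/1 encoding configuration the test
harness can write into each client (these are the standard tahoe.cfg
option names):

    # the [client] section of a test node's tahoe.cfg; tests that
    # never damage shares can run with a single share
    ONESHARE_CLIENT_CONFIG = (
        "[client]\n"
        "shares.needed = 1\n"
        "shares.happy = 1\n"
        "shares.total = 1\n"
    )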
Many of the test cases would exercise two copies of each file: one with
k=3/N=10, and a second with k=127/N=255 (255 being the maximum supported
by zfec).
A large number of shares increases the overhead of the testing
apparatus, which has to push all those shares to lots of local servers.
I don't think the "max_shares" case is necessary, and it takes forever.
Because of it, "mutable.Update" was consuming 15% of the total test
runtime, and a third of that was just a single
function (test_replace_locations_max_shares, now deleted). On a
Raspberry Pi 3 (our "slow computer" benchmark), including branch
coverage, this one class took 42 minutes to complete and required
disabling a bunch of timeouts to finish at all.
The total number of shares in a file ("N") affects one thing: the
width (and thus height) of the share hash tree. This should be exercised
in test_hashtree.
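To make the relationship concrete (assuming the usual
pad-to-a-power-of-two Merkle layout):

    import math

    def share_hashtree_depth(num_shares):
        # leaves are padded up to the next power of two; depth is the
        # number of edges from the root down to a leaf
        return int(math.ceil(math.log(num_shares, 2)))

    share_hashtree_depth(10)     # -> 4
    share_hashtree_depth(255)    # -> 8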
The number of required shares ("k") affects one thing: the segment size
must be a multiple of k. I don't think we need to exercise this, but if
so, it could be exercised by a few small values for k, rather than 127.
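The rounding in question is just this (sketch):

    def padded_segment_size(segment_size, k):
        # round up to the next multiple of k so each segment splits
        # evenly into k required shares
        return ((segment_size + k - 1) // k) * k

    padded_segment_size(1000, 3)      # -> 1002
    padded_segment_size(1000, 127)    # -> 1016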
Removing the max_shares cases saves 82% of the mutable.Update
runtime (on top of the previous three-segment fix), reducing it from 64s
to 11.3s on my laptop.