Change the Node/Client startup process to set the Tub location during
`__init__`, rather than after the reactor starts up. This simplifies the
init process considerably, because components (like StorageServer and
IntroducerClient) can make their tub.registerReference() calls
immediately.
This addresses most of the runtime changes called for in ticket:2491,
but not the node-creation changes.
This can be done synchronously because we now know the port number
earlier. This still uses get_local_addresses_sync() (not _async) to do
automatic IP-address detection if the config file didn't set
tub.location or used the special word "AUTO" in it.
The new implementation slightly changes the mapping from tub.location to
the assigned location string. The old code removed all instances of
"AUTO" from the location and then extended the hints with the local
ones (so "hint1:AUTO:hint2" turns into "hint1:hint2:auto1:auto2"). The
new code exactly replaces each "AUTO" with the local hints (so that
example turns into "hint1:auto1:auto2:hint2", and a silly
"hint1:AUTO:AUTO" would turn into "hint1:auto1:auto2:auto1:auto2"). This
is unlikely to affect anybody.
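As a minimal sketch of the new substitution (the function and variable
names here are made up for illustration, not the actual code):

    def expand_location(configured_hints, local_hints):
        # Replace each "AUTO" entry in place with the locally-detected
        # hints, instead of stripping AUTO and appending them at the end.
        expanded = []
        for hint in configured_hints:
            if hint == "AUTO":
                expanded.extend(local_hints)
            else:
                expanded.append(hint)
        return expanded

    # expand_location(["hint1", "AUTO", "hint2"], ["auto1", "auto2"])
    #   -> ["hint1", "auto1", "auto2", "hint2"]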
This is the first step towards making node startup synchronous: the
tub.port is entirely determined (including any TCP port allocation that
might be necessary) before creating the Tub, so the portnumber part of
FURLs can be determined earlier.
This test was depending upon the storage announcement happening *after*
startup, but the upcoming synchronous-Tub-startup change will modify the
ordering. Fix it in both cases by disabling storage in the client being
tested.
This has worked so far because everything waited for the Tub to be
ready. We'll soon be making Tub setup synchronous; once nothing has to
wait for it, the order will matter.
This reverts commit bb7184163e.
We changed test_runner.BinTahoe.run_bintahoe since this commit landed:
the new version can no longer cause the test to be skipped late (we've
gotten rid of the bin/tahoe script entirely, so it's no longer possible
for us to miss it). Hence I think we don't need this unsightly stall any
longer.
This makes it possible for automated-upload tools (like drop-upload and
magic-folders) to be told when there are "enough" servers connected for
uploads to be successful. This should help prevent the pathological case
where the tools attempt to upload files immediately after node
startup (or before the user turns on their wifi connection), and the
node stores all the shares on itself.
This new notification is single-shot and edge-triggered: when it fires,
you know that, at some point in the past, the node *was* connected to at
least $threshold servers. However, you might have lost several
connections since then. The user might turn off wifi after this fires,
causing all connections to be dropped.
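A rough sketch of how a tool might consume this (the names here, like
when_connected_enough() and storage_broker, are stand-ins for this
explanation, not necessarily the real API):

    # Sketch only: "when_connected_enough" is a stand-in name for the new
    # single-shot notification, and "storage_broker" is whatever object
    # exposes it.
    def upload_when_ready(storage_broker, start_uploading, threshold=7):
        d = storage_broker.when_connected_enough(threshold)
        # fires at most once; servers may still disconnect afterwards
        d.addCallback(lambda ign: start_uploading())
        return d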
In the long run, this API will change: clients will receive continuous
notifications about servers coming and going, and tools like
magic-folder should refrain from uploading during periods of
insufficient connection. It might be as simple as checking the size of
the connected server list when a periodic timer goes off, or something
more responsive like an edge-triggered "upload as soon as you can"
observer.
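The periodic-timer version might look something like this sketch (the
broker/query names, the interval, and upload_pending are assumptions
here, not a settled design):

    from twisted.internet.task import LoopingCall

    # Sketch of the "check when a periodic timer goes off" idea.
    def start_polling(storage_broker, upload_pending, threshold=7):
        def maybe_upload():
            if len(storage_broker.get_connected_servers()) >= threshold:
                upload_pending()
        poller = LoopingCall(maybe_upload)
        poller.start(30)  # re-check every 30 seconds
        return poller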
Meejah pointed out that new users might think the encoding parameters
are fixed: something you must pick correctly when you first set up the
node and can never change again, which is kind of anxiety-inducing.
This updates the comment to explain that the encoding
is stored in each filecap, and the tahoe.cfg values are only used for
newly-uploaded files.
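For reference, these are the tahoe.cfg values in question (a sketch;
the numbers shown are the usual defaults, not a recommendation):

    # [client] section of tahoe.cfg: these only affect *newly-uploaded*
    # files; existing filecaps keep the encoding they were created with.
    [client]
    shares.needed = 3
    shares.happy = 7
    shares.total = 10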
Re-indent the blocks for consistency, improve the explanation of
?filename=foo.jpg to match its new location, and use a new-style
reference for the urls-and-utf8 footnote.
• mark "/file/" as a synonym for "/named/" to be deprecated (fixes #1903)
• move the options common to all three forms to the bottom and dedent them
• name the protocol/format "LAFS" and the implementation/client "Tahoe"
• reflow (with fill-column 77)
I set up a Raspberry Pi buildslave (which, on the "raspbian jessie"
image, uses a 32-bit python, and perhaps a 32-bit kernel too). It fails
test_util.TimeFormat.test_format_time_y2038 with a ValueError inside the
call to time.gmtime(). The test was looking for the equality check to
fail instead. I think catching ValueError is the more-correct way to
detect a system with a 32-bit time type.
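The detection boils down to something like this sketch (not the exact
test code):

    import time

    def has_32bit_time():
        # Sketch: on a platform with a 32-bit time type, a post-2038
        # timestamp makes time.gmtime() raise ValueError rather than
        # returning a (possibly wrapped) struct_time.
        try:
            time.gmtime(2**31)   # one second past the 32-bit rollover
        except ValueError:
            return True
        return False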
One of the buildslaves (Ubuntu wily 15.10) has a very old pip-1.5.6,
which doesn't know how to "pip install" a filepath+extra (like
".[test]") unless --editable is also used.
It's convenient to have --editable set anyway (so you can do subsequent
narrow testing without re-running tox, by running ".tox/py27/bin/trial
TESTCASE" or using .tox/py27/bin/activate), so changing the dependency
from ".[test]" to "--editable=.[test]" is the easiest way to work around
that older buildslave. (I could also have upgraded the buildslave to use
a newer pip, but 15.10 is pretty recent and other people will probably
hit this too, so this way it's fixed for everybody).
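In tox.ini terms the change is roughly this (a sketch; the real file
has more in it than shown here):

    # tox.ini sketch: only the deps line is the point
    [testenv]
    deps =
        --editable=.[test]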
refs ticket:2776
With the new Foolscap-0.11.0 (which changed the way connections are
established), I'm seeing DirtyReactorErrors getting thrown by
allmydata.test.test_system.SystemTest.test_filesystem_with_cli_in_subprocess,
on a host that has three IP addresses (one is 127.0.0.1, two is wifi,
three is a VPN). The test itself is getting skipped because bin/tahoe
isn't in the expected place, but by that point, the nodes have already
been launched and have established connections over one of the three
hints (probably 127.0.0.1). The test terminates so quickly that the
connections to the other two addresses have not finished being
abandoned. The extra stall seems to give Foolscap enough time to reap
the cancelled connections and makes the DRT go away.
I think an offline test, or maybe one with a single external IP address,
wouldn't hit this case.
Arbitrary stalls are never very satisfactory, of course. Usually there
is some threshold delay value, below which it fails reliably, above
which it works on my own machine (for now). This one is weird: the
threshold seems to be below the resolution of the system clock. Stalling
for one nanosecond was enough to fix the problem, but using a simple
fireEventually() didn't work.
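The stall itself is just a tiny delayed Deferred, something like this
sketch (the actual test may use a different helper):

    from twisted.internet import reactor
    from twisted.internet.task import deferLater

    def stall(result=None, delay=0.000000001):
        # Pass the result through after a one-nanosecond delay; a plain
        # fireEventually() (no real delay) was not enough.
        return deferLater(reactor, delay, lambda: result)

    # used as: d.addCallback(stall)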
This little-used debugging feature allowed you to SSH or Telnet "into" a
Tahoe node, and get an interactive Read-Eval-Print-Loop (REPL) that
executed inside the context of the running process. The SSH
authentication code used a deprecated feature of Twisted, this code had
no unit-test coverage, and I haven't personally used it in at least 6
years (despite writing it in the first place). Time to go.
Also experiment with a Twisted-style "topfiles/" directory of NEWS
fragments. The idea is that we require all user-visible changes to
include a file or two (named as $TICKETNUM.$TYPE), and then run a script
to generate NEWS during the release process, instead of having a human
scan the commit logs and summarize the changes long after they landed.
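For example (the ticket numbers below are placeholders, and the $TYPE
suffixes follow Twisted's convention, which we may or may not keep):

    topfiles/1234.feature   one sentence describing a new user-visible feature
    topfiles/5678.bugfix    one sentence describing a user-visible bug fix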
Closes ticket:2367
Note that Twisted doesn't do anything like Versioneer, so this will
generally show e.g. "Twisted-16.1.0" for the entire interval between
16.1.0 and 16.2.0.