I'd implemented stats gathering hooks in the helper a while back.
Brian did the same without reference to my changes. This reconciles the
two, covering all the stats from both changes, implemented through the
stats_provider interface.
This also provides templates for all 10 helper graphs in the
tahoe-stats munin plugin.
adds a stats_producer for the upload helper, which provides a series of
counters to the stats gatherer under the name 'chk_upload_helper'.
it examines both the 'incoming' directory and the 'encoding' directory,
providing inc_count, inc_size, inc_size_old, enc_count, enc_size, and
enc_size_old: for each directory, respectively, the number of files, their
total size, and the aggregate size of all files older than 48 hours.
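the gathering itself is just a directory scan; a minimal sketch of the
idea (function and constant names here are illustrative, not the helper's
actual code):

    import os, time

    OLD_AGE = 48 * 60 * 60  # cutoff for the *_size_old aggregates

    def scan_dir(path):
        # returns (number of files, their total size, and the total size
        # of those older than 48 hours) for one directory
        count = size = size_old = 0
        now = time.time()
        for name in os.listdir(path):
            fn = os.path.join(path, name)
            if not os.path.isfile(fn):
                continue
            st = os.stat(fn)
            count += 1
            size += st.st_size
            if now - st.st_mtime > OLD_AGE:
                size_old += st.st_size
        return count, size, size_old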
the windows (cygwin) buildslave has been failing the key generator test.
it turns out that the time check on whether to refill the pool and the
reactor interact such that when the maybe_refill_pool call posted on the
reactor fires, the test on whether to fill the pool fails. this adds a
loop in the failure case to retry every 1s until it is time to refill
the pool, mitigating this timing-accuracy problem on windows.
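roughly, the refill check now re-schedules itself instead of giving up
(a sketch only; the real method and attribute names in the key generator
differ):

    import time
    from twisted.internet import reactor

    class PoolRefiller:
        refresh_delay = 6.0  # illustrative value

        def __init__(self):
            self.last_refill = time.time()

        def maybe_refill_pool(self):
            if time.time() - self.last_refill < self.refresh_delay:
                # not yet time: check again in 1s rather than trusting a
                # single callLater to fire at exactly the right moment
                reactor.callLater(1.0, self.maybe_refill_pool)
                return
            self.refill_pool()

        def refill_pool(self):
            self.last_refill = time.time()
            # ... generate keys to top up the pool ...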
the RIStatsProvider interface requires that counter and stat values be
ChoiceOf(float, int, long). the recent change to the storage server to
stop tracking 'consumed' led to None being returned as the value of a
counter, which causes violations on nodes whose stats are being gathered.
this patch simply omits that stat when 'consumed' is not being tracked.
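the shape of the fix, as a sketch (the key names are illustrative):

    def build_stats(allocated, consumed):
        # RIStatsProvider values must be int/long/float, never None, so
        # the 'consumed' counter is omitted when it is not being tracked
        stats = {"storage_server.allocated": allocated}
        if consumed is not None:
            stats["storage_server.consumed"] = consumed
        return stats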
one of the storage servers is throwing foolscap violations about the
return value of get_stats(). this adds a log of the returned data to the
foolscap log event stream at debug level 12 (between NOISY(10) and
OPERATIONAL(20)); hopefully this will help find the cause of the problem.
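in sketch form the added logging amounts to something like the following,
assuming foolscap's log.msg() with an explicit level= argument (the
wrapper name is illustrative):

    from foolscap.logging import log

    def log_stats_return(stats):
        # level 12 sits between NOISY(10) and OPERATIONAL(20)
        log.msg("get_stats() -> %s" % (stats,), level=12)
        return stats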
the timeouts on uses of 'poll' were there purely to make sure a test
doesn't poll indefinitely. however, such timeouts make tests susceptible
to premature failures under high load or on slow machines (e.g. cygwin
slaves running in virtual machines on loaded hosts).
purportedly, trial by default applies a timeout to tests to prevent them
hanging indefinitely, so these poll timeouts are redundant and cause
intermittent failures on slow hosts. hence they're more bother than
they're worth, and should be culled.
previously there was an edge case in the timing of expected behaviour
of the key_generator (w.r.t. the refresh delay and twisted/foolscap
delivery). if it took >6s for a key to be generated, then the pool
refresh delay could elapse _during_ the synchronous creation of a key in
remote_get_rsa_key_pair, and hence the pool could be refilled before
control returned to the client.
this change ensures that the window measured before the key_gen reactor
blocks to refill the pool is the time since a request was last answered,
not since it was received. this makes the behaviour match expectations,
as embodied in test_keygen, even if the delay window is dropped to 0.1s.
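the idea, in sketch form (class and attribute names are illustrative):

    import time

    class KeyPool:
        refresh_delay = 0.1  # the behaviour should hold even at 0.1s

        def __init__(self):
            self.keys = []
            self.last_answered = time.time()

        def generate_keypair(self):
            # stand-in for the expensive synchronous RSA generation
            return object()

        def remote_get_rsa_key_pair(self):
            if self.keys:
                keypair = self.keys.pop()
            else:
                keypair = self.generate_keypair()
            # measure the refill window from when the request was
            # *answered*, not when it arrived, so a slow synchronous
            # generation cannot let the timer elapse before control
            # returns to the client
            self.last_answered = time.time()
            return keypair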
in both these cases, the timeout only serves to abort a stuck test; the
key_generator should respond far more quickly, but test failures seen in
buildbot on some platforms suggest that the test is too susceptible to
timing issues on loaded buildslaves.
this cleans up KeyGenerator to be a service (a subservice of the
KeyGeneratorService instantiated by the key-generator.tac app). this
means that the timer which replenishes the keypool will be shut down
cleanly when the service is stopped.
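as a sketch of the pattern (using a twisted LoopingCall; the actual
KeyGenerator internals may differ):

    from twisted.application import service
    from twisted.internet import task

    class KeyGenerator(service.Service):
        pool_refresh_delay = 6  # seconds, illustrative

        def __init__(self):
            self.timer = task.LoopingCall(self.maybe_refill_pool)

        def startService(self):
            service.Service.startService(self)
            self.timer.start(self.pool_refresh_delay)

        def stopService(self):
            # stopping the service stops the timer too, so no stray
            # delayed call is left behind when the node or a test shuts
            # the service down
            if self.timer.running:
                self.timer.stop()
            return service.Service.stopService(self)

        def maybe_refill_pool(self):
            pass  # top up the keypool here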
adds checks on the key_generator service and client to the system test
'test_mutable', such that one of the nodes (clients[3]) uses the
key_generator service, and verifies that mutable file creation in that
node, via a variety of means, always consumes keys from the
key_generator.
this adds a new service to pre-generate RSA key pairs. This allows
the expensive (i.e. slow) key generation to be placed into a process
outside the node, so that the node's reactor will not block when it
needs a key pair, but instead can retrieve them from a pool of already
generated key pairs in the key-generator service.
it adds a 'tahoe create-key-generator' command which initialises an
empty dir with a tahoe-key-generator.tac file that can then be run via
twistd. the service stashes its .pem and port number for furl stability,
writes the furl of the key-gen service to key_generator.furl, and also
prints it to stdout.
by placing a key_generator.furl file into the node's config directory
(e.g. ~/.tahoe), a node will attempt to connect to such a service and
will use it when creating mutable files (i.e. directories) whenever
possible. if the keygen service is unavailable, the node will perform
the key generation locally instead, as before.
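the client-side fallback can be pictured roughly like this (illustrative
names; the node's actual code differs):

    from twisted.internet import defer

    def generate_keypair_locally():
        # stand-in for the in-process (slow) RSA key generation
        return ("pubkey", "privkey")

    def get_keypair(keygen_remote):
        # keygen_remote is the remote reference obtained via
        # key_generator.furl, or None if no such file was found
        if keygen_remote is None:
            return defer.succeed(generate_keypair_locally())
        d = keygen_remote.callRemote("get_rsa_key_pair")
        # if the remote service fails, fall back to local generation
        d.addErrback(lambda f: generate_keypair_locally())
        return d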
When we establish any new connection, reset the delays on all the other
Reconnectors. This will trigger a new batch of connection attempts. The idea
is to detect when we (the client) have been offline for a while, and to
connect to all servers when we get back online. By accelerating the timers
inside the Reconnectors, we try to avoid spending a long time in a
partially-connected state (which increases the chances of causing problems
with mutable files, by not updating all the shares that we ought to).
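in outline, assuming the foolscap Reconnector exposes a reset() method
that clears its backoff delay (the helper name is illustrative):

    def accelerate_reconnections(reconnectors):
        # called whenever any new connection is established: reset the
        # backoff of all the Reconnectors so they retry promptly,
        # shortening the time spent partially connected after the client
        # comes back online
        for rc in reconnectors:
            rc.reset()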
not status instances. Fix this. The symptom was that following a link like
'up-123' that referred to an old operation (no longer in memory) while an
upload was active would get an ugly traceback instead of a "no such resource"
message.