Nowadays pkg_resources is a runtime requirement, and if there is something screwed up in the installation, we want an explicit ImportError exception as early as possible.
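For example, a top-level import along these lines (a sketch, not necessarily the exact code in the tree) makes a broken installation fail loudly at startup:

    # sketch: importing pkg_resources unconditionally at module load means a
    # broken installation raises ImportError here, as early as possible,
    # instead of failing later somewhere deep inside the node.
    import pkg_resources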
Jake Edge tried Tahoe out and didn't notice the /status page. Hopefully with this new organization people like him will see that link more easily. This also addresses drewp's suggestion that the controls appear above the list of servers instead of below. (I think that was his suggestion.) I also reordered the controls.
This is because I sometimes log or examine a repr of an object from a function which is being called from that object's own __init__()...
(I'm committing this patch in order to test our patch management infrastructure.)
I'd implemented stats gathering hooks in the helper a while back.
Brian did the same without reference to my changes. This reconciles
those two changes, encompassing all the stats in both changes,
implemented through the stats_provider interface.
this also provides templates for all 10 helper graphs in the
tahoe-stats munin plugin.
adds a stats_producer for the upload helper, which provides a series of counters
to the stats gatherer, under the name 'chk_upload_helper'.
it examines both the 'incoming' directory and the 'encoding' dir, providing
inc_count, inc_size, inc_size_old, enc_count, enc_size and enc_size_old:
for each dir, the number of files, the total size thereof, and the aggregate
size of all files older than 48hrs
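roughly, the directory scan behind those counters could look like this sketch
(function names and counter keys here are illustrative, not the actual helper
code):

    import os, time

    OLD = 48 * 60 * 60  # files older than 48h count toward the *_size_old totals

    def scan_dir(path):
        """return (count, total size, size of files older than 48h) for one dir"""
        count = size = size_old = 0
        now = time.time()
        for name in os.listdir(path):
            fn = os.path.join(path, name)
            if not os.path.isfile(fn):
                continue
            st = os.stat(fn)
            count += 1
            size += st.st_size
            if now - st.st_mtime > OLD:
                size_old += st.st_size
        return count, size, size_old

    def get_stats(incoming_dir, encoding_dir):
        inc_count, inc_size, inc_size_old = scan_dir(incoming_dir)
        enc_count, enc_size, enc_size_old = scan_dir(encoding_dir)
        return {'chk_upload_helper.incoming_count': inc_count,
                'chk_upload_helper.incoming_size': inc_size,
                'chk_upload_helper.incoming_size_old': inc_size_old,
                'chk_upload_helper.encoding_count': enc_count,
                'chk_upload_helper.encoding_size': enc_size,
                'chk_upload_helper.encoding_size_old': enc_size_old}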
the windows (cygwin) buildslave has been failing the key generator test
it turns out that the time check on whether to refill the pool, and the
reactor, are interacting such that when the maybe_refill_pool call posted
on the reactor fires, the test on whether to fill the pool fails.
this adds a loop in the failure case to retry every 1s until it is time
to refill the pool, thus mitigating this timing-accuracy problem on
windows.
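something along these lines (names are illustrative; the real method lives on
the key generator):

    import time
    from twisted.internet import reactor

    def maybe_refill_pool(self):
        # if the coarse clock says it is not yet time to refill, re-schedule
        # the check 1s later instead of giving up, which sidesteps the timer
        # accuracy problem seen on the cygwin buildslave
        if time.time() - self.last_refill < self.refill_delay:
            reactor.callLater(1, self.maybe_refill_pool)
            return
        self.refill_pool()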
the RIStatsProvider interface requires that counter and stat values be
ChoiceOf(float, int, long). the recent changes to the storage server to not
track 'consumed' led to returning None as the value of a counter.
this causes violations to be experienced by nodes whose stats are being
gathered.
this patch simply omits that stat if 'consumed' is not being tracked.
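the fix amounts to something like this (attribute and stat names are
illustrative):

    def get_stats(self):
        stats = {'storage_server.allocated': self.allocated_size()}
        # only report 'consumed' when it is actually tracked; RIStatsProvider
        # forbids None values, so the stat is simply omitted otherwise
        if self.consumed is not None:
            stats['storage_server.consumed'] = self.consumed
        return stats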
one of the storage servers is throwing foolscap violations about the
return value of get_stats(). this adds a log of the data returned
to the foolscap log event stream at the debug level '12' (between
NOISY(10) and OPERATIONAL(20)). hopefully this will facilitate
finding the cause of this problem.
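the added logging is roughly this (the exact wrapper and message may differ):

    from foolscap.logging import log

    def remote_get_stats(self):
        stats = self.get_stats()
        # log the exact data being returned, at level 12 (between NOISY=10 and
        # OPERATIONAL=20), so a violation can be matched against what was sent
        log.msg("get_stats() -> %s" % (stats,), level=12)
        return stats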
the timeouts on uses of 'poll' were there purely to make sure a test doesn't
poll indefinitely. however having such timeouts makes tests susceptible
to premature timeouts under high load, or on slow machines. (e.g. cygwin
slaves running in virtual machines on loaded hosts)
purportedly trial by default applies a timeout to tests to prevent them
hanging indefinitely, so these poll timeouts are redundant and cause
intermittent failures on slow hosts. hence they're more bother than they're
worth, and should be culled.
previously there was an edge case in the timing of expected behaviour
of the key_generator (w.r.t. the refresh delay and twisted/foolscap
delivery). if it took >6s for a key to be generated, then it was
possible for the pool refresh delay to transpire _during_ the
synchronous creation of a key in remote_get_rsa_key_pair. this could
lead to the timer elapsing during key creation and hence the pool
being refilled before control returned to the client.
this change ensures that the time window from a get-key request
until the key gen reactor blocks to refill the pool is measured from
when the last request was answered, not from when it was received.
this causes the behaviour to match expectations, as embodied in
test_keygen, even if the delay window is dropped to 0.1s
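in sketch form (names are illustrative), the request handler now stamps the
time when it finishes, and the refill check measures from that stamp:

    import time

    def remote_get_rsa_key_pair(self, key_size):
        if self.keypool:
            keypair = self.keypool.pop(0)
        else:
            keypair = self.generate_keypair(key_size)   # slow, synchronous
        # measure the refill window from when the request was *answered*, so a
        # slow synchronous key generation cannot let the refill timer fire
        # before control returns to the client
        self.last_request_answered = time.time()
        return keypair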
in both these cases, the timeout only serves to abort a stuck test, and
the key_generator should respond more quickly, but seeing test failures
in buildbot on some platforms suggests that the test is too susceptible
to timing issues on loaded buildslaves.
this cleans up KeyGenerator to be a service (a subservice of the
KeyGeneratorService as instantiated by the key-generator.tac app)
this means that the timer which replenishes the keypool will be
shutdown cleanly when the service is stopped.
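in outline (assuming twisted's Service and LoopingCall, with illustrative
attribute names):

    from twisted.application import service
    from twisted.internet import task

    class KeyGenerator(service.Service):
        pool_refresh_delay = 6  # seconds; illustrative value

        def startService(self):
            service.Service.startService(self)
            # the pool-replenishing timer now lives and dies with the service
            self.timer = task.LoopingCall(self.maybe_refill_pool)
            self.timer.start(self.pool_refresh_delay)

        def stopService(self):
            if self.timer.running:
                self.timer.stop()
            return service.Service.stopService(self)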
adds checks on the key_generator service and client into the system
test 'test_mutable' such that one of the nodes (clients[3]) uses
the key_generator service, and checks that mutable file creation
in that node, via a variety of means, always consumes keys from
the key_generator.
this adds a new service to pre-generate RSA key pairs. This allows
the expensive (i.e. slow) key generation to be placed into a process
outside the node, so that the node's reactor will not block when it
needs a key pair, but instead can retrieve them from a pool of already
generated key pairs in the key-generator service.
it adds a tahoe create-key-generator command which initialises an
empty dir with a tahoe-key-generator.tac file which can then be run
via twistd. it stashes its .pem and portnum for furl stability and
writes the furl of the key gen service to key_generator.furl, also
printing it to stdout.
by placing a key_generator.furl file into the nodes config directory
(e.g. ~/.tahoe) a node will attempt to connect to such a service, and
will use that when creating mutable files (i.e. directories) whenever
possible. if the keygen service is unavailable, it will perform the
key generation locally instead, as before.
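the client-side fallback is roughly this sketch (method and attribute names
are illustrative):

    from twisted.internet import defer

    def get_keypair(self, key_size):
        if self.key_generator_remote is not None:
            # ask the remote key-generator service for a pre-generated pair
            d = self.key_generator_remote.callRemote("get_rsa_key_pair", key_size)
            # if the service is unreachable, fall back to slow local generation
            d.addErrback(lambda f: self.generate_keypair_locally(key_size))
            return d
        return defer.succeed(self.generate_keypair_locally(key_size))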
When we establish any new connection, reset the delays on all the other
Reconnectors. This will trigger a new batch of connection attempts. The idea
is to detect when we (the client) have been offline for a while, and to
connect to all servers when we get back online. By accelerating the timers
inside the Reconnectors, we try to avoid spending a long time in a
partially-connected state (which increases the chances of causing problems
with mutable files, by not updating all the shares that we ought to).
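in sketch form (assuming foolscap's Reconnector.reset(); the hook name is
illustrative):

    def _got_new_connection(self):
        # a fresh connection suggests we are back online: reset every other
        # Reconnector so its backoff delay starts over and a new round of
        # connection attempts begins almost immediately
        for rc in self.reconnectors:
            rc.reset()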
not status instances. Fix this. The symptom was that following a link like
'up-123' that referred to an old operation (no longer in memory) while an
upload was active would get an ugly traceback instead of a "no such resource"
message.
when the confwiz configures a node (i.e. typically once on mac, once per
install on windows) in addition to writing the root_dir.cap retrieved from
the native_client backend into a config file, it additionally writes a hash
thereof into the 'convergence' config file.
this causes uploads from this node to use a consistent 'convergence' hashing
value matching any other nodes with the same configured root_dir, i.e. for
the most part other systems installed and configured on the same account.
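roughly (the helper name, tag string and file location are illustrative, not
the exact confwiz code):

    import os
    from allmydata.util import hashutil, base32

    def write_convergence_file(basedir, root_dir_cap):
        # derive the convergence secret deterministically from the configured
        # root_dir cap, so every node set up with the same root_dir ends up
        # producing the same ciphertext for the same plaintext
        secret = hashutil.tagged_hash("allmydata_confwiz_convergence_v1",
                                      root_dir_cap)
        f = open(os.path.join(basedir, "private", "convergence"), "wb")
        f.write(base32.b2a(secret))
        f.close()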
This removes the guess-partial-information attack vector, and reduces
the amount of overhead that we consume with each file. It also introduces
a forwards-compatibility break: older versions of the code (before the
previous download-time "make hashes optional" patch) will be unable
to read files uploaded by this version, as they will complain about the
missing hashes. This patch is experimental, and is being pushed into
trunk to obtain test coverage. We may undo it before releasing 1.0.
Now upload or encode methods take a required argument named "convergence" which can be either None, indicating no convergent encryption at all, or a string, which is the "added secret" to be mixed in to the content hash key. If you want traditional convergent encryption behavior, set the added secret to be the empty string.
This patch also renames "content hash key" to "convergent encryption" in argument names and variable names. (A different and larger renaming is needed in order to clarify that Tahoe supports immutable files which are not encrypted with content-hash-key a.k.a. convergent encryption.)
This patch also changes a few unit tests to use non-convergent encryption, because it doesn't matter for what they are testing and non-convergent encryption is slightly faster.
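the new calling convention looks like this (the upload class and import path
shown are illustrative):

    from allmydata.upload import Data

    u1 = Data("some file contents", convergence=None)  # no convergent encryption at all
    u2 = Data("some file contents", convergence="")    # traditional convergent encryption
    # convergence shared only among parties who know this added secret:
    u3 = Data("some file contents", convergence="my added secret")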
Removing the plaintext hashes can help with the guess-partial-information
attack. This does not affect compatibility, but if and when we actually
remove any hashes from the share, that will introduce a
forwards-compatibility break: tahoe-0.9 will not be able to read such files.
this changes the confwiz to have a look and feel much more consistent
with that of the innosetup installer from within which it is launched.
this applies, naturally, primarily to windows.
added a test for the simple mkdir-p hack I added yesterday
checks that mkdir-p can create a directory hierarchy, and that resubmitting
a request for the same path yields the existing dir's uri
this adds a t=mkdir-p call to directories (accessed by their uri as
/uri/<URI>?t=mkdir-p&path=/some/path) which returns the uri for a
directory at a specified path beneath the given uri, regardless of
whether the directory exists or whether intermediate directories
need to be created to satisfy the request.
this is used by the migration code in MV to optimise the work of
path traversal which was otherwise done on every file PUT
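a client-side call to the new t=mkdir-p endpoint might look like this sketch
(the nodeurl, cap and path are made up for illustration, and a POST with an
empty body is assumed; the response body is the directory's uri):

    import urllib, urllib2

    nodeurl = "http://127.0.0.1:8123"
    dir_uri = "URI:DIR2:exampleexample"     # cap of the starting directory
    path = "/some/path"

    url = "%s/uri/%s?t=mkdir-p&path=%s" % (nodeurl, urllib.quote(dir_uri),
                                           urllib.quote(path))
    req = urllib2.Request(url, data="")     # empty body makes this a POST
    new_dir_uri = urllib2.urlopen(req).read()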
This is because there exist in the wild computers that are misconfigured so that 'localhost' doesn't resolve to 127.0.0.1. On those computers, using 'localhost' for the nodeurl is a security problem, because the user commonly sends valuable caps to the nodeurl.
motivated simply by a desire to be able to identify 'noderoot' directories for
debugging and testing, the confwiz now writes an 'accountname' file based on
what account was used when the node was configured. this is not currently read
by or used by any code in the system, but helps identify directories from testing.
1. changed the node's exit-on-error behaviour. rather than logging debug and
then delegating to self._abort_process(), it now simply delegates to
self._service_startup_failed(failure) to report failures in the startup
deferred chain. subclasses then have complete control of handling and
reporting any failures in node startup.
2. replace the convoluted wx.PostEvent() glue for posting an event into the
gui thread with the simpler expedient of wx.CallAfter() which is much like
foolscap's eventually() but also thread-safe for inducing a callback on the
gui thread.
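the gui-thread handoff is now essentially this (handler names are
illustrative):

    import wx

    def _service_startup_failed(self, failure):
        # called on the reactor thread; wx.CallAfter marshals the report onto
        # the wx gui thread, much like foolscap's eventually() but thread-safe
        wx.CallAfter(self.report_startup_failure, failure)

    def report_startup_failure(self, failure):
        # runs on the gui thread
        wx.MessageBox(str(failure), "Allmydata startup failed",
                      wx.OK | wx.ICON_ERROR)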
in certain cases (e.g. the node.pem changed but old .furls are in private/)
the node will abort upon startup. previously it used os.abort() which in these
cases caused the mac gui app to crash on startup with no explanation.
this changes that behaviour from calling os.abort() to calling
node._abort_process(failure) which by default calls os.abort(). this allows
that method to be overridden in subclasses.
the mac app now provides and uses such a subclass of Client, so that failures
are reported to the user in a message dialog before the process exits.
this uses wx.PostEvent() with a custom event type to signal from the reactor
thread into the gui thread.
the confwiz now uses socket.gethostname() if a 'nickname' file doesn't already
exist, and passes that nickname into the 'record_install' method on the backend,
so that the moniker can be recorded in the system table.
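the lookup is roughly this sketch (file name and helper are illustrative):

    import os, socket

    def get_nickname(basedir):
        fn = os.path.join(basedir, "nickname")
        if os.path.exists(fn):
            # an existing nickname wins; gethostname() is only the fallback
            return open(fn).read().strip()
        nickname = socket.gethostname()
        open(fn, "w").write(nickname)
        return nickname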
when an operation takes 'too long', on 10.4 the user gets a dialog about
the problem with a 'force eject / keep trying' choice. on 10.5 the fuse
system seems to summarily unmount the drive.
this showed up in 10.5 testing because the time to open() a file depended
upon the size of the file, and an 8Mb test file took long enough for the
node to download that the open() call didn't respond within 60s and fuse
spontaneously ejected the drive, quitting the plugin (and cancelling the
download).
this changes the fuse options passed to the plugin by the ui when the
'mount filesystem' window is used. command line users should check out
the '-odaemon_timeout=...' option. it raises the default timeout from
60s to 300s (5min) for ui-launched plugins.
this will be addressed in a deeper manner at a later date, with a more
advanced fuse subsystem which can interleave open()/read() with the
actual download of the file, only blocking when data is not downloaded
yet.
Unfinished bits: doc in webapi.txt, test handling of badly formed JSON, return reasonable HTTP response, examination of the effect of this patch on code coverage -- but I'm committing it anyway because MikeB can use it and I'm being called to dinner...
the name 'tahoe' is in the process of being removed from the windows
installer and binaries. this changes the name of the smb service the
confwiz tries to start to 'Allmydata SMB'
this adds an action to the dock menu and to the file menu (when visible)
"Mount Filesystem". This action opens a windows offering the user an
opportunity to select from any of the named *.cap files in their
.tahoe/private directory, and choose a corresponding mount point to mount
that at.
it launches the .app binary as a subprocess with the corresponding command
line arguments to launch the 'tahoe fuse' functionality to mount that file
system. if a NAME.icns file is present in .tahoe/private alongside the
chosen NAME.cap, then that icon will be used when the filesystem is mounted.
this is highly unlikely to work when running from source, since it uses
introspection on sys.executable to find the relevant binary to launch in
order to get the right built .app's 'tahoe fuse' functionality.
it is also relatively likely that the code currently checked in, hence
linked into the build, will have as yet unresolved library dependencies.
it's quite unlikely to work on 10.5 with macfuse 1.3.1 at the moment.
this provides a variety of changes to the macfuse 'tahoefuse' implementation.
most notably it extends the 'tahoe' command available through the mac build
to provide a 'fuse' subcommand, which invokes tahoefuse. this addresses
various aspects of main(argv) handling and sys.argv manipulation to provide an
appropriate command line syntax that meshes with the fuse library's built-in
command line parsing.
this provides a "tahoe fuse [dir_cap_name] [fuse_options] mountpoint"
command, where dir_cap_name is an optional name of a .cap file to be found
in ~/.tahoe/private, defaulting to the standard root_dir.cap. fuse_options,
if given, are passed into the fuse system as its normal command line options,
and the mountpoint is checked for existence before launching fuse.
the tahoe 'fuse' command is provided as an additional_command to the tahoe
runner in the case that it's launched from the mac .app binary.
this also includes a tweak to the TFS class which incorporates the ctime
and mtime of files into the tahoe fs model, if available.
runner provides the main point of entry for the 'tahoe' command, and
provides various subcommands by default. this provides a hook whereby
additional subcommands can be added in other contexts, providing a
simple way to extend the (sub)commands space available through 'tahoe'
regardless of platform, the confwiz now opens the welcome page upon
writing a config. it also provides a 'plat' argument (from python's
sys.platform) to help disambiguate our instructions by platform.