The code for validating the share hash tree and the block hash tree has been rewritten to make sure it handles all cases, to share metadata about the file (such as the share hash tree, block hash trees, and UEB) among different share downloads, and not to require hashes to be stored on the server unnecessarily, such as the roots of the block hash trees (not needed since they are also the leaves of the share hash tree), and the root of the share hash tree (not needed since it is also included in the UEB). It also passes the latest tests including handling corrupted shares well.
ValidatedReadBucketProxy takes a share_hash_tree argument to its constructor, which is a reference to a share hash tree shared by all ValidatedReadBucketProxies for that immutable file download.
ValidatedReadBucketProxy requires the block_size and share_size to be provided in its constructor, and it then uses those to compute the offsets and lengths of blocks when it needs them, instead of reading those values out of the share. The user of ValidatedReadBucketProxy therefore has to have first used a ValidatedExtendedURIProxy to compute those two values from the validated contents of the URI. This is pleasingly simplifies safety analysis: the client knows which span of bytes corresponds to a given block from the validated URI data, rather than from the unvalidated data stored on the storage server. It also simplifies unit testing of verifier/repairer, because now it doesn't care about the contents of the "share size" and "block size" fields in the share. It does not relieve the need for share data v2 layout, because we still need to store and retrieve the offsets of the fields which come after the share data, therefore we still need to use share data v2 with its 8-byte fields if we want to store share data larger than about 2^32.
Specify which subset of the block hashes and share hashes you need while downloading a particular share. In the future this will hopefully be used to fetch only a subset, for network efficiency, but currently all of them are fetched, regardless of which subset you specify.
ReadBucketProxy hides the question of whether it has "started" or not (sent a request to the server to get metadata) from its user.
Download is optimized to do as few roundtrips and as few requests as possible, hopefully speeding up download a bit.
This does raise the question of if there is any point to this test, since I apparently don't know what the answer *should* be, and whenever one of the buildbots fails then I redefine success.
But, I'm about to commit a bunch of patches to implement checker, verifier, and repairer as well as to refactor downloader, and I would really like to know if these patches *increase* the number of reads required even higher than it currently is.
To fix this error from the Windows buildslave:
[ERROR]: allmydata.test.test_immutable.Test.test_download_from_only_3_remaining_shares
Traceback (most recent call last):
File "C:\Documents and Settings\buildslave\windows-native-tahoe\windows\build\src\allmydata\immutable\download.py", line 135, in _bad
raise NotEnoughSharesError("ran out of peers, last error was %s" % (f,))
allmydata.interfaces.NotEnoughSharesError: ran out of peers, last error was [Failure instance: Traceback: <class 'struct.error'>: unpack requires a string argument of length 4
c:\documents and settings\buildslave\windows-native-tahoe\windows\build\support\lib\site-packages\foolscap-0.3.2-py2.5.egg\foolscap\call.py:667:_done
c:\documents and settings\buildslave\windows-native-tahoe\windows\build\support\lib\site-packages\foolscap-0.3.2-py2.5.egg\foolscap\call.py:53:complete
c:\Python25\lib\site-packages\twisted\internet\defer.py:239:callback
c:\Python25\lib\site-packages\twisted\internet\defer.py:304:_startRunCallbacks
--- <exception caught here> ---
c:\Python25\lib\site-packages\twisted\internet\defer.py:317:_runCallbacks
C:\Documents and Settings\buildslave\windows-native-tahoe\windows\build\src\allmydata\immutable\layout.py:374:_got_length
C:\Python25\lib\struct.py:87:unpack
]
===============================================================================
One of the instances of the name accidentally didn't get changed, and pyflakes noticed. The new downloader/checker/verifier/repairer unit tests would also have noticed, but those tests haven't been rolled into a patch and applied to this repo yet...
Nathan Wilcox observed that the storage server can rely on the size of the share file combined with the count of leases to unambiguously identify the location of the leases. This means that it can hold any size share data, even though the field nominally used to hold the size of the share data is only 32 bits wide.
With this patch, the storage server still writes the "size of the share data" field (just in case the server gets downgraded to an earlier version which requires that field, or the share file gets moved to another server which is of an earlier vintage), but it doesn't use it. Also, with this patch, the server no longer rejects requests to write shares which are >= 2^32 bytes in size, and it no longer rejects attempts to read such shares.
This fixes http://allmydata.org/trac/tahoe/ticket/346 (increase share-size field to 8 bytes, remove 12GiB filesize limit), although there remains open a question of how clients know that a given server can handle large shares (by using the new versioning scheme, probably).
Note that share size is also limited by another factor -- how big of a file we can store on the local filesystem on the server. Currently allmydata.com typically uses ext3 and I think we typically have block size = 4 KiB, which means that the largest file is about 2 TiB. Also, the hard drives themselves are only 1 TB, so the largest share is definitely slightly less than 1 TB, which means (when K == 3), the largest file is less than 3 TB.
This patch also refactors the creation of new sharefiles so that only a single fopen() is used.
This patch also helps with the unit-testing of repairer, since formerly it was unclear what repairer should expect to find if the "share data size" field was corrupted (some corruptions would have no effect, others would cause failure to download). Now it is clear that repairer is not required to notice if this field is corrupted since it has no effect on download. :-)
This facilitates client code to easily catch ServerFailures without also catching exceptions arising from client-side code.
See also:
http://foolscap.lothar.com/trac/ticket/105 # make it easy to distinguish server-side failures/exceptions from client-side
There are a lot of different ways that a share could be corrupted, or that attempting to download it might fail. These tests attempt to exercise many of those ways and require the checker/verifier/repairer to handle each kind of failure well.
Because the unit tests on the VirtualZooko buildslave failed when it took 16 seconds for a process to go away.
Perhaps getting notification after only 5 seconds instead of 20 seconds is desirable, and we should change the unit tests and set this back to 5, but I don't know exactly how to change the unit tests. Perhaps match this particular warning message about the shutdown taking a while and allow the code under test to pass if the only stderr that it emits is this warning.
We're just going to mark unicode in the cli as unsupported for tahoe-lafs-1.3.0. Unicode filenames on the command-line do actually work for some platforms and probably only if the platform encoding is utf-8, but I'm not sure, and in any case for it to be marked as "supported" it would have to work on all platforms, be thoroughly tested, and also we would have to understand why it worked. :-)
Also encode all args to urllib as utf-8 because urllib doesn't handle unicode objects.
I'm not sure if it is appropriate to *assume* utf-8 encoding of cli args. Perhaps the Right thing to do is to detect the platform encoding. Any ideas?
This patch is mostly due to François Deppierraz.
In an ancient version of directories, we needed a MAC on each entry. In modern times, the entire dirnode comes with a digital signature, so the MAC on each entry is redundant.
With this patch, we no longer check those MACs when reading directories, but we still produce them so that older readers will accept directories that we write.
I get confused about whether a given argument or return value is a uri-as-string or uri-as-object. This patch adds a lot of assertions that it is one or the other, and also changes CheckerResults to take objects not strings.
In the future, I hope that we generally use Python objects except when importing into or exporting from the Python interpreter e.g. over the wire, the UI, or a stored file.
Tahoe webapi was failing on HTTP request containing a partial Range header.
This change allows movies players like mplayer to seek in movie files stored in
tahoe.
Associated tests for GET and HEAD methods are also included
Refactor into a class the logic of asking each server in turn until one of them gives an answer
that validates. It is called ValidatedThingObtainer.
Refactor the downloading and verification of the URI Extension Block into a class named
ValidatedExtendedURIProxy.
The new logic of validating UEBs is minimalist: it doesn't require the UEB to contain any
unncessary information, but of course it still accepts such information for backwards
compatibility (so that this new download code is able to download files uploaded with old, and
for that matter with current, upload code).
The new logic of validating UEBs follows the practice of doing all validation up front. This
practice advises one to isolate the validation of incoming data into one place, so that all of
the rest of the code can assume only valid data.
If any redundant information is present in the UEB+URI, the new code cross-checks and asserts
that it is all fully consistent. This closes some issues where the uploader could have
uploaded inconsistent redundant data, which would probably have caused the old downloader to
simply reject that download after getting a Python exception, but perhaps could have caused
greater harm to the old downloader.
I removed the notion of selecting an erasure codec from codec.py based on the string that was
passed in the UEB. Currently "crs" is the only such string that works, so
"_assert(codec_name == 'crs')" is simpler and more explicit. This is also in keeping with the
"validate up front" strategy -- now if someone sets a different string than "crs" in their UEB,
the downloader will reject the download in the "validate this UEB" function instead of in a
separate "select the codec instance" function.
I removed the code to check plaintext hashes and plaintext Merkle Trees. Uploaders do not
produce this information any more (since it potentially exposes confidential information about
the file), and the unit tests for it were disabled. The downloader before this patch would
check that plaintext hash or plaintext merkle tree if they were present, but not complain if
they were absent. The new downloader in this patch complains if they are present and doesn't
check them. (We might in the future re-introduce such hashes over the plaintext, but encrypt
the hashes which are stored in the UEB to preserve confidentiality. This would be a double-
check on the correctness of our own source code -- the current Merkle Tree over the ciphertext
is already sufficient to guarantee the integrity of the download unless there is a bug in our
Merkle Tree or AES implementation.)
This patch increases the lines-of-code count by 8 (from 17,770 to 17,778), and reduces the
uncovered-by-tests lines-of-code count by 24 (from 1408 to 1384). Those numbers would be more
meaningful if we omitted src/allmydata/util/ from the test-coverage statistics.
though it seemed to work before the 'fstype' passed to fuse of 'allmydata' was
today throwing errors that len(fstype) must be at most 7.
fixed a typo in changes to 'mount_filesystem()' args
bumped the delay between mounting a filesystem and 'open'ing it in Finder to
4s, as it seems to take a little longer to mount now the client and server
fuse processes need to coordinate.
a handful of code cleanup, renaming and refactoring. basically consolidating
'application logic' (mount/unmount fs) into the 'MacGuiApp' class (the wx.App)
and cleaning up various scoping things around that. renamed all references to
'app' to refer more clearly to the 'AppContainer' or to the guiapp.
globally renamed basedir -> nodedir
also made the guiapp keep a note of each filesystem it mounts, and unmount
them upon 'quit' so as to cleanup the user's environment before the tahoe node
vanishes from out underneath the orphaned tahoe fuse processes
this changes the 'open webroot' menu item to be a submenu listing all aliases
defined in ~/.tahoe. Note that the dock menu does not support submenus, so it
only offers a single 'open webroot' option for the default tahoe: alias.
I had trouble with this at first and concluded that the submenus didn't work,
and made it a distinct 'WebUI' menu in it's own right. on further inspection,
there are still problems but they seem to be something like once the dock menu
has been used, sometimes the app's main menubar menus will cease to function,
and this happens regardless of whether submenus or plain simple menus are used.
I have no idea what the peoblem is, but it's not submenu specific.
These constraints were originally intended to protect against attacks on the
storage server protocol layer which exhaust memory in the peer. However,
defending against that sort of DoS is hard -- probably it isn't completely
achieved -- and it costs development time to think about it, and it sometimes
imposes limits on legitimate users which we don't necessarily want to impose.
So, for now we forget about limiting the amount of RAM that a foolscap peer can
cause you to start using.
Make sure the file can actually be downloaded afterward, that it used one of the
deleted and then repaired shares to do so, and that it repairs from multiple
deletions at once (without using more than a reasonable amount of calls to
storage server allocate).
chatting with peter, two things the mac gui needed were the ability to mount
the 'allmydata drive' automatically upon launching the app, and open the
Finder to reveal it. (also a request to hide the debug 'open webroot' stuff)
this (somewhat rough) patch implements all the above as default behaviour
it also contains a quick configuration mechanism for the gui - rather than a
preferences gui, running with a more 'tahoe' styled mechanism, the contents
of a few optional files can modify the default behaviour, specifically file
in ~/.tahoe/gui.conf control behaviour as follows:
auto-mount (bool): if set (the default) then the mac app will, upon launch
automatically mount the 'tahoe:' alias with the display name 'Allmydata'
using a mountpoint of ~/.tahoe/mnt/__auto__
auto-open (bool): if set (the default) then upon mounting a file system
(including the auto-mount if set) finder will be opened to the mountpoint
of the filesystem, which essentially reveals the newly mounted drive in a
Finder window
show-webopen (bool): if set (false by default) then the 'open webroot'
action will be made available in both the dock and file menus of the app
daemon-timout (int): sets the daemon-timeout option passed into tahoe fuse
when a filesystem is mounted. this defaults to 5 min
files of type (int) much, naturally contain a parsable int representation.
files of type (bool) are considered true if their (case-insensitive) contents
are any of ['y', 'yes', 'true', 'on', '1'] and considered false otherwise.
both peter and I independently tried to do the same thing to eliminate the
authorized_keys file which was causing problems with the broken mac build
(c.f. #522) namely mv authorized_keys.8223{,.bak} but the node is, ahem,
let's say 'intolerant' of the trailing .bak - rather than disable the
manhole as one might expect, it instead causes the node to explode on
startup. this patch makes it skip over anything that doesn't pass the
'parse this trailing stuff as an int' test.
the tahoefuse command line options changed to support the runtests harness,
and as part of that gained support for named aliases via --alias
this changes the mac app's invocation of tahoefuse to match that, and also
changes the gui to present the list of defined aliases as valid mounts
this replaces the previous logic which examined the ~/.tahoe/private directory
looking for files ending in '.cap' - an ad-hoc alias mechanism.
if a file is found matching ~/.tahoe/private/ALIASNAME.icns then that will still
be passed to tahoefuse as the icon to display for that filesystem. if no such
file is found, the allmydata icon will be used by default.
the '-olocal' option is passed to tahoefuse. this is potentially contentious.
specifically this is telling the OS that this is a 'local' filesystem, which is
intended to be used to locally attached devices. however leopard (OSX 10.5)
will only display non-local filesystems in the Finder's side bar if they are of
fs types specifically known by Finder to be network file systems (nfs, cifs,
webdav, afp) hence the -olocal flag is the only way on leopard to cause finder
to display the mounted filesystem in the sidebar, but it displays as a 'device'.
there is a potential (i.e. the fuse docs carry warnings) that this may cause
vague and unspecified undesirable behaviour.
(c.f. http://code.google.com/p/macfuse/wiki/FAQ specifically Q4.3 and Q4.1)
in the recent reconciliation of webopen patches, I wound up adjusting
webopen to 'pass through' the state of the trailing slash on the given
argument to the resultant url passed to the browser. this change
removes the requirement that arguments must be directories, and allows
webopen to be used with files. it also broke the tests that assumed
that webopen would always normalise the url to have a trailing slash.
in fixing the tests, I realised that, IMHO, there's something deeply
awry with the way tahoe handles paths; specifically in the combination
of '/' being the name of the root path within an alias, but a leading
slash on paths, e.g. 'alias:/path', is catagorically incorrect. i.e.
'tahoe:' == 'tahoe:/' == '/'
but 'tahoe:/foo' is an invalid path, and must be 'tahoe:foo'
I wound up making the internals of webopen simply spot a 'path' of
'/' and smash it to '', which 'fixes' webopen to match the behaviour
of tahoe's path handling elsewhere, but that special case sort of
points to the weirdness.
(fwiw, I personally found the fact that the leading / in a path was
disallowed to be weird - I'm just used to seeing paths qualified by
the leading / I guess - so in a debate about normalising path handling
I'd vote to include the /)
I think this is largely attributable to a cleanup patch I'd made
which never got committed upstream somehow, but at any rate various
conflicting changes to webopen had been made. This cleans up the
conflicts therein, and hopefully brings 'tahoe webopen' in line with
other cli commands.
moved the body of webopen out of cli.py into tahoe_webopen.py
made its invocation consistent with the other cli commands, most
notably replacing its 'vdrive path' with the same alias parsing,
allowing usage such as 'tahoe webopen private:Pictures/xti'
This should make it sufficiently fast, while still giving a better answer on Ubuntu than platform.dist() currently does, and also falling back to lsb_release if platform.dist() says that it doesn't know.
* the two kinds of immutable filenode now have a common base class
* they store only an instance of their URI, not both an instance and a string
* they delegate comparison to that instance
to make the internal ones use binary strings (nodeid, storage index) and
the web/JSON ones use base32-encoded strings. The immutable verifier is
still incomplete (it returns imaginary healty results).
It would still pass the test if it noticed a corrupted share. (It won't
notice, of course.) But it is required to do its work without causing storage
servers to read blocks from the filesystem.
It seems like we no longer need it, and it screws up something internal in
trial which causes trial's TestCase.mktemp() method to exhibit wrong behavior
(always using a certain test method name instead of using the current test
method name), and I wish to use TestCase.mktemp().
Of course, it is possible that the buildbot is about to tell me that we do
still require SignalMixin on some of our platforms...
This fixes#491 (URIs do not refer to unique files in Allmydata Tahoe).
Fortunately all of the versions of Tahoe currently in use are already producing
this ciphertext hash tree when uploading, so there is no
backwards-compatibility problem with having the downloader require it to be
present.
Removed the Checker service, removed checker results storage (both in-memory
and the tiny stub of sqlite-based storage). Added ICheckable, all
check/verify is now done by calling the check() method on filenodes and
dirnodes (immutable files, literal files, mutable files, and directory
instances).
Checker results are returned in a Results instance, with an html() method for
display. Checker results have been temporarily removed from the wui directory
listing until we make some other fixes.
Also fixed client.create_node_from_uri() to create LiteralFileNodes properly,
since they have different checking behavior. Previously we were creating full
FileNodes with LIT uris inside, which were downloadable but not checkable.
this fixes the test_web test test_POST_upload_mutable which chdir's into
a /foo subdirectory, but then later asserts that the web ui redirects the
user back to /foo, which should really be /foo/ since it's a directory.
when rendering a directory in webish, if the url did not correctly include
a trailing slash '/' then the browser will be redirected to a url which
does. this causes any relative links (e.g. the 'other representations'
which are links to files relative to the current directory) to be correctly
intrpreted as relative to the directory in question by the browser.
uses twisted's addSlash mechanism, which is designed for this purpose :-)
Nowadays pkg_resources is a runtime requirement, and if there is something screwed up in the installation, we want an explicit ImportError exception as early as possible.
Jake Edge tried Tahoe out and didn't notice the /status page. Hopefully with this new organization people like he will see that link more easily. This also addresses drewp's suggestion that the controls appear above the list of servers instead of below. (I think that was his suggestion.) I also reordered the controls.
This is because I sometimes log or examine a repr of an object from a function which is being called from that object's own __init__()...
(I'm committing this patch in order to test our patch management infrastructure.)
I'd implemented stats gathering hooks in the helper a while back.
Brian did the same without reference to my changes. This reconciles
those two changes, encompassing all the stats in both changes,
implemented through the stats_provider interface.
this also provide templates for all 10 helper graphs in the
tahoe-stats munin plugin.
adds a stats_producer for the upload helper, which provides a series of counters
to the stats gatherer, under the name 'chk_upload_helper'.
it examines both the 'incoming' directory, and the 'encoding' dir, providing
inc_count inc_size inc_size_old enc_count enc_size enc_size_old, respectively
the number of files in each dir, the total size thereof, and the aggregate
size of all files older than 48hrs
the windows (cygwin) buildslave has been failing the key generator test
it turns out that the time check on whether to refill the pool, and the
reactor, are interacting such that when the maybe_refill_pool call posted
on the reactor fires, the test on whether to fill the pool fails.
this adds a loop in the failure case to retry each 1s until it is time
to refill the pool, thus mitigating this timing accuracy problem on
windows.
the RIStatsProvider interface requires that counter and stat values be
ChoiceOf(float, int, long) the recent changes to storage server to not
track 'consumed' led to returning None as the value of a counter.
this causes violations to be experienced by nodes whose stats are being
gathered.
this patch simply omits that stat if 'consumed' is not being tracked.
one of the storage servers is throwing foolscap violations about the
return value of get_stats(). this adds a log of the data returned
to the foolscap log event stream at the debug level '12' (between
NOISY(10) and OPERATIONAL(20)) hopefully this will facilitate
finding the cause of this problem.
the timeouts on uses of 'poll' were there purely to make sure a test doesn't
poll indefinitely. however having such timeouts makes tests susceptible
to premature timeouts under high load, or on slow machines. (e.g. cygwin
slaves running in virtual machines on loaded hosts)
purportedly trial by default applies a timeout to tests to prevent them
hanging out indefinitely, so these poll timeouts are redundant and cause
intermittent failures on slow hosts. hence they're more bother than they're
worth, and should be culled.
previously there was an edge case in the timing of expected behaviour
of the key_generator (w.r.t. the refresh delay and twisted/foolscap
delivery). if it took >6s for a key to be generated, then it was
possible for the pool refresh delay to transpire _during_ the
synchronous creation of a key in remote_get_rsa_key_pair. this could
lead to the timer elapsing during key creation and hence the pool
being refilled before control returned to the client.
this change ensures that the time window from a get key request
until the key gen reactor blocks to refill the pool is the time
since a request was answered, not since a request was asked.
this causes the behaviour to match expectations, as embodied in
test_keygen, even if the delay window is dropped to 0.1s
in both these cases, the timeout only serves to abort a stuck test, and
the key_generator should respond more quickly, but seeing test failures
in buildbot on some platforms suggests that the test is too susceptible
to timing issues on loaded buildslaves.
this cleans up KeyGenerator to be a service (a subservice of the
KeyGeneratorService as instantiated by the key-generator.tac app)
this means that the timer which replenishes the keypool will be
shutdown cleanly when the service is stopped.
adds checks on the key_generator service and client into the system
test 'test_mutable' such that one of the nodes (clients[3]) uses
the key_generator service, and checks that mutable file creation
in that node, via a variety of means, are all consuming keys from
the key_generator.
this adds a new service to pre-generate RSA key pairs. This allows
the expensive (i.e. slow) key generation to be placed into a process
outside the node, so that the node's reactor will not block when it
needs a key pair, but instead can retrieve them from a pool of already
generated key pairs in the key-generator service.
it adds a tahoe create-key-generator command which initialises an
empty dir with a tahoe-key-generator.tac file which can then be run
via twistd. it stashes its .pem and portnum for furl stability and
writes the furl of the key gen service to key_generator.furl, also
printing it to stdout.
by placing a key_generator.furl file into the nodes config directory
(e.g. ~/.tahoe) a node will attempt to connect to such a service, and
will use that when creating mutable files (i.e. directories) whenever
possible. if the keygen service is unavailable, it will perform the
key generation locally instead, as before.
When we establish any new connection, reset the delays on all the other
Reconnectors. This will trigger a new batch of connection attempts. The idea
is to detect when we (the client) have been offline for a while, and to
connect to all servers when we get back online. By accelerating the timers
inside the Reconnectors, we try to avoid spending a long time in a
partially-connected state (which increases the chances of causing problems
with mutable files, by not updating all the shares that we ought to).
not status instances. Fix this. The symptom was that following a link like
'up-123' that referred to an old operation (no longer in memory) while an
upload was active would get an ugly traceback instead of a "no such resource"
message.
when the confwiz configures a node (i.e. typically once on mac, once per
install on windows) in addition to writing the root_dir.cap retrieved from
the native_client backend into a config file, it additionally writes a hash
thereof into the 'convergence' config file.
this causes uploads from this node to use a consistent 'convergence' hashing
value matching any other nodes with the same configured root_dir, i.e. for
the most part other systems installed and configured on the same account.
This removes the guess-partial-information attack vector, and reduces
the amount of overhead that we consume with each file. It also introduces
a forwards-compability break: older versions of the code (before the
previous download-time "make hashes optional" patch) will be unable
to read files uploaded by this version, as they will complain about the
missing hashes. This patch is experimental, and is being pushed into
trunk to obtain test coverage. We may undo it before releasing 1.0.
Now upload or encode methods take a required argument named "convergence" which can be either None, indicating no convergent encryption at all, or a string, which is the "added secret" to be mixed in to the content hash key. If you want traditional convergent encryption behavior, set the added secret to be the empty string.
This patch also renames "content hash key" to "convergent encryption" in a argument names and variable names. (A different and larger renaming is needed in order to clarify that Tahoe supports immutable files which are not encrypted content-hash-key a.k.a. convergent encryption.)
This patch also changes a few unit tests to use non-convergent encryption, because it doesn't matter for what they are testing and non-convergent encryption is slightly faster.
This removes the guess-partial-information attack vector, and reduces
the amount of overhead that we consume with each file. It also introduces
a forwards-compability break: older versions of the code (before the
previous download-time "make hashes optional" patch) will be unable
to read files uploaded by this version, as they will complain about the
missing hashes. This patch is experimental, and is being pushed into
trunk to obtain test coverage. We may undo it before releasing 1.0.
Removing the plaintext hashes can help with the guess-partial-information
attack. This does not affect compatibility, but if and when we actually
remove any hashes from the share, that will introduce a
forwards-compatibility break: tahoe-0.9 will not be able to read such files.
this changes the confwiz to have a look and feel much more consistent
with that of the innosetup installer it is launched within the context
of. this applies, naturally, primarily to windows.
added a test for the simple mkdir-p hack I added yesterday
checks that mkdir-p can create a directory hierarchy, and that resubmitting
a request for the same path yields the existing dir's uri
this adds a t=mkdir-p call to directories (accessed by their uri as
/uri/<URI>?t=mkdir=p&path=/some/path) which returns the uri for a
directory at a specified path before the given uri, regardless of
whether the directory exists or whether intermediate directories
need to be created to satisfy the request.
this is used by the migration code in MV to optimise the work of
path traversal which was other wise done on every file PUT
This is because there exist in the wild computers that are misconfigured so that 'localhost' doesn't resolve to 127.0.0.1. On those computers, using 'localhost' for the nodeurl is a security problem, because the user commonly sends valuable caps to the nodeurl.
motivated simply by a desire to be able to identify 'noderoot' directories for
debugging and testing, the confwiz now writes an 'accountname' files based on
what account was used when the node was configured. this is not currently read
by or used by any code in the system, but helps identify directories from testing.
1. changed the node's exit-on-error behaviour. rather than logging debug and
then delegating to self for _abort_process() instead simply delegate to self
_service_startup_failed(failure) to report failures in the startup deferred
chain. subclasses then have complete control of handling and reporting any
failures in node startup.
2. replace the convoluted wx.PostEvent() glue for posting an event into the
gui thread with the simpler expedient of wx.CallAfter() which is much like
foolscap's eventually() but also thread safe for inducing a call back on the
gui thread.
in certain cases (e.g. the node.pem changed but old .furls are in private/)
the node will abort upon startup. previously it used os.abort() which in these
cases caused the mac gui app to crash on startup with no explanation.
this changes that behaviour from calling os.abort() to calling
node._abort_process(failure) which by default calls os.abort(). this allows
that method to be overridden in subclasses.
the mac app now provides and uses such a subclass of Client, so that failures
are reported to the user in a message dialog before the process exits.
this uses wx.PostEvent() with a custom event type to signal from the reactor
thread into the gui thread.
the confwiz now uses socket.gethostname() if a 'nickname' file doesn't already
exist, and passes that nickname into the 'record_install' method on the backend,
so that the moniker can be recorded in the system table.
when an operation takes 'too long', on 10.4 the user gets a dialog about
the problem with a 'force eject / keep trying' choice. on 10.5 the fuse
system seems to summarily unmount the drive.
this showed up in 10.5 testing because the time to open() a file depended
upon the size of the file, and an 8Mb test file took long enough for the
node to download that the open() call didn't respond within 60s and fuse
spontaneously ejected the drive, quitting the plugin (and cancelling the
download).
this changes the fuse options passed to the plugin by the ui when the
'mount filesystem' window is used. command line users should check out
the '-odaemon_timeout=...' option. this changes the default timeout from
60s to 300s (5min) for ui launched plugins.
this will be addressed in a deeper manner at a later date, with a more
advanced fuse subsystem which can interleave open()/read() with the
actual download of the file, only blocking when data is not downloaded
yet.
Unfinished bits: doc in webapi.txt, test handling of badly formed JSON, return reasonable HTTP response, examination of the effect of this patch on code coverage -- but I'm committing it anyway because MikeB can use it and I'm being called to dinner...
the name 'tahoe' is in the process of being removed from the windows
installer and binaries. this changes the name of the smb service the
confwiz tries to start to 'Allmydata SMB'
this adds an action to the dock menu and to the file menu (when visible)
"Mount Filesystem". This action opens a windows offering the user an
opportunity to select from any of the named *.cap files in their
.tahoe/private directory, and choose a corresponding mount point to mount
that at.
it launches the .app binary as a subprocess with the corresponding command
line arguments to launch the 'tahoe fuse' functionality to mount that file
system. if a NAME.icns file is present in .tahoe/private alonside the
chosen NAME.cap, then that icon will be used when the filesystem is mounted.
this is highly unlikely to work when running from source, since it uses
introspection on sys.executable to find the relavent binary to launch in
order to get the right built .app's 'tahoe fuse' functionality.
it is also relatively likely that the code currently checked in, hence
linked into the build, will have as yet unresolved library dependencies.
it's quite unlikely to work on 10.5 with macfuse 1.3.1 at the moment.
this provides a variety of changes to the macfuse 'tahoefuse' implementation.
most notably it extends the 'tahoe' command available through the mac build
to provide a 'fuse' subcommand, which invokes tahoefuse. this addresses
various aspects of main(argv) handling, sys.argv manipulation to provide an
appropriate command line syntax that meshes with the fuse library's built-
in command line parsing.
this provides a "tahoe fuse [dir_cap_name] [fuse_options] mountpoint"
command, where dir_cap_name is an optional name of a .cap file to be found
in ~/.tahoe/private defaulting to the standard root_dir.cap. fuse_options
if given are passed into the fuse system as its normal command line options
and the mountpoint is checked for existence before launching fuse.
the tahoe 'fuse' command is provided as an additional_command to the tahoe
runner in the case that it's launched from the mac .app binary.
this also includes a tweak to the TFS class which incorporates the ctime
and mtime of files into the tahoe fs model, if available.
runner provides the main point of entry for the 'tahoe' command, and
provides various subcommands by default. this provides a hook whereby
additional subcommands can be added in in other contexts, providing a
simple way to extend the (sub)commands space available through 'tahoe'