This was somewhat sad; the assertion didn't say what path caused the
error or what went wrong. So... silently skip over things that are
neither dirs nor files.
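For illustration, a minimal sketch of the new behaviour; process_directory and process_file are hypothetical stand-ins for whatever the real traversal does:

    import os

    def traverse(path):
        # Previously an assertion fired here, without saying which path
        # or why; now anything that is neither a dir nor a file is skipped.
        for name in os.listdir(path):
            child = os.path.join(path, name)
            if os.path.isdir(child):
                process_directory(child)  # hypothetical
            elif os.path.isfile(child):
                process_file(child)       # hypothetical
            # else: silently skip sockets, fifos, device nodes, etc.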
Because the unit tests on the VirtualZooko buildslave failed when it took 31 seconds for a process to go away.
Perhaps getting a warning message after only 5 seconds instead of 40 seconds is desirable, and we should change the unit tests and set this back to 5, but I don't know exactly how to change the unit tests. Perhaps match this particular warning message about the shutdown taking a while, and allow the code under test to pass if the only stderr it emits is this warning.
The code for validating the share hash tree and the block hash tree has been rewritten to make sure it handles all cases, to share metadata about the file (such as the share hash tree, block hash trees, and UEB) among different share downloads, and not to require hashes to be stored on the server unnecessarily. For example, the roots of the block hash trees are not needed, since they are also the leaves of the share hash tree, and the root of the share hash tree is not needed, since it is also included in the UEB. It also passes the latest tests, including handling corrupted shares well.
ValidatedReadBucketProxy takes a share_hash_tree argument to its constructor, which is a reference to a share hash tree shared by all ValidatedReadBucketProxies for that immutable file download.
ValidatedReadBucketProxy requires the block_size and share_size to be provided in its constructor, and it then uses those to compute the offsets and lengths of blocks when it needs them, instead of reading those values out of the share. The user of ValidatedReadBucketProxy therefore has to have first used a ValidatedExtendedURIProxy to compute those two values from the validated contents of the URI. This pleasingly simplifies the safety analysis: the client knows which span of bytes corresponds to a given block from the validated URI data, rather than from the unvalidated data stored on the storage server. It also simplifies unit testing of the verifier/repairer, because now it doesn't care about the contents of the "share size" and "block size" fields in the share. It does not remove the need for the share data v2 layout, because we still need to store and retrieve the offsets of the fields which come after the share data; therefore we still need to use share data v2 with its 8-byte fields if we want to store share data larger than about 2^32.
Specify which subset of the block hashes and share hashes you need while downloading a particular share. In the future this will hopefully be used to fetch only a subset, for network efficiency, but currently all of them are fetched, regardless of which subset you specify.
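To make the shape of this API concrete, here is a hedged sketch of how the pieces fit together; module paths, argument names, and argument order are approximations, not the exact signatures:

    # module paths approximate; sharenum, rbp, verifycap, total_shares,
    # num_blocks, and blocknum are placeholders
    from allmydata.hashtree import IncompleteHashTree
    from allmydata.immutable.download import (ValidatedExtendedURIProxy,
                                              ValidatedReadBucketProxy)

    # block_size and share_size come from the validated URI extension,
    # never from the untrusted share stored on the server
    ueb = ValidatedExtendedURIProxy(rbp, verifycap)

    # one share hash tree, shared by every ValidatedReadBucketProxy
    # participating in this immutable file download
    share_hash_tree = IncompleteHashTree(total_shares)

    vrbp = ValidatedReadBucketProxy(sharenum, rbp, share_hash_tree,
                                    num_blocks, ueb.block_size,
                                    ueb.share_size)

    # you name which block hashes and share hashes you need for this
    # block; currently all of them are fetched regardless of the subset
    d = vrbp.get_block(blocknum)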
ReadBucketProxy hides the question of whether it has "started" or not (sent a request to the server to get metadata) from its user.
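The pattern is roughly this (a synchronous, hypothetical simplification; the real code is asynchronous and returns deferreds):

    class ReadBucketProxySketch:
        """Hypothetical simplification of the lazy-start behaviour."""
        def __init__(self, remote_bucket):
            self._remote = remote_bucket
            self._offsets = None  # header/offset table, fetched on first use

        def _start_if_needed(self):
            # Callers never ask "have we started?": the first call that
            # needs metadata sends the request; later calls reuse the cache.
            if self._offsets is None:
                self._offsets = self._remote.read_header()  # hypothetical
            return self._offsets

        def get_block(self, blocknum, block_size):
            offsets = self._start_if_needed()
            start = offsets["data"] + blocknum * block_size
            return self._remote.read(start, block_size)    # hypothetical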
Download is optimized to do as few roundtrips and as few requests as possible, hopefully speeding up download a bit.
Because the unit tests on the VirtualZooko buildslave failed when it took 16 seconds for a process to go away.
Perhaps getting a notification after only 5 seconds instead of 20 seconds is desirable, and we should change the unit tests and set this back to 5, but I don't know exactly how to change the unit tests. Perhaps match this particular warning message about the shutdown taking a while, and allow the code under test to pass if the only stderr it emits is this warning.
We're just going to mark unicode in the cli as unsupported for tahoe-lafs-1.3.0. Unicode filenames on the command-line do actually work on some platforms, probably only when the platform encoding is utf-8, but I'm not sure; in any case, for it to be marked as "supported" it would have to work on all platforms, be thoroughly tested, and we would also have to understand why it worked. :-)
Also encode all args passed to urllib as utf-8, because urllib doesn't handle unicode objects.
I'm not sure if it is appropriate to *assume* utf-8 encoding of cli args. Perhaps the Right thing to do is to detect the platform encoding. Any ideas?
This patch is mostly due to François Deppierraz.
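The encoding step amounts to something like this (a Python 2 era sketch; argv_to_utf8 and path_to_url are hypothetical helper names):

    import urllib

    def argv_to_utf8(arg):
        # urllib doesn't handle unicode objects, so encode first
        if isinstance(arg, unicode):
            return arg.encode("utf-8")
        return arg

    def path_to_url(path):
        # e.g. building a webapi URL from a CLI path argument
        return "/uri/%s" % urllib.quote(argv_to_utf8(path), safe="")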
in the recent reconciliation of webopen patches, I wound up adjusting
webopen to 'pass through' the state of the trailing slash on the given
argument to the resultant url passed to the browser. this change
removes the requirement that arguments must be directories, and allows
webopen to be used with files. it also broke the tests that assumed
that webopen would always normalise the url to have a trailing slash.
in fixing the tests, I realised that, IMHO, there's something deeply
awry with the way tahoe handles paths; specifically in the combination
of '/' being the name of the root path within an alias, while a leading
slash on paths, e.g. 'alias:/path', is categorically incorrect. i.e.
'tahoe:' == 'tahoe:/' == '/'
but 'tahoe:/foo' is an invalid path, and must be 'tahoe:foo'
I wound up making the internals of webopen simply spot a 'path' of
'/' and smash it to '', which 'fixes' webopen to match the behaviour
of tahoe's path handling elsewhere, but that special case sort of
points to the weirdness.
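i.e. the special case amounts to something like:

    # the webopen-internal special case described above (hypothetical rendering)
    if path == "/":
        path = ""  # so 'tahoe:/' behaves like 'tahoe:'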
(fwiw, I personally found the fact that the leading / in a path was
disallowed to be weird - I'm just used to seeing paths qualified by
the leading / I guess - so in a debate about normalising path handling
I'd vote to include the /)
I think this is largely attributable to a cleanup patch I'd made
which never got committed upstream somehow, but at any rate various
conflicting changes to webopen had been made. This cleans up the
conflicts therein, and hopefully brings 'tahoe webopen' in line with
other cli commands.
moved the body of webopen out of cli.py into tahoe_webopen.py
made its invocation consistent with the other cli commands, most
notably replacing its 'vdrive path' with the same alias parsing,
allowing usage such as 'tahoe webopen private:Pictures/xti'
this adds a new service to pre-generate RSA key pairs. This allows
the expensive (i.e. slow) key generation to be placed into a process
outside the node, so that the node's reactor will not block when it
needs a key pair, but instead can retrieve one from a pool of
already-generated key pairs in the key-generator service.
it adds a tahoe create-key-generator command which initialises an
empty dir with a tahoe-key-generator.tac file that can then be run
via twistd. it stashes its .pem and portnum for furl stability and
writes the furl of the key gen service to key_generator.furl, also
printing it to stdout.
by placing a key_generator.furl file into the nodes config directory
(e.g. ~/.tahoe) a node will attempt to connect to such a service, and
will use that when creating mutable files (i.e. directories) whenever
possible. if the keygen service is unavailable, it will perform the
key generation locally instead, as before.
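A hedged sketch of that fallback (names are hypothetical; the real code goes through foolscap and the client's key-generation plumbing):

    from twisted.internet import defer

    def get_rsa_keypair(keygen_remote, keysize, generate_locally):
        # prefer the remote key-generator's pool of pre-generated keys
        if keygen_remote is None:
            return defer.succeed(generate_locally(keysize))
        d = keygen_remote.callRemote("get_rsa_key_pair", keysize)
        # if the service is unreachable, fall back to local generation
        d.addErrback(lambda f: generate_locally(keysize))
        return d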
runner provides the main point of entry for the 'tahoe' command, and
provides various subcommands by default. this provides a hook whereby
additional subcommands can be added in other contexts, providing a
simple way to extend the (sub)commands space available through 'tahoe'
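for instance, another context could contribute a subcommand with tables shaped roughly like the ones runner already dispatches on (an entirely hypothetical sketch; the actual hook names may differ):

    from twisted.python import usage

    class FrobOptions(usage.Options):
        """Options for a hypothetical 'tahoe frob' subcommand."""
        synopsis = "Usage: tahoe frob"

    def frob(options):
        # ... do the work of the hypothetical subcommand ...
        return 0

    # entries contributed to runner's subcommand and dispatch tables
    subCommands = [("frob", None, FrobOptions, "Frob something.")]
    dispatch = {"frob": frob}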
base62 encoding fits more information into alphanumeric chars while avoiding the troublesome non-alphanumeric chars of base64 encoding. In particular, this allows us to work around the ext3 "32,000 entries in a directory" limit while retaining the convenient property that the intermediate directory names are leading prefixes of the storage index file names.
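For illustration, a minimal base62 encoder (the alphabet ordering and the fixed-length padding of Tahoe's own base62 module may differ):

    ALPHABET = ("0123456789"
                "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                "abcdefghijklmnopqrstuvwxyz")

    def b2a(data):
        # interpret the bytes as one big-endian integer
        num = 0
        for byte in bytearray(data):
            num = num * 256 + byte
        # repeatedly divide by 62, collecting digits
        digits = []
        while num:
            num, rem = divmod(num, 62)
            digits.append(ALPHABET[rem])
        return "".join(reversed(digits)) or ALPHABET[0]

With 62 alphanumeric characters available per position, a short prefix of the encoded storage index can name the intermediate directory while remaining a leading prefix of the full storage index file name.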