The code for validating the share hash tree and the block hash tree has been rewritten to handle all the cases, to share metadata about the file (such as the share hash tree, the block hash trees, and the UEB) among the different share downloads, and to avoid requiring hashes to be stored on the server unnecessarily: the roots of the block hash trees are not needed there (they are also the leaves of the share hash tree), and the root of the share hash tree is not needed there (it is included in the UEB). It also passes the latest tests, including the ones that exercise handling of corrupted shares.
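As a rough sketch of the sharing, assuming the IncompleteHashTree class from allmydata.hashtree and a UEB key named "share_root_hash" (the key name and the numbers here are from memory, not quoted from the patch):

  from allmydata import hashtree

  total_shares = 10  # hypothetical N, taken from the validated URI
  # stand-in for the real, already-validated UEB contents:
  validated_ueb = {"share_root_hash": "\x00" * 32}

  # One share hash tree per file download, shared by every share download.
  # Its root comes from the validated UEB, so it never has to be fetched
  # from (or trusted to) any storage server.
  share_hash_tree = hashtree.IncompleteHashTree(total_shares)
  share_hash_tree.set_hashes({0: validated_ueb["share_root_hash"]})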
ValidatedReadBucketProxy takes a share_hash_tree argument to its constructor, which is a reference to a share hash tree shared by all ValidatedReadBucketProxies for that immutable file download.
ValidatedReadBucketProxy requires the block_size and share_size to be provided to its constructor, and it then uses those to compute the offsets and lengths of blocks when it needs them, instead of reading those values out of the share. The user of ValidatedReadBucketProxy therefore has to have first used a ValidatedExtendedURIProxy to compute those two values from the validated contents of the URI. This pleasingly simplifies the safety analysis: the client knows which span of bytes corresponds to a given block from the validated URI data, rather than from unvalidated data stored on the storage server. It also simplifies unit testing of the verifier/repairer, because now they don't care about the contents of the "share size" and "block size" fields in the share. It does not remove the need for the share data v2 layout, because we still need to store and retrieve the offsets of the fields which come after the share data, so we still need the v2 layout with its 8-byte fields if we want to store share data larger than about 2^32 bytes.
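A tiny illustration of the offset arithmetic this enables (a hypothetical helper, not the actual method on ValidatedReadBucketProxy):

  def block_span(blocknum, block_size, share_size):
      # Which bytes of the share hold the given block, computed entirely
      # from block_size and share_size (which came from the validated
      # URI/UEB), never from fields stored in the unvalidated share.
      offset = blocknum * block_size
      length = min(block_size, share_size - offset)
      return (offset, length)

  # e.g. a 10000-byte share with 4096-byte blocks:
  # block_span(2, 4096, 10000) == (8192, 1808)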
Specify which subset of the block hashes and share hashes you need while downloading a particular share. In the future this will hopefully be used to fetch only a subset, for network efficiency, but currently all of them are fetched, regardless of which subset you specify.
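For example (a sketch assuming the needed_hashes() method of allmydata.hashtree.IncompleteHashTree; the numbers are made up):

  from allmydata import hashtree

  num_blocks = 12  # hypothetical number of blocks in this share
  block_hash_tree = hashtree.IncompleteHashTree(num_blocks)

  # Which hash-tree nodes are still needed in order to validate block 7?
  # Today the downloader fetches all of the hashes anyway; a set like this
  # is what a future version could use to fetch only the missing ones.
  needed = block_hash_tree.needed_hashes(7, include_leaf=True)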
ReadBucketProxy hides the question of whether it has "started" or not (sent a request to the server to get metadata) from its user.
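The pattern looks roughly like this (a simplified sketch, not the real ReadBucketProxy; for one thing it doesn't coalesce concurrent first reads):

  from twisted.internet import defer

  class LazyReader:
      def __init__(self, fetch_offsets, read_block):
          self._fetch_offsets = fetch_offsets  # function returning a Deferred
          self._read_block = read_block
          self._offsets = None

      def _start_if_necessary(self):
          # Callers never ask "have we started?"; every read funnels through
          # here.  The offset-table request is sent only the first time, and
          # the parsed result is cached afterwards.
          if self._offsets is not None:
              return defer.succeed(self._offsets)
          d = self._fetch_offsets()
          def _remember(offsets):
              self._offsets = offsets
              return offsets
          d.addCallback(_remember)
          return d

      def get_block(self, blocknum):
          d = self._start_if_necessary()
          d.addCallback(lambda offsets: self._read_block(offsets, blocknum))
          return d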
Download is optimized to do as few roundtrips and as few requests as possible, hopefully speeding up download a bit.
This does raise the question of whether there is any point to this test, since I apparently don't know what the answer *should* be, and whenever one of the buildbots fails I redefine success.
But, I'm about to commit a bunch of patches to implement checker, verifier, and repairer as well as to refactor downloader, and I would really like to know if these patches *increase* the number of reads required even higher than it currently is.
To fix this error from the Windows buildslave:
[ERROR]: allmydata.test.test_immutable.Test.test_download_from_only_3_remaining_shares
Traceback (most recent call last):
File "C:\Documents and Settings\buildslave\windows-native-tahoe\windows\build\src\allmydata\immutable\download.py", line 135, in _bad
raise NotEnoughSharesError("ran out of peers, last error was %s" % (f,))
allmydata.interfaces.NotEnoughSharesError: ran out of peers, last error was [Failure instance: Traceback: <class 'struct.error'>: unpack requires a string argument of length 4
c:\documents and settings\buildslave\windows-native-tahoe\windows\build\support\lib\site-packages\foolscap-0.3.2-py2.5.egg\foolscap\call.py:667:_done
c:\documents and settings\buildslave\windows-native-tahoe\windows\build\support\lib\site-packages\foolscap-0.3.2-py2.5.egg\foolscap\call.py:53:complete
c:\Python25\lib\site-packages\twisted\internet\defer.py:239:callback
c:\Python25\lib\site-packages\twisted\internet\defer.py:304:_startRunCallbacks
--- <exception caught here> ---
c:\Python25\lib\site-packages\twisted\internet\defer.py:317:_runCallbacks
C:\Documents and Settings\buildslave\windows-native-tahoe\windows\build\src\allmydata\immutable\layout.py:374:_got_length
C:\Python25\lib\struct.py:87:unpack
]
===============================================================================
One of the instances of the name accidentally didn't get changed, and pyflakes noticed. The new downloader/checker/verifier/repairer unit tests would also have noticed, but those tests haven't been rolled into a patch and applied to this repo yet...
Nathan Wilcox observed that the storage server can rely on the size of the share file combined with the count of leases to unambiguously identify the location of the leases. This means that it can hold any size share data, even though the field nominally used to hold the size of the share data is only 32 bits wide.
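A sketch of the arithmetic, assuming the v1 immutable share layout as I remember it (a 12-byte header of three 4-byte fields -- version, data size, lease count -- with fixed 72-byte lease records packed at the end of the file); the constants are assumptions for illustration:

  import os, struct

  HEADER_SIZE = 12   # version + data-size + lease-count, 4 bytes each
  LEASE_SIZE = 72    # size of one lease record

  def share_data_size(sharefilename):
      filesize = os.path.getsize(sharefilename)
      header = open(sharefilename, "rb").read(HEADER_SIZE)
      (version, size_field, num_leases) = struct.unpack(">LLL", header)
      # The true size of the share data follows from the file size and the
      # lease count; the 32-bit size_field is never consulted, so shares
      # larger than 2^32 bytes are fine.
      return filesize - HEADER_SIZE - (num_leases * LEASE_SIZE)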
With this patch, the storage server still writes the "size of the share data" field (just in case the server gets downgraded to an earlier version which requires that field, or the share file gets moved to another server which is of an earlier vintage), but it doesn't use it. Also, with this patch, the server no longer rejects requests to write shares which are >= 2^32 bytes in size, and it no longer rejects attempts to read such shares.
This fixes http://allmydata.org/trac/tahoe/ticket/346 (increase share-size field to 8 bytes, remove 12GiB filesize limit), although there remains an open question of how clients learn that a given server can handle large shares (probably by using the new versioning scheme).
Note that share size is also limited by another factor -- how big a file we can store on the local filesystem of the server. Currently allmydata.com typically uses ext3 with (I think) a 4 KiB block size, which means the largest file is about 2 TiB. Also, the hard drives themselves are only 1 TB, so the largest share is definitely slightly less than 1 TB, which means (when K == 3) that the largest file is less than 3 TB.
This patch also refactors the creation of new sharefiles so that only a single fopen() is used.
This patch also helps with the unit-testing of repairer, since formerly it was unclear what repairer should expect to find if the "share data size" field was corrupted (some corruptions would have no effect, others would cause failure to download). Now it is clear that repairer is not required to notice if this field is corrupted since it has no effect on download. :-)
This makes it easy for client code to catch ServerFailures without also catching exceptions arising from client-side code.
See also:
http://foolscap.lothar.com/trac/ticket/105 # make it easy to distinguish server-side failures/exceptions from client-side
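In client code this looks roughly like the following (a sketch: "ServerFailure" stands for whatever exception class wraps the server-side failures, and the stand-in class and names here are purely illustrative):

  class ServerFailure(Exception):
      pass  # stand-in for the real wrapper exception class

  def _maybe_ignore_this_server(f):
      # Trap only failures that happened on the server side.  A bug in our
      # own (client-side) callback code is not trapped here, so it still
      # propagates and gets logged instead of being blamed on the server.
      f.trap(ServerFailure)
      return None  # e.g. mark this server as unusable and move on

  # rref and storage_index are assumed to be in scope: a foolscap
  # RemoteReference to a storage server, and the file's storage index
  d = rref.callRemote("get_buckets", storage_index)
  d.addErrback(_maybe_ignore_this_server)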
There are a lot of different ways that a share could be corrupted, or that attempting to download it might fail. These tests attempt to exercise many of those ways and require the checker/verifier/repairer to handle each kind of failure well.
Because the unit tests on the VirtualZooko buildslave failed when it took 16 seconds for a process to go away.
Perhaps getting notification after only 5 seconds instead of 20 is desirable, and we should change the unit tests and set this back to 5, but I don't know exactly how to change the unit tests. Perhaps we could match this particular warning message about the shutdown taking a while, and let the code under test pass if that warning is the only thing it emits on stderr.
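Something like this helper is what I have in mind (hypothetical; the warning text below is a guess, not the actual message):

  SHUTDOWN_WARNING = "process did not exit promptly"  # assumed wording

  def stderr_is_acceptable(stderr_text):
      # pass if the only thing written to stderr was the shutdown warning
      lines = [l for l in stderr_text.splitlines() if l.strip()]
      return all(SHUTDOWN_WARNING in l for l in lines)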
All issues which are relevant to users of v1.1, v1.2, or v1.3 go in known_issues.txt. All issues which are relevant to users of v1.0 go in historical/historical_known_issues.txt.
We're just going to mark unicode in the cli as unsupported for tahoe-lafs-1.3.0. Unicode filenames on the command line do actually work on some platforms, probably only when the platform encoding is utf-8, but I'm not sure; in any case, for this to be marked as "supported" it would have to work on all platforms, be thoroughly tested, and we would also have to understand why it works. :-)
Also encode all args to urllib as utf-8 because urllib doesn't handle unicode objects.
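For example (Python 2's urllib.quote() fails on a unicode object containing non-ASCII characters, so the utf-8 bytes have to be produced first):

  import urllib

  filename = u"caf\u00e9.txt"          # u"café.txt"
  quoted = urllib.quote(filename.encode("utf-8"))
  # quoted == "caf%C3%A9.txt", safe to splice into the request URL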
I'm not sure if it is appropriate to *assume* utf-8 encoding of cli args. Perhaps the Right thing to do is to detect the platform encoding. Any ideas?
This patch is mostly due to François Deppierraz.