A test failed on draco (MacPPC) because it took 49 seconds to get around to running the test, and the node had already stopped itself when the hotline file was 40 seconds old.
The application name is currently hardcoded in setup.py to be 'allmydata-tahoe'. Ticket #556 is to make it configurable by a runtime command-line argument to setup.py ("--appname=foo"), but I suddenly wondered whether we really wanted that, and at the same time realized that we don't need it for the tahoe-1.3.0 release, so this patch just hardcodes it in setup.py.
setup.py inspects a file named 'src/allmydata/_appname.py', asserts that it contains the string "__appname__ = 'allmydata-tahoe'", and creates it if it isn't already present. src/allmydata/__init__.py imports _appname and reads __appname__ from it. The rest of the Python code imports allmydata and inspects "allmydata.__appname__", although in practice every use goes through "allmydata.__full_version__" instead, where "allmydata.__full_version__" is created in src/allmydata/__init__.py to be:
__full_version__ = __appname__ + '-' + str(__version__)
All the code that emits an "application version string" when describing what version of a protocol it supports (introducer server, storage server, upload helper), or when describing itself in general (introducer client), uses allmydata.__full_version__.
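For illustration, a minimal sketch of how those pieces fit together; the module that __version__ is imported from is an assumption, and the setup.py code that generates _appname.py is omitted:

    # contents that setup.py writes into src/allmydata/_appname.py:
    #   __appname__ = 'allmydata-tahoe'

    # and then, in src/allmydata/__init__.py:
    from allmydata._appname import __appname__
    from allmydata._version import __version__   # assumed location of __version__

    # the string announced by the introducer, storage server, helper, etc.:
    __full_version__ = __appname__ + '-' + str(__version__)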
This fixes ticket #556 at least well enough for tahoe-1.3.0 release.
Obviously requiring the code under test to perform within some limit isn't very meaningful if we raise the limit whenever the test goes outside of it.
But I still don't want to remove the test code which measures how many writes (and, elsewhere, how many reads) a client does in order to fulfill these duties.
Let this number -- now 20 -- stand as an approximation of how inefficient our code is, measured against my mental model of how many operations would actually be optimal for these duties.
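As a hedged illustration of the kind of check being kept (the count_writes() instrumentation and the repair() call are hypothetical stand-ins for whatever the real test uses):

    MAX_WRITES = 20  # the current approximation described above

    def test_repair_does_not_write_too_much(self):
        before = self.count_writes()        # hypothetical instrumentation
        d = self.repair()                   # hypothetical operation under test
        def _check(ignored):
            writes = self.count_writes() - before
            self.failUnless(writes <= MAX_WRITES,
                            "repair issued %d writes; the limit is %d"
                            % (writes, MAX_WRITES))
        d.addCallback(_check)
        return d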
This is important, because if the repairer doesn't completely repair all kinds of corruption (as the current one doesn't), then the subsequent tests get messed up: they assume that the shares were uncorrupted when they set about corrupting them.
This means that the tests still work if you are executing them from a CWD other than the src dir -- *if* the "bin/tahoe" is found at os.path.dirname(os.path.dirname(os.path.dirname(allmydata.__file__))).
If no file is found at that location, then just skip the tests of executing the "tahoe" executable, because we don't want to accidentally run those tests against an executable from a different version of tahoe.
This is necessary because loading allmydata code now depends on PYTHONPATH manipulation which is done in the "bin/tahoe" script. Unfortunately it makes test_runner slower since it launches and waits for many subprocesses.
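A sketch of the location check and skip behavior described above (the mixin name and the skip message are illustrative, not the exact test_runner code):

    import os
    import allmydata
    from twisted.trial import unittest

    # where we expect to find "bin/tahoe", relative to the imported allmydata package
    srcdir = os.path.dirname(os.path.dirname(os.path.dirname(allmydata.__file__)))
    bintahoe = os.path.join(srcdir, "bin", "tahoe")

    class RunBinTahoeMixin:
        def skip_if_cannot_run_bintahoe(self):
            if not os.path.exists(bintahoe):
                # don't accidentally test an executable from some other version of tahoe
                raise unittest.SkipTest("bin/tahoe was not found next to this source tree")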
This reverses some, but not all, of the changes that were committed in the following set of patches.
rolling back:
Sun Jan 18 09:54:30 MST 2009 toby.murray
* add 'web.ambient_upload_authority' as a paramater to tahoe.cfg
M ./src/allmydata/client.py -1 +3
M ./src/allmydata/test/common.py -7 +9
A ./src/allmydata/test/test_ambient_upload_authority.py
M ./src/allmydata/web/root.py +12
M ./src/allmydata/webish.py -1 +4
Sun Jan 18 09:56:08 MST 2009 zooko@zooko.com
* trivial: whitespace
I ran emacs's "M-x whitespace-cleanup" on the files that Toby's recent patch had touched that had trailing whitespace on some lines.
M ./src/allmydata/test/test_ambient_upload_authority.py -9 +8
M ./src/allmydata/web/root.py -2 +1
M ./src/allmydata/webish.py -2 +1
Mon Jan 19 14:16:19 MST 2009 zooko@zooko.com
* trivial: remove unused import noticed by pyflakes
M ./src/allmydata/test/test_ambient_upload_authority.py -1
Mon Jan 19 21:38:35 MST 2009 toby.murray
* doc: describe web.ambient_upload_authority
M ./docs/configuration.txt +14
M ./docs/frontends/webapi.txt +11
Mon Jan 19 21:38:57 MST 2009 zooko@zooko.com
* doc: add Toby Murray to the CREDITS
M ./CREDITS +4
This implements an immutable repairer by marrying a CiphertextDownloader to a CHKUploader. It extends the IDownloadTarget interface so that the downloader can provide some metadata that the uploader requires.
The processing is incremental -- it uploads the first segments before it finishes downloading the whole file. This is necessary so that you can repair large files without running out of RAM or using a temporary file on the repairer.
It requires only a verifycap, not a readcap. That is: it doesn't need or use the decryption key, only the integrity check codes.
There are several tests marked TODO and several instances of XXX in the source code. I intend to open tickets to document further improvements to functionality and testing, but the current version is probably good enough for Tahoe-1.3.0.
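In rough outline, the marriage looks something like the following sketch; the module paths, constructor arguments, and start() methods are simplified assumptions rather than the real signatures:

    # assumed module locations for the two halves of the repairer
    from allmydata.immutable.upload import CHKUploader
    from allmydata.immutable.download import CiphertextDownloader

    class Repairer:
        def __init__(self, verifycap, storage_broker):
            self._verifycap = verifycap            # no readcap/decryption key needed
            self._storage_broker = storage_broker

        def start(self):
            # The uploader doubles as the IDownloadTarget, so each verified
            # segment is re-encoded and uploaded before the next is fetched,
            # keeping memory usage independent of file size.
            uploader = CHKUploader(self._storage_broker)          # assumed signature
            downloader = CiphertextDownloader(self._verifycap,
                                              target=uploader)    # assumed signature
            return downloader.start()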
Rename "downloadable" to "target".
Rename "u" to "v" in FileDownloader.__init__().
Rename "_uri" to "_verifycap" in FileDownloader.
Rename "_downloadable" to "_target" in FileDownloader.
Rename "FileDownloader" to "CiphertextDownloader".
FileDownloader takes a verify cap and produces ciphertext, instead of taking a read cap and producing plaintext.
FileDownloader does all integrity checking including the mandatory ciphertext hash tree and the optional ciphertext flat hash, rather than expecting its target to do some of that checking.
Rename immutable.download.Output to immutable.download.DecryptingOutput. An instance of DecryptingOutput can be passed to FileDownloader to use as the latter's target. Ciphertext pushed to the DecryptingOutput is decrypted and then pushed to *its* target.
DecryptingOutput satisfies the IConsumer interface, and if its target also satisfies IConsumer, then it forwards any pause/unpause signals to its producer (which is the FileDownloader); a sketch of this forwarding follows these change notes.
This patch also changes some logging code to use the new logging mixin class.
Check integrity of a segment and decrypt the segment one block-sized buffer at a time instead of copying the buffers together into one segment-sized buffer (reduces peak memory usage, I think, and is probably a tad faster/less CPU, depending on your encoding parameters).
Refactor FileDownloader so that the processing of full segments and of the tail segment shares as much code as possible.
FileDownloader and FileNode take caps as instances of URI (Python objects), not as strings.
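As a hedged sketch of the pause/unpause forwarding mentioned above (the decryptor argument and any method names beyond the IConsumer interface are assumptions):

    from zope.interface import implementer
    from twisted.internet.interfaces import IConsumer

    # Hypothetical sketch: DecryptingOutput sits between the CiphertextDownloader
    # (its producer) and the eventual target, decrypting data as it flows through
    # and relaying flow-control signals back upstream.
    @implementer(IConsumer)
    class DecryptingOutput:
        def __init__(self, target, decryptor):
            self._target = target          # may itself provide IConsumer
            self._decryptor = decryptor    # e.g. an AES counter-mode cipher object
            self._producer = None

        def registerProducer(self, producer, streaming):
            self._producer = producer      # the CiphertextDownloader
            if IConsumer.providedBy(self._target):
                # let the target apply backpressure through us
                self._target.registerProducer(self, streaming)

        def unregisterProducer(self):
            if IConsumer.providedBy(self._target):
                self._target.unregisterProducer()

        def write(self, crypttext):
            self._target.write(self._decryptor.process(crypttext))

        # pause/unpause requests from the target are forwarded to the producer
        def pauseProducing(self):
            self._producer.pauseProducing()

        def resumeProducing(self):
            self._producer.resumeProducing()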
This was somewhat sad; the assertion didn't say what path caused the error or what went wrong. So... silently skip over things that are neither dirs nor files.
pyflakes pointed out to me that I had committed some code that is untested, since it uses an undefined name. This patch exercises that code -- the validation of the ciphertext hash tree -- by corrupting some of the share files in a very specific way, and also fixes the bug.
This makes Uploader take an EncryptedUploadable object instead of an Uploadable object. I also changed it to return a verify cap instead of a tuple of the bits of data that one finds in a verify cap.
This will facilitate hooking together an Uploader and a Downloader to make a Repairer.
Also move offloaded.py into src/allmydata/immutable/.
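For example, the return-value change might look roughly like this sketch (the attribute names on the upload results object are assumptions; CHKFileVerifierURI is the verify-cap class in allmydata.uri):

    from allmydata import uri

    # Sketch only: build and return a verify-cap object from the uploader's
    # results instead of returning a bare tuple of these fields.
    def _make_verifycap(res):
        return uri.CHKFileVerifierURI(storage_index=res.storage_index,
                                      uri_extension_hash=res.uri_extension_hash,
                                      needed_shares=res.needed_shares,
                                      total_shares=res.total_shares,
                                      size=res.size)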
Maybe it already got one of the corrupted hashes from a different server and doesn't double-check that the hash from every server is correct, or maybe it is some other problem. In any case I'm marking this as TODO, because an even better (pickier) verifier is less urgent than the repairer.
New checker and verifier use the new download class. They are robust against various sorts of failures or corruption. They return detailed results explaining what they learned about your immutable files. Some grotesque sorts of corruption are not properly handled yet, and those ones are marked as TODO or commented-out in the unit tests.
There is also a repairer module in this patch with the beginnings of a repairer in it. That repairer is mostly just the interface to the outside world -- the core operation of actually reconstructing the missing data blocks and uploading them is not in there yet.
This patch also refactors the unit tests in test_immutable so that the handling of each kind of corruption is reported as passing or failing separately, can be separately TODO'ified, etc. The unit tests are also improved in various ways to require more of the code under test or to stop requiring unreasonable things of it. :-)
The code for validating the share hash tree and the block hash tree has been rewritten to make sure it handles all cases, to share metadata about the file (such as the share hash tree, block hash trees, and UEB) among the different share downloads, and to avoid requiring hashes to be stored on the server unnecessarily, such as the roots of the block hash trees (not needed since they are also the leaves of the share hash tree) and the root of the share hash tree (not needed since it is also included in the UEB). It also passes the latest tests, including those for handling corrupted shares well.
ValidatedReadBucketProxy takes a share_hash_tree argument to its constructor, which is a reference to a share hash tree shared by all ValidatedReadBucketProxies for that immutable file download.
ValidatedReadBucketProxy requires the block_size and share_size to be provided to its constructor, and it then uses them to compute the offsets and lengths of blocks when it needs them, instead of reading those values out of the share. The user of ValidatedReadBucketProxy therefore has to have first used a ValidatedExtendedURIProxy to compute those two values from the validated contents of the URI. This pleasingly simplifies the safety analysis: the client knows which span of bytes corresponds to a given block from the validated URI data, rather than from the unvalidated data stored on the storage server. It also simplifies unit testing of the verifier/repairer, because now the code doesn't care about the contents of the "share size" and "block size" fields in the share. It does not relieve the need for the share data v2 layout, because we still need to store and retrieve the offsets of the fields which come after the share data; therefore we still need to use share data v2, with its 8-byte fields, if we want to store share data larger than about 2^32 bytes.
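To illustrate the offset arithmetic, a hedged sketch (the real method in ValidatedReadBucketProxy may differ in detail):

    # Compute where a block lives inside the share data using only the two
    # values derived from the validated URI extension block, never the
    # unvalidated "block size"/"share size" fields stored in the share itself.
    def block_offset_and_length(blocknum, block_size, share_size):
        offset = blocknum * block_size
        length = min(block_size, share_size - offset)  # the tail block may be shorter
        assert 0 < length <= block_size, (blocknum, block_size, share_size)
        return (offset, length)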
Specify which subset of the block hashes and share hashes you need while downloading a particular share. In the future this will hopefully be used to fetch only a subset, for network efficiency, but currently all of them are fetched, regardless of which subset you specify.
ReadBucketProxy hides the question of whether it has "started" or not (sent a request to the server to get metadata) from its user.
Download is optimized to do as few roundtrips and as few requests as possible, hopefully speeding up download a bit.
This does raise the question of whether there is any point to this test, since I apparently don't know what the answer *should* be, and whenever one of the buildbots fails then I redefine success.
But, I'm about to commit a bunch of patches to implement checker, verifier, and repairer as well as to refactor downloader, and I would really like to know if these patches *increase* the number of reads required even higher than it currently is.
Nathan Wilcox observed that the storage server can rely on the size of the share file combined with the count of leases to unambiguously identify the location of the leases. This means that it can hold any size share data, even though the field nominally used to hold the size of the share data is only 32 bits wide.
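A sketch of that observation (the 12-byte header and 72-byte lease record are my reading of the v1 immutable share file layout; treat the exact numbers as assumptions):

    import os

    HEADER_SIZE = 12  # assumed: version, "share data size", and lease count (4 bytes each)
    LEASE_SIZE = 72   # assumed: owner number, renew/cancel secrets, expiration time

    def share_data_size(share_filename, num_leases):
        filesize = os.path.getsize(share_filename)
        # The lease records sit at the end of the file, so the share data is
        # everything between the header and the first lease, no matter what
        # the 32-bit "share data size" header field claims.
        return filesize - HEADER_SIZE - (num_leases * LEASE_SIZE)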
With this patch, the storage server still writes the "size of the share data" field (just in case the server gets downgraded to an earlier version which requires that field, or the share file gets moved to another server which is of an earlier vintage), but it doesn't use it. Also, with this patch, the server no longer rejects requests to write shares which are >= 2^32 bytes in size, and it no longer rejects attempts to read such shares.
This fixes http://allmydata.org/trac/tahoe/ticket/346 (increase share-size field to 8 bytes, remove 12GiB filesize limit), although there remains open a question of how clients know that a given server can handle large shares (by using the new versioning scheme, probably).
Note that share size is also limited by another factor: how big a file we can store on the local filesystem on the server. Currently allmydata.com typically uses ext3, and I think we typically have a block size of 4 KiB, which means that the largest file is about 2 TiB. Also, the hard drives themselves are only 1 TB, so the largest share is definitely slightly less than 1 TB, which means that (when K == 3) the largest file is less than 3 TB.
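As a rough, hedged sanity check of those figures (the ext3 per-file limit is approximate):

    TiB = 2**40
    TB = 10**12
    ext3_max_file = 2 * TiB                  # approx. ext3 per-file limit at 4 KiB blocks
    max_share = min(ext3_max_file, 1 * TB)   # 1 TB drives cap each share below 1 TB
    k = 3                                    # shares needed to reconstruct a file
    max_file = k * max_share                 # just under 3 TB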
This patch also refactors the creation of new sharefiles so that only a single fopen() is used.
This patch also helps with the unit-testing of repairer, since formerly it was unclear what repairer should expect to find if the "share data size" field was corrupted (some corruptions would have no effect, others would cause failure to download). Now it is clear that repairer is not required to notice if this field is corrupted since it has no effect on download. :-)
There are a lot of different ways that a share could be corrupted, or that attempting to download it might fail. These tests attempt to exercise many of those ways and require the checker/verifier/repairer to handle each kind of failure well.
In an ancient version of directories, we needed a MAC on each entry. In modern times, the entire dirnode comes with a digital signature, so the MAC on each entry is redundant.
With this patch, we no longer check those MACs when reading directories, but we still produce them so that older readers will accept directories that we write.
I get confused about whether a given argument or return value is a uri-as-string or uri-as-object. This patch adds a lot of assertions that it is one or the other, and also changes CheckerResults to take objects not strings.
In the future, I hope that we generally use Python objects except when importing into or exporting from the Python interpreter e.g. over the wire, the UI, or a stored file.
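A hedged example of the style of assertion being added (uri.from_string() and the IURI interface are existing allmydata names; the helper itself is hypothetical):

    from allmydata import uri
    from allmydata.interfaces import IURI

    # Hypothetical helper: accept a string only at the boundary and insist on
    # a URI object everywhere else.
    def to_uri_object(u):
        if isinstance(u, str):
            u = uri.from_string(u)     # convert at the edge (wire, UI, stored file)
        assert IURI.providedBy(u), u   # internally, require the object form
        return u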