This walks slowly through all shares, examining their leases, deciding which
are still valid and which have expired. Once enabled, it will then remove the
expired leases, and delete shares which no longer have any valid leases. Note
that there is not yet a tahoe.cfg option to enable lease-deletion: the
current code is read-only. A subsequent patch will add a tahoe.cfg knob to
control this, as well as docs. Some other minor items included in this patch:
* "tahoe debug dump-share" has a new --leases-only flag
* the storage sharefile/leaseinfo code is cleaned up
* the storage web status page (/storage) has more info and more test coverage
* the space-left measurement on OS X should be more accurate (it was off by 2048x):
  it now uses the statvfs f_frsize field instead of f_bsize (see the sketch below)
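For reference, a minimal sketch of that measurement (the function name is illustrative;
this is not the patch's code), using os.statvfs and the f_frsize field:

    import os

    def space_left(path):
        # statvfs counts blocks in units of the fragment size; on OS X,
        # f_bsize is the preferred I/O size (much larger) while f_frsize is
        # the real allocation unit, so multiplying by f_frsize gives the
        # correct number of bytes.
        s = os.statvfs(path)
        return s.f_frsize * s.f_bavail   # bytes available to non-root users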
This reverses some, but not all, of the changes that were committed in the following set of patches.
rolling back:
Sun Jan 18 09:54:30 MST 2009 toby.murray
* add 'web.ambient_upload_authority' as a paramater to tahoe.cfg
M ./src/allmydata/client.py -1 +3
M ./src/allmydata/test/common.py -7 +9
A ./src/allmydata/test/test_ambient_upload_authority.py
M ./src/allmydata/web/root.py +12
M ./src/allmydata/webish.py -1 +4
Sun Jan 18 09:56:08 MST 2009 zooko@zooko.com
* trivial: whitespace
I ran emacs's "M-x whitespace-cleanup" on the files that Toby's recent patch had touched that had trailing whitespace on some lines.
M ./src/allmydata/test/test_ambient_upload_authority.py -9 +8
M ./src/allmydata/web/root.py -2 +1
M ./src/allmydata/webish.py -2 +1
Mon Jan 19 14:16:19 MST 2009 zooko@zooko.com
* trivial: remove unused import noticed by pyflakes
M ./src/allmydata/test/test_ambient_upload_authority.py -1
Mon Jan 19 21:38:35 MST 2009 toby.murray
* doc: describe web.ambient_upload_authority
M ./docs/configuration.txt +14
M ./docs/frontends/webapi.txt +11
Mon Jan 19 21:38:57 MST 2009 zooko@zooko.com
* doc: add Toby Murray to the CREDITS
M ./CREDITS +4
Because Brian argues that the file contains a description of the wui as well as of the wapi, and because the name "webapi.txt" might be more obvious to the untrained eye.
It used to be a map from shnum to a string saying "placed this share on XYZ server". The new definition is more in keeping with the "sharemap" object that results from immutable file checking and repair, and it is more useful to the repairer, which is a consumer of immutable upload results.
The Tahoe webapi was failing on HTTP requests containing a partial Range header.
This change allows movie players like mplayer to seek within movie files stored in
Tahoe. Associated tests for the GET and HEAD methods are also included.
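By way of illustration only (this is not the code from the patch), handling the common
single-range form of the header, which is what a seeking player typically sends, might
look like:

    import re

    def parse_range(header, size):
        # Accept the single-range forms "bytes=first-last", "bytes=first-"
        # and "bytes=-suffix_length".  Returns (first, last) byte offsets,
        # or None if the header is absent, malformed, or unsupported.
        m = re.match(r"bytes=(\d*)-(\d*)$", (header or "").strip())
        if not m or (m.group(1) == "" and m.group(2) == ""):
            return None
        first, last = m.group(1), m.group(2)
        if first == "":
            first = max(0, size - int(last))   # suffix range: the final N bytes
            last = size - 1
        else:
            first = int(first)
            last = int(last) if last != "" else size - 1
        if first >= size or first > last:
            return None
        return first, min(last, size - 1)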
Refactor into a class the logic of asking each server in turn until one of them gives an answer
that validates. It is called ValidatedThingObtainer.
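The shape of that loop, sketched here synchronously and with made-up names (the real
class drives the same logic through Deferreds):

    def obtain_validated_thing(servers, fetch, validate):
        # Ask each server in turn.  'fetch' retrieves a candidate answer from
        # one server; 'validate' returns True only if that answer is
        # consistent.  The first answer that validates wins.
        for server in servers:
            try:
                candidate = fetch(server)
            except Exception:
                continue        # no usable data from this server; try the next
            if validate(candidate):
                return candidate
        raise RuntimeError("no server returned an answer that validates")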
Refactor the downloading and verification of the URI Extension Block into a class named
ValidatedExtendedURIProxy.
The new logic of validating UEBs is minimalist: it doesn't require the UEB to contain any
unnecessary information, but of course it still accepts such information for backwards
compatibility (so that this new download code is able to download files uploaded with old, and
for that matter with current, upload code).
The new logic of validating UEBs follows the practice of doing all validation up front. This
practice advises one to isolate the validation of incoming data into one place, so that all of
the rest of the code can assume only valid data.
If any redundant information is present in the UEB+URI, the new code cross-checks and asserts
that it is all fully consistent. This closes some issues where the uploader could have
uploaded inconsistent redundant data, which would probably have caused the old downloader to
simply reject that download after getting a Python exception, but perhaps could have caused
greater harm to the old downloader.
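A toy illustration of that cross-checking (the field names, and the assumption that the
URI carries the file size, are illustrative rather than a description of the real UEB
contents):

    def validate_ueb(ueb, size_from_uri):
        # Validate everything up front, so the rest of the download code can
        # assume the data is consistent.
        if ueb["size"] != size_from_uri:
            raise ValueError("UEB size disagrees with the size in the URI")
        if "num_segments" in ueb:
            # redundant field: must agree with what we can recompute ourselves
            expected = (ueb["size"] + ueb["segment_size"] - 1) // ueb["segment_size"]
            if ueb["num_segments"] != expected:
                raise ValueError("inconsistent num_segments in the UEB")
        return ueb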
I removed the notion of selecting an erasure codec from codec.py based on the string that was
passed in the UEB. Currently "crs" is the only such string that works, so
"_assert(codec_name == 'crs')" is simpler and more explicit. This is also in keeping with the
"validate up front" strategy -- now if someone sets a different string than "crs" in their UEB,
the downloader will reject the download in the "validate this UEB" function instead of in a
separate "select the codec instance" function.
I removed the code to check plaintext hashes and plaintext Merkle Trees. Uploaders do not
produce this information any more (since it potentially exposes confidential information about
the file), and the unit tests for it were disabled. The downloader before this patch
would check the plaintext hash or plaintext Merkle Tree if they were present, but would
not complain if they were absent. The new downloader in this patch complains if they are present and doesn't
check them. (We might in the future re-introduce such hashes over the plaintext, but encrypt
the hashes which are stored in the UEB to preserve confidentiality. This would be a double-
check on the correctness of our own source code -- the current Merkle Tree over the ciphertext
is already sufficient to guarantee the integrity of the download unless there is a bug in our
Merkle Tree or AES implementation.)
This patch increases the lines-of-code count by 8 (from 17,770 to 17,778), and reduces the
uncovered-by-tests lines-of-code count by 24 (from 1408 to 1384). Those numbers would be more
meaningful if we omitted src/allmydata/util/ from the test-coverage statistics.
to make the internal ones use binary strings (nodeid, storage index) and
the web/JSON ones use base32-encoded strings. The immutable verifier is
still incomplete (it returns imaginary healthy results).
Removed the Checker service, removed checker results storage (both in-memory
and the tiny stub of sqlite-based storage). Added ICheckable; all
check/verify is now done by calling the check() method on filenodes and
dirnodes (immutable files, literal files, mutable files, and directory
instances).
Checker results are returned in a Results instance, with an html() method for
display. Checker results have been temporarily removed from the wui directory
listing until we make some other fixes.
Also fixed client.create_node_from_uri() to create LiteralFileNodes properly,
since they have different checking behavior. Previously we were creating full
FileNodes with LIT uris inside, which were downloadable but not checkable.
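A hedged sketch of how a caller might use the new arrangement (the Deferred, and the
exact check() signature, are guesses based on the description above rather than the
actual ICheckable definition):

    def render_check_results(node):
        # 'node' is any filenode or dirnode providing ICheckable.  check() is
        # assumed to return a Deferred that fires with a Results instance,
        # which can render itself as HTML for display.
        d = node.check()
        d.addCallback(lambda results: results.html())
        return d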
When rendering a directory in webish, if the URL does not include a trailing slash '/',
the browser is now redirected to a URL which does. This causes any relative links (e.g.
the 'other representations' links, which point to files relative to the current
directory) to be correctly interpreted by the browser as relative to the directory in
question. This uses twisted's addSlash mechanism, which is designed for this purpose :-)
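A minimal sketch of how a page class opts in to this behavior, assuming a Nevow
rend.Page (the class name and template are illustrative):

    from nevow import rend, loaders

    class DirectoryPage(rend.Page):
        # addSlash redirects .../DIRCAP to .../DIRCAP/ before rendering, so
        # the browser resolves relative links against the directory itself
        # rather than against its parent.
        addSlash = True
        docFactory = loaders.xmlstr("<html><body>directory listing</body></html>")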
Jake Edge tried Tahoe out and didn't notice the /status page. Hopefully with this new organization people like him will see that link more easily. This also addresses drewp's suggestion that the controls appear above the list of servers instead of below. (I think that was his suggestion.) I also reordered the controls.
not status instances. Fix this. The symptom was that following a link like
'up-123' that referred to an old operation (no longer in memory) while an
upload was active would get an ugly traceback instead of a "no such resource"
message.
Now upload or encode methods take a required argument named "convergence" which can be either None, indicating no convergent encryption at all, or a string, which is the "added secret" to be mixed in to the content hash key. If you want traditional convergent encryption behavior, set the added secret to be the empty string.
This patch also renames "content hash key" to "convergent encryption" in argument names and variable names. (A different and larger renaming is needed in order to clarify that Tahoe supports immutable files which are not encrypted with a content hash key, a.k.a. convergent encryption.)
This patch also changes a few unit tests to use non-convergent encryption, because it doesn't matter for what they are testing and non-convergent encryption is slightly faster.
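The rough idea, sketched with hashlib rather than Tahoe's actual key-derivation code
(the tag, the choice of what gets mixed in, and the function name are all illustrative):

    import hashlib

    def content_hash_key(convergence, plaintext, k, n, segsize):
        # convergence=None would mean: choose a random key, i.e. no
        # convergence at all.  A string (even the empty string) is mixed into
        # the hash of the plaintext, so only uploads that share the same
        # added secret converge on the same key and thus the same shares.
        h = hashlib.sha256()
        h.update("example-convergence-tag:")       # illustrative tag, not Tahoe's
        h.update("%d,%d,%d:" % (k, n, segsize))    # encoding parameters
        h.update(convergence)
        h.update(plaintext)
        return h.digest()[:16]                     # 128-bit AES key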
Using sibpath to find web template files relative to source code is functional
when running from source environments, but not especially flexible when running
from bundled built environments. The more 'orthodox' mechanism, pkg_resources,
in theory at least, knows how to find resource files in various environments.
This makes the 'web' directory in allmydata into an actual allmydata.web module
(since pkg_resources looks for files relative to a named module, and that module
must be importable) and uses pkg_resources.resource_filename to find the files
therein.
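For instance (the template filename below is only an example):

    from pkg_resources import resource_filename

    # Look up a template inside the allmydata.web package, wherever that
    # package actually lives: a source checkout, an installed package, an egg.
    template_path = resource_filename("allmydata.web", "welcome.xhtml")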
* rename my_private_dir.cap to root_dir.cap
* move it into the private subdir
* change the cmdline argument "--root-uri=[private]" to "--dir-uri=[root]"
Unfortunately although it passes the unit tests, it doesn't work, because the unit tests and the implementation use the "encode params into URL" technique but the button uses the "encode params into request body" technique.
* use new decentralized directories everywhere instead of old centralized directories
* provide UI to them through the web server
* provide UI to them through the CLI
* update unit tests to simulate decentralized mutable directories in order to test other components that rely on them
* remove the notion of a "vdrive server" and a client thereof
* remove the notion of a "public vdrive", which was a directory that was centrally published/subscribed automatically by the tahoe node (you can accomplish this manually by making a directory and posting the URL to it on your web site, for example)
* add a notion of "wait_for_numpeers" when you need to publish data to peers, which is how many peers should be attached before you start. The default is 1.
* add __repr__ for filesystem nodes (note: these reprs contain a few bits of the secret key!)
* fix a few bugs where we used to equate "mutable" with "not read-only". Nowadays all directories are mutable, but some might be read-only (to you).
* fix a few bugs where code wasn't aware of the new general-purpose metadata dict that comes with each filesystem edge
* sundry fixes to unit tests to adjust to the new directories, e.g. don't assume that every share on disk belongs to a chk file.
Alongside the 'del' button there is now a 'rename' button, which takes the user to a
new page, the 't=rename-form' page. That page asks the user for the new name of the
child and ultimately submits a POST request to the directory with 't=rename' to perform
the actual rename, i.e. an attach under the new name followed by a delete of the old
one.
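Illustratively, the form's final POST might look like the following from Python (the
field names, the port, and DIRCAP are placeholders and assumptions, not documented
values):

    import urllib, urllib2

    dir_url = "http://127.0.0.1:8123/uri/DIRCAP/"     # URL of the directory
    body = urllib.urlencode({"t": "rename",
                             "from_name": "old-child-name",   # assumed field names
                             "to_name": "new-child-name"})
    urllib2.urlopen(dir_url, body)    # supplying a body makes this a POST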