Upload and encode methods now take a required argument named "convergence", which can be either None, indicating no convergent encryption at all, or a string, which is the "added secret" to be mixed into the content hash key. If you want traditional convergent encryption behavior, set the added secret to be the empty string.
This patch also renames "content hash key" to "convergent encryption" in argument names and variable names. (A different and larger renaming is needed in order to clarify that Tahoe supports immutable files which are not encrypted with a content hash key, a.k.a. convergent encryption.)
This patch also changes a few unit tests to use non-convergent encryption, because it doesn't matter for what they are testing and non-convergent encryption is slightly faster.
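A minimal sketch of what the convergence argument means for key derivation; the function name, tag string, and exact mixing below are illustrative assumptions, not Tahoe's actual derivation:

  import os
  import hashlib

  def immutable_file_key(file_contents, convergence):
      """Hypothetical sketch of deriving the AES key for an immutable file.

      convergence=None  -> no convergent encryption: a fresh random key
      convergence=""    -> traditional convergent encryption
      convergence="s"   -> converge only with uploaders who share secret "s"
      """
      if convergence is None:
          return os.urandom(16)
      h = hashlib.sha256()
      # Mix the added secret into the content hash, so only parties who know
      # the same secret converge on the same key (and thus same ciphertext).
      h.update(b"convergent-encryption:" + convergence.encode("utf-8") + b":")
      h.update(file_contents)
      return h.digest()[:16]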
while investigating fuse related stuff, I spent quite a while staring at
very cryptic explosions I got from idlib. it turns out that unicode
objects and str objects have .translate() methods with differing signatures.
to save anyone else the headache, this makes it very clear if you accidentally
try to pass a unicode object into a2b() etc.
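A Python 2 sketch of the kind of early type check this adds; the decoder body shown here is a stand-in, not idlib's real base32 implementation:

  import base64

  def a2b(cs):
      # In Python 2, str.translate(table[, deletechars]) and
      # unicode.translate(mapping) have different signatures, so a unicode
      # argument detonates deep inside the decoder with a cryptic error.
      # Check the type up front and complain clearly instead.
      if not isinstance(cs, str):
          raise TypeError("a2b() requires a str, not %s" % (type(cs),))
      padded = cs.upper() + "=" * (-len(cs) % 8)   # illustrative decode only
      return base64.b32decode(padded)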
base62 encoding fits more information into alphanumeric chars while avoiding the troublesome non-alphanumeric chars of base64 encoding. In particular, this allows us to work around the ext3 "32,000 entries in a directory" limit while retaining the convenient property that the intermediate directory names are leading prefixes of the storage index file names.
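A minimal sketch of the idea; the alphabet ordering and helper name below are illustrative, not necessarily those of allmydata.util.base62:

  import os

  ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

  def b2a(data):
      """Encode bytes as base62 by treating them as a big-endian integer."""
      n = int.from_bytes(data, "big")
      chars = []
      while n:
          n, rem = divmod(n, 62)
          chars.append(ALPHABET[rem])
      return "".join(reversed(chars)) or ALPHABET[0]

  # Because the encoding is positional, a short prefix of the encoded storage
  # index can serve as the intermediate directory name:
  si = b2a(os.urandom(16))
  sharedir = os.path.join("shares", si[:2], si)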
this moves some of the code common to both windows and mac builds into the
allmydata module hierarchy, and cleans up the windows and mac build directories
to import the code from there.
use of twisted.python.util.sibpath to find files relative to modules doesn't
work when those modules are bundled into a library by py2exe. this provides
an alternative implementation (in allmydata.util.sibpath) which checks for
the existence of the file, and if it is not found, attempts to find it relative
to sys.executable instead.
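Roughly, the fallback works like this (a sketch of the approach, not the exact code in allmydata.util.sibpath):

  import os, sys
  from twisted.python.util import sibpath as _twisted_sibpath

  def sibpath(path, sibling):
      """Find `sibling` next to the module file at `path`, like Twisted's
      sibpath, but fall back to looking next to sys.executable when the
      module has been bundled into a py2exe library and the computed path
      does not exist on disk."""
      p = _twisted_sibpath(path, sibling)
      if os.path.exists(p):
          return p
      return os.path.join(os.path.dirname(sys.executable), sibling)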
* use new decentralized directories everywhere instead of old centralized directories
* provide UI to them through the web server
* provide UI to them through the CLI
* update unit tests to simulate decentralized mutable directories in order to test other components that rely on them
* remove the notion of a "vdrive server" and a client thereof
* remove the notion of a "public vdrive", which was a directory that was centrally published/subscribed automatically by the tahoe node (you can accomplish this manually by making a directory and posting the URL to it on your web site, for example)
* add a notion of "wait_for_numpeers" when you need to publish data to peers, which is how many peers should be attached before you start. The default is 1. (See the sketch after this list.)
* add __repr__ for filesystem nodes (note: these reprs contain a few bits of the secret key!)
* fix a few bugs where we used to equate "mutable" with "not read-only". Nowadays all directories are mutable, but some might be read-only (to you).
* fix a few bugs where code wasn't aware of the new general-purpose metadata dict that comes with each filesystem edge
* sundry fixes to unit tests to adjust to the new directories, e.g. don't assume that every share on disk belongs to a chk file.
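The intended semantics of wait_for_numpeers, as a purely hypothetical sketch (the helper and its names are illustrative; the real wiring inside the node differs):

  from twisted.internet import defer, reactor

  def when_enough_peers(get_connected_peers, wait_for_numpeers=1, poll_interval=1.0):
      """Fire a Deferred once at least `wait_for_numpeers` peers are
      attached, so that publishing data can begin."""
      d = defer.Deferred()
      def check():
          if len(get_connected_peers()) >= wait_for_numpeers:
              d.callback(None)
          else:
              reactor.callLater(poll_interval, check)
      check()
      return d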
The URI typenames need revision, and only a few dirnode methods are
implemented. Filenodes are non-functional, but URI/key-management is in
place. There are a lot of classes with names like "NewDirectoryNode" that
will need to be renamed once we decide what (if any) backwards compatibility we
want to retain.
It causes a mysterious misbehavior in Python's import machinery, which makes the previous patch fail (the patch to not run trial tests if dependencies can't be imported).
Also include the encoder portion of Bob Ippolito's simplejson-1.7.1 as
allmydata.util.json_encoder . simplejson is distributed under a more liberal
license than Tahoe (looks to be modified BSD), so redistributing it should be ok.
The only SHA-1 hash that remains is used in the permutation of nodeids,
where we need to decide if we care about performance or long-term security.
I suspect that we could use a much weaker (and faster) hash for
this purpose. In the long run, we'll be doing thousands of such hashes
for each file uploaded or downloaded (one per known peer).
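For reference, the permutation in question looks roughly like this (an illustrative sketch; the exact hash inputs and ordering in the real code may differ):

  import hashlib

  def permute_peers(peerids, storage_index):
      """Order the known peers for one file: hash each peerid together with
      the file's storage index and sort by the digest.  This is one hash per
      known peer, per file, which is why the hash's speed matters more than
      its long-term collision resistance here."""
      return sorted(peerids,
                    key=lambda peerid: hashlib.sha1(peerid + storage_index).digest())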
Actually of course iputil can't tell exactly how good they are, and a wise user
of iputil will try all of them. But you can't try all of them simultaneously,
so you might as well try the best ones first.
In various cases, including Merkle Trees, it is useful to tag and encode the inputs to your secure hashes to prevent security flaws due to ambiguous meanings of hash values.
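For example, a tagged hash can length-prefix the tag so that (tag, value) pairs can never be confused with one another. This is a sketch; Tahoe's actual tags and hash function may differ:

  import hashlib

  def tagged_hash(tag, val):
      """Hash `val` under a domain-separation `tag`.  The tag is encoded as a
      netstring ("<len>:<tag>,") so no choice of tag and value can collide
      with a differently split pair."""
      netstring_tag = b"%d:%s," % (len(tag), tag)
      return hashlib.sha256(netstring_tag + val).digest()

  # e.g. keep Merkle-tree leaf hashes distinct from internal-node hashes:
  leaf = tagged_hash(b"merkle-leaf", b"segment data")
  node = tagged_hash(b"merkle-node", leaf + leaf)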
Added metadata to the bucket store, which is used to hold the share number
(but the bucket doesn't know that, it just gets a string).
Modified the codec interfaces a bit.
Try to pass around URIs to/from download/upload instead of verifierids.
URI format is still in flux.
Change the current (primitive) file encoder to use a ReplicatingEncoder
because it provides ICodecEncoder. We will be moving to the (less primitive)
file encoder (currently in allmydata.encode_new) eventually, but for now
this change lets us test out PyRS or zooko's upcoming C-based RS codec in
something larger than a single unit test. This primitive file encoder only
uses a single segment, and has no merkle trees.
Also added allmydata.util.deferredutil, which provides a DeferredList wrapper
that waits until all component Deferreds have fired and then errbacks if any
of them failed, a behavior that is unfortunately not available from the
standard DeferredList.
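The behavior is roughly this (an illustrative sketch rather than the actual deferredutil code):

  from twisted.internet import defer

  def gather_results_strictly(deferreds):
      """Wait for every Deferred to fire; then errback with the first
      recorded Failure if any of them failed, otherwise callback with the
      list of results."""
      d = defer.DeferredList(deferreds, consumeErrors=True)
      def _unpack(results):
          for success, value in results:
              if not success:
                  return value   # a Failure; returning it errbacks the chain
          return [value for (_, value) in results]
      d.addCallback(_unpack)
      return d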