tahoe-lafs/src/allmydata/codec.py

# -*- test-case-name: allmydata.test.test_encode_share -*-
from zope.interface import implements
from twisted.internet import defer
from allmydata.util import mathutil
from allmydata.util.assertutil import precondition
from allmydata.interfaces import ICodecEncoder, ICodecDecoder
import zfec
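
# This module wraps the zfec erasure-coding library behind Tahoe's
# ICodecEncoder/ICodecDecoder interfaces: data is encoded into m
# ("max_shares") shares, any k ("required_shares") of which suffice to
# reconstruct it.
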
class CRSEncoder(object):
    implements(ICodecEncoder)
    ENCODER_TYPE = "crs"
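
    # set_params() precomputes the share geometry: data_size bytes are
    # encoded into max_shares shares of share_size bytes each, any
    # required_shares of which suffice to reconstruct the data.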
    def set_params(self, data_size, required_shares, max_shares):
        assert required_shares <= max_shares
        self.data_size = data_size
        self.required_shares = required_shares
        self.max_shares = max_shares
        self.share_size = mathutil.div_ceil(data_size, required_shares)
        self.last_share_padding = mathutil.pad_size(self.share_size,
                                                    required_shares)
        self.encoder = zfec.Encoder(required_shares, max_shares)

    def get_encoder_type(self):
        return self.ENCODER_TYPE

    def get_params(self):
        return (self.data_size, self.required_shares, self.max_shares)

    def get_serialized_params(self):
        return "%d-%d-%d" % (self.data_size, self.required_shares,
                             self.max_shares)

    def get_block_size(self):
        return self.share_size

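    # encode() takes exactly required_shares input shares (each share_size
    # bytes long) and returns a Deferred that fires with (shares,
    # share_ids). Share ids 0..k-1 denote "primary" shares (verbatim copies
    # of the inputs); higher ids denote zfec's redundant "secondary" shares.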
    def encode(self, inshares, desired_share_ids=None):
        precondition(desired_share_ids is None or
                     len(desired_share_ids) <= self.max_shares,
                     desired_share_ids, self.max_shares)

        if desired_share_ids is None:
            desired_share_ids = range(self.max_shares)

        for inshare in inshares:
            assert len(inshare) == self.share_size, (len(inshare),
                                                     self.share_size,
                                                     self.data_size,
                                                     self.required_shares)
        shares = self.encoder.encode(inshares, desired_share_ids)

        return defer.succeed((shares, desired_share_ids))

class CRSDecoder(object):
    implements(ICodecDecoder)

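    # The decoder mirrors the encoder's geometry: share_size works out to
    # div_ceil(data_size, required_shares), the same value CRSEncoder
    # computes, expressed here as num_chunks chunks of chunk_size bytes.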
    def set_params(self, data_size, required_shares, max_shares):
        self.data_size = data_size
        self.required_shares = required_shares
        self.max_shares = max_shares

        self.chunk_size = self.required_shares
        self.num_chunks = mathutil.div_ceil(self.data_size, self.chunk_size)
        self.share_size = self.num_chunks
        self.decoder = zfec.Decoder(self.required_shares, self.max_shares)

    def get_needed_shares(self):
        return self.required_shares

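    # decode() takes exactly required_shares shares together with their
    # share ids and returns a Deferred that fires with the list of primary
    # blocks; the caller joins them (and trims any padding) to recover the
    # original data.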
    def decode(self, some_shares, their_shareids):
        precondition(len(some_shares) == len(their_shareids),
                     len(some_shares), len(their_shareids))
        precondition(len(some_shares) == self.required_shares,
                     len(some_shares), self.required_shares)
        data = self.decoder.decode(some_shares,
                                   [int(s) for s in their_shareids])
        return defer.succeed(data)

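# parse_params() is the inverse of CRSEncoder.get_serialized_params(): it
# turns a "datasize-k-m" string back into a (data_size, required_shares,
# max_shares) tuple.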
def parse_params(serializedparams):
    pieces = serializedparams.split("-")
    return int(pieces[0]), int(pieces[1]), int(pieces[2])
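
if __name__ == "__main__":
    # Illustrative round-trip sketch, not part of the original module: it
    # assumes zfec and Twisted are importable and picks arbitrary 3-of-5
    # parameters. encode() expects the caller to have already split (and
    # zero-padded) the data into required_shares pieces of share_size bytes.
    k, m = 3, 5
    data = "0123456789abcdef" * 3
    enc = CRSEncoder()
    enc.set_params(len(data), k, m)
    padded = data.ljust(enc.share_size * k, '\x00')
    inshares = [padded[i * enc.share_size:(i + 1) * enc.share_size]
                for i in range(k)]
    d = enc.encode(inshares)

    def _decode(res):
        shares, share_ids = res
        # Any k of the m shares will do; drop the first two to force the
        # decoder to use both primary and secondary shares.
        dec = CRSDecoder()
        dec.set_params(len(data), k, m)
        return dec.decode(shares[2:2 + k], share_ids[2:2 + k])
    d.addCallback(_decode)

    def _check(pieces):
        assert "".join(pieces)[:len(data)] == data
        print "round-trip ok"
    d.addCallback(_check)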