mirror of
https://github.com/tahoe-lafs/tahoe-lafs.git
synced 2025-04-13 13:53:07 +00:00
storage: remove the storage server's "remote_cancel_lease" function
We're removing this function because it is currently unused and because it is dangerous: the bug described in #1528 leaks the cancellation secret, which allows anyone who knows a file's storage index to abuse this function to delete shares of that file. Fixes #1528 (there are two patches that are each a sufficient fix for #1528, and this is one of them).
This commit is contained in:
parent 65de17245d
commit 5476f67dc1
@@ -18,7 +18,7 @@ making room for other shares. Tahoe currently uses a garbage collection
 one or more "leases", which are managed by clients who want the
 file/directory to be retained. The storage server accepts each share for a
 pre-defined period of time, and is allowed to delete the share if all of the
-leases are cancelled or allowed to expire.
+leases expire.
 
 Garbage collection is not enabled by default: storage servers will not delete
 shares without being explicitly configured to do so. When GC is enabled,
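When GC is enabled as described above, share deletion is driven by the storage server's lease-expiration settings in tahoe.cfg. The option names below are taken from Tahoe-LAFS's garbage-collection documentation of roughly this era and should be checked against the release in use:

```ini
[storage]
# Enable the lease-checking crawler (GC is off by default).
expire.enabled = true
# Expire leases based on their age rather than a fixed cutoff date.
expire.mode = age
# Treat any lease older than this as expired, regardless of its
# original duration.
expire.override_lease_duration = 3 months
```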
@@ -266,18 +266,12 @@ replace the previous set.
 The GC mechanism is also not immediate: a client which deletes a file will
 nevertheless be consuming extra disk space (and might be charged or otherwise
 held accountable for it) until the ex-file's leases finally expire on their
-own. If the client is certain that they've removed their last reference to
-the file, they could accelerate the GC process by cancelling their lease. The
-current storage server API provides a method to cancel a lease, but the
-client must be careful to coordinate with anyone else who might be
-referencing the same lease (perhaps a second directory in the same virtual
-drive), otherwise they might accidentally remove a lease that should have
-been retained.
+own.
 
 In the current release, these leases are each associated with a single "node
 secret" (stored in $BASEDIR/private/secret), which is used to generate
-renewal- and cancel- secrets for each lease. Two nodes with different secrets
-will produce separate leases, and will not be able to renew or cancel each
+renewal-secrets for each lease. Two nodes with different secrets
-will produce separate leases, and will not be able to renew or cancel each
+will produce separate leases, and will not be able to renew each
 others' leases.
 
 Once the Accounting project is in place, leases will be scoped by a
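The per-lease secret derivation described in the hunk above can be sketched as follows. This is an illustrative assumption, not Tahoe's actual construction (Tahoe-LAFS uses its own tagged-hash scheme in allmydata.util.hashutil); the point is only that secrets are derived per storage index from a node-wide secret, so different nodes cannot renew each other's leases:

```python
import hashlib
import hmac

def derive_renewal_secret(node_secret: bytes, storage_index: bytes) -> bytes:
    # Illustrative sketch: derive a per-file renewal secret from the
    # node-wide secret ($BASEDIR/private/secret) and the file's storage
    # index. The HMAC construction and tag string here are assumptions
    # made for this example.
    per_client = hmac.new(node_secret, b"renewal-secret", hashlib.sha256).digest()
    return hmac.new(per_client, storage_index, hashlib.sha256).digest()

# Two nodes with different node secrets derive different renewal secrets
# for the same storage index, so neither can renew the other's leases.
a = derive_renewal_secret(b"node-secret-A", b"storage-index")
b = derive_renewal_secret(b"node-secret-B", b"storage-index")
assert a != b
```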
@@ -148,21 +148,6 @@ class RIStorageServer(RemoteInterface):
         """
         return Any()
 
-    def cancel_lease(storage_index=StorageIndex,
-                     cancel_secret=LeaseCancelSecret):
-        """
-        Cancel the lease on a given bucket. If this was the last lease on the
-        bucket, the bucket will be deleted. If there is no bucket for the
-        given storage_index, IndexError will be raised.
-
-        For mutable shares, if the given cancel_secret does not match an
-        existing lease, IndexError will be raised with a note listing the
-        server-nodeids on the existing leases, so leases on migrated shares
-        can be renewed or cancelled. For immutable shares, IndexError
-        (without the note) will be raised.
-        """
-        return Any()
-
     def get_buckets(storage_index=StorageIndex):
         return DictOf(int, RIBucketReader, maxKeys=MAX_BUCKETS)
 
@@ -345,34 +345,6 @@ class StorageServer(service.MultiService, Referenceable):
         if not found_buckets:
             raise IndexError("no such lease to renew")
 
-    def remote_cancel_lease(self, storage_index, cancel_secret):
-        start = time.time()
-        self.count("cancel")
-
-        total_space_freed = 0
-        found_buckets = False
-        for sf in self._iter_share_files(storage_index):
-            # note: if we can't find a lease on one share, we won't bother
-            # looking in the others. Unless something broke internally
-            # (perhaps we ran out of disk space while adding a lease), the
-            # leases on all shares will be identical.
-            found_buckets = True
-            # this raises IndexError if the lease wasn't present XXXX
-            total_space_freed += sf.cancel_lease(cancel_secret)
-
-        if found_buckets:
-            storagedir = os.path.join(self.sharedir,
-                                      storage_index_to_dir(storage_index))
-            if not os.listdir(storagedir):
-                os.rmdir(storagedir)
-
-        if self.stats_provider:
-            self.stats_provider.count('storage_server.bytes_freed',
-                                      total_space_freed)
-        self.add_latency("cancel", time.time() - start)
-        if not found_buckets:
-            raise IndexError("no such storage index")
-
     def bucket_writer_closed(self, bw, consumed_size):
         if self.stats_provider:
             self.stats_provider.count('storage_server.bytes_added', consumed_size)
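With remote_cancel_lease gone, the only GC path left is the one the docs hunk describes: a client that wants a file collected simply stops renewing its lease and waits for every lease on the share to expire. That policy can be modeled in a few lines; the Share class and collect_garbage function below are hypothetical illustrations, not Tahoe's real share format or crawler:

```python
class Share:
    """Toy model of a stored share carrying lease-expiration timestamps."""
    def __init__(self, storage_index: bytes, lease_expirations: list):
        self.storage_index = storage_index
        self.lease_expirations = list(lease_expirations)  # unix timestamps

def collect_garbage(shares: list, now: float):
    """Expiration-only GC: a share may be deleted only once every one of
    its leases has expired; there is no longer any cancellation path."""
    kept, deleted = [], []
    for share in shares:
        if all(exp <= now for exp in share.lease_expirations):
            deleted.append(share)
        else:
            kept.append(share)
    return kept, deleted

now = 1000
shares = [Share(b"si1", [900, 950]),    # all leases expired
          Share(b"si2", [900, 2000])]   # one live lease remains
kept, deleted = collect_garbage(shares, now)
# si1 is collected; si2 survives on its single live lease.
```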