= Known Issues =

Below is a list of known issues in older releases of Tahoe-LAFS, and how to
manage them. The current version of this file can be found at

https://tahoe-lafs.org/source/tahoe/trunk/docs/historical/historical_known_issues.txt

Issues in newer releases of Tahoe-LAFS can be found at:

https://tahoe-lafs.org/source/tahoe/trunk/docs/known_issues.rst

== issues in Tahoe v1.8.2, released 30-Jan-2011 ==

Unauthorized deletion of an immutable file by its storage index
---------------------------------------------------------------

Due to a flaw in the Tahoe-LAFS storage server software in v1.3.0 through
v1.8.2, a person who knows the "storage index" that identifies an immutable
file can cause the server to delete its shares of that file.

If an attacker can cause enough shares to be deleted from enough storage
servers, this deletes the file.

This vulnerability does not enable anyone to read file contents without
authorization (confidentiality), nor to change the contents of a file
(integrity).

A person could learn the storage index of a file in several ways:

1. By being granted the authority to read the immutable file, i.e. by being
   granted a read capability to the file. They can determine the file's
   storage index from its read capability (as sketched below).

2. By being granted a verify capability to the file. They can determine the
   file's storage index from its verify capability. This case probably
   doesn't happen often because users typically don't share verify caps.

3. By operating a storage server, and receiving a request from a client that
   has a read cap or a verify cap. If the client attempts to upload,
   download, or verify the file with that storage server, even if the server
   doesn't actually have the file, then the operator can learn the storage
   index of the file.

4. By gaining read access to an existing storage server's local filesystem,
   and inspecting the directory structure that it stores its shares in. They
   can thus learn the storage indexes of all files that the server is holding
   at least one share of. Normally only the operator of a storage server can
   inspect its local filesystem, so this requires either being such an
   operator or somehow gaining the ability to inspect the server's local
   filesystem.
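
For concreteness, here is a minimal sketch in Python of how a storage index
can be derived from an immutable read cap. The "URI:CHK:" cap layout matches
what Tahoe-LAFS caps look like, but the tag string, the use of SHA-256d, and
the base32 details below are assumptions about the implementation rather
than a normative restatement of it:

  import hashlib
  from base64 import b32decode, b32encode

  def sha256d(data):
      return hashlib.sha256(hashlib.sha256(data).digest()).digest()

  def storage_index_from_chk_cap(cap):
      # an immutable read cap looks like:
      #   URI:CHK:<key>:<UEB hash>:<needed>:<total>:<size>
      key_b32 = cap.split(":")[2].upper()
      key_b32 += "=" * (-len(key_b32) % 8)  # restore base32 padding
      key = b32decode(key_b32)
      tag = b"allmydata_immutable_key_to_storage_index_v1"  # assumed tag
      netstring = b"%d:%s," % (len(tag), tag)
      si = sha256d(netstring + key)[:16]  # 128-bit storage index
      return b32encode(si).decode().lower().rstrip("=")

A verify cap, by contrast, is generally understood to embed the storage
index directly, which is why case 2 requires no derivation at all.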

*how to manage it*

Tahoe-LAFS v1.8.3 or newer (except v1.9a1) no longer has this flaw; if you
upgrade a storage server to a fixed release, then that server is no longer
vulnerable to this problem.

Note that the issue is local to each storage server, independently of other
storage servers: when you upgrade a storage server, that particular storage
server can no longer be tricked into deleting its shares of the target file.

If you can't immediately upgrade your storage server to a version of
Tahoe-LAFS that eliminates this vulnerability, then you could temporarily
shut down your storage server. This would of course hurt availability
(clients would not be able to upload or download shares to that particular
storage server while it was shut down), but it would protect the shares
already stored on that server from being deleted for as long as the server
is shut down.

If the servers that store shares of your file are running a version of
Tahoe-LAFS with this vulnerability, then you should think about whether
someone can learn the storage indexes of your files by one of the methods
described above. A person cannot exploit this vulnerability unless they have
received a read cap or verify cap, or they control a storage server that has
been queried about this file by a client that has a read cap or a verify cap.

Tahoe-LAFS does not currently have a mechanism to limit which storage servers
can connect to your grid, but it does have a way to see which storage servers
have been connected to the grid. The Introducer's front page in the Web User
Interface has a list of all storage servers that the Introducer has ever seen
and the first time and the most recent time that it saw them. Each Tahoe-LAFS
gateway maintains a similar list on its front page in its Web User Interface,
showing all of the storage servers that it learned about from the Introducer,
when it first connected to that storage server, and when it most recently
connected to that storage server. These lists are stored in memory and are
reset to empty when the process is restarted.
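
Because these lists are kept only in memory, it can be worth snapshotting
them periodically. A minimal sketch, assuming the gateway's Web User
Interface is reachable on the default port 3456 on localhost (both
assumptions about your deployment):

  import time
  import urllib.request

  # save the gateway's front page, which includes the connected-server list
  html = urllib.request.urlopen("http://127.0.0.1:3456/").read()
  with open("welcome-%d.html" % time.time(), "wb") as f:
      f.write(html)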

See ticket `#1528`_ for technical details.

.. _#1528: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1528


== issues in Tahoe v1.1.0, released 2008-06-11 ==

(Tahoe v1.1.0 was superseded by v1.2.0, which was released 2008-07-21.)

=== more than one file can match an immutable file cap ===

In Tahoe v1.0 and v1.1, a flaw in the cryptographic integrity check
makes it possible for the original uploader of an immutable file to
produce more than one immutable file matching the same capability, so
that different downloads using the same capability could result in
different files. This flaw can be exploited only by the original
uploader of an immutable file, which means that it is not a severe
vulnerability: you can still rely on the integrity check to make sure
that the file you download with a given capability is a file that the
original uploader intended. The only issue is that you can't assume
that every time you use the same capability to download a file you'll
get the same file.

==== how to manage it ====

This was fixed in Tahoe v1.2.0, released 2008-07-21, under ticket
#491. Upgrade to that release of Tahoe and then you can rely on the
property that there is only one file that you can download using a
given capability. If you are still using Tahoe v1.0 or v1.1, then
remember that the original uploader could produce multiple files that
match the same capability. For example, if someone gives you a
capability and you use it to download a file, and you give that
capability to a friend who uses it to download a file, you and your
friend could get different files.


=== server out of space when writing mutable file ===

If a v1.0 or v1.1 storage server runs out of disk space or is
otherwise unable to write to its local filesystem, then problems can
ensue. For immutable files, this will not lead to any problem (the
attempt to upload that share to that server will fail, the partially
uploaded share will be deleted from the storage server's "incoming
shares" directory, and the client will move on to using another
storage server instead).

If the write was an attempt to modify an existing mutable file,
however, a problem will result: when the attempt to write the new
share fails (e.g. due to insufficient disk space), then it will be
aborted and the old share will be left in place. If enough such old
shares are left, then a subsequent read may get those old shares and
see the file in its earlier state, which is a "rollback" failure.
With the default parameters (3-of-10), six old shares will be enough
to potentially lead to a rollback failure: any three mutually
consistent shares suffice to reconstruct a version of the file, so a
reader that happens to contact only servers holding stale shares can
retrieve three consistent old shares and reconstruct the earlier
version without ever learning that a newer one exists.

==== how to manage it ====

Make sure your Tahoe storage servers don't run out of disk space.
This means refusing storage requests before the disk fills up. There
are a couple of ways to do that with v1.1.

First, there is a configuration option named "sizelimit" which will
cause the storage server to do a "du"-style recursive examination of
its directories at startup, and then if the sum of the sizes of the
files found therein is greater than the "sizelimit" number, it will
reject requests by clients to write new immutable shares.

However, that can take a long time (something on the order of a minute
of examination of the filesystem for each 10 GB of data stored in the
Tahoe server), and the Tahoe server will be unavailable to clients
during that time.
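
The cost comes from walking every share file on disk at startup. The scan
is equivalent in spirit to the following sketch (the shares-directory path
is an assumption about the server's layout):

  import os

  def du(basedir):
      # sum the sizes of all files under basedir, like "du -s"
      total = 0
      for dirpath, dirnames, filenames in os.walk(basedir):
          for name in filenames:
              total += os.path.getsize(os.path.join(dirpath, name))
      return total

  used = du(os.path.expanduser("~/.tahoe/storage/shares"))
  print("storage in use: %d bytes" % used)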

Another option is to set the "readonly_storage" configuration option
on the storage server before startup. This will cause the storage
server to reject all requests to upload new immutable shares.
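
As a sketch of what enabling these options might look like: storage servers
of this vintage read simple one-value configuration files from the node's
base directory. The option names come from this document, but the node
directory (~/.tahoe here) and the "10G" value format are assumptions; check
your release's documentation before relying on them:

  import os

  basedir = os.path.expanduser("~/.tahoe")  # assumed node directory

  # cap on stored immutable share data; the value format is an assumption
  with open(os.path.join(basedir, "sizelimit"), "w") as f:
      f.write("10G\n")

  # the presence of this file makes the server refuse new immutable shares
  open(os.path.join(basedir, "readonly_storage"), "w").close()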

Note that neither of these configurations affects mutable shares: even
if sizelimit is configured and the storage server currently has more
space used than allowed, or even if readonly_storage is configured,
servers will continue to accept new mutable shares and will continue
to accept requests to overwrite existing mutable shares.

Mutable files are typically used only for directories, and are usually
much smaller than immutable files, so if you use one of these
configurations to stop the influx of immutable files while there is
still sufficient disk space to receive an influx of (much smaller)
mutable files, you may be able to avoid the potential for "rollback"
failure.

A future version of Tahoe will include a fix for this issue. Here is
[https://lists.tahoe-lafs.org/pipermail/tahoe-dev/2008-May/000628.html the
mailing list discussion] about how that future version will work.


=== pyOpenSSL/Twisted defect causes false alarms in tests ===

The combination of Twisted v8.0 or Twisted v8.1 with pyOpenSSL v0.7
causes the Tahoe v1.1 unit tests to fail, even though the behavior of
Tahoe itself, which is what is being tested, is correct.

==== how to manage it ====

If you are using Twisted v8.0 or Twisted v8.1 and pyOpenSSL v0.7, then
please ignore the ERROR "Reactor was unclean" in test_system and
test_introducer. Upgrading to a newer version of Twisted or pyOpenSSL
will make those false alarms stop, as will downgrading to an older
version of either of those packages.

== issues in Tahoe v1.0.0, released 2008-03-25 ==

(Tahoe v1.0 was superseded by v1.1, which was released 2008-06-11.)

=== server out of space when writing mutable file ===

In addition to the problems caused by insufficient disk space
described above, v1.0 clients which are writing mutable files when the
servers fail to write to their filesystem are likely to think the
write succeeded when it in fact failed. This can cause data loss.

==== how to manage it ====

Upgrade the client to v1.1, or make sure that servers are always able
to write to their local filesystem (including that there is space
available) as described in "server out of space when writing mutable
file" above.


=== server out of space when writing immutable file ===

Tahoe v1.0 clients that are using v1.0 servers which are unable to
write to their filesystem during an immutable upload will correctly
detect the first failure, but if they retry the upload without
restarting the client, or if another client attempts to upload the
same file, the second upload may appear to succeed when it hasn't,
which can lead to data loss.

==== how to manage it ====

Upgrading either or both of the client and the server to v1.1 will fix
this issue. The issue can also be avoided by ensuring that the servers
are always able to write to their local filesystem (including that
there is space available) as described in "server out of space when
writing mutable file" above.


=== large directories or mutable files of certain sizes ===

If a client attempts to upload a large mutable file with a size
greater than about 3,139,000 bytes and less than or equal to
3,500,000 bytes, then the upload will fail but appear to succeed,
which can lead to data loss.

(Mutable files larger than 3,500,000 bytes are refused outright.) The
symptom of the failure is very high memory usage (3 GB of memory) and
100% CPU for about 5 minutes, before the upload appears to succeed,
although it hasn't.
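
A minimal local pre-flight check for this range, using the bounds quoted
above:

  DANGER_LOW = 3139000   # approximate lower bound, per the text above
  MAX_MUTABLE = 3500000  # mutable uploads above this are refused outright

  def mutable_write_may_silently_fail(size_in_bytes):
      # True if a v1.0 mutable write of this size falls in the danger range
      return DANGER_LOW < size_in_bytes <= MAX_MUTABLE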

Directories are stored in mutable files, and a directory of
approximately 9000 entries may fall into this range of mutable file
sizes (depending on the size of the filenames or other metadata
associated with the entries).

==== how to manage it ====

This was fixed in v1.1, under ticket #379. If the client is upgraded
to v1.1, then it will fail cleanly instead of falsely appearing to
succeed when it tries to write a file whose size is in this range. If
the server is also upgraded to v1.1, then writes of mutable files
whose size is in this range will succeed. (If the server is upgraded
to v1.1 but the client is still v1.0, then the client will still
suffer this failure.)


=== uploading files greater than 12 GiB ===

If a Tahoe v1.0 client uploads a file greater than 12 GiB in size, the file
will be silently corrupted so that it is not retrievable, but the client will
think that it succeeded. This is a "data loss" failure.

==== how to manage it ====

Don't upload files larger than 12 GiB. If you have previously uploaded files
of that size, assume that they have been corrupted and are not retrievable
from the Tahoe storage grid. Tahoe v1.1 clients will refuse to upload files
larger than 12 GiB with a clean failure. A future release of Tahoe will
remove this limitation so that larger files can be uploaded.
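
A minimal local guard that a v1.0 client operator could use before handing a
file to Tahoe (the path argument is whatever file you intend to upload):

  import os

  TWELVE_GIB = 12 * 2 ** 30  # the 12 GiB limit described above

  def safe_to_upload(path):
      # refuse locally rather than let a v1.0 client silently corrupt it
      return os.path.getsize(path) <= TWELVE_GIB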