mirror of https://github.com/tahoe-lafs/tahoe-lafs.git
Merge remote-tracking branch 'origin/master' into 3784-istorageserver-test-suite
commit b7ae9a675f
@@ -271,6 +271,11 @@ jobs:
# in the project source checkout.
path: "/tmp/project/_trial_temp/test.log"

- store_artifacts: &STORE_ELIOT_LOG
# Despite passing --workdir /tmp to tox above, it still runs trial
# in the project source checkout.
path: "/tmp/project/eliot.log"

- store_artifacts: &STORE_OTHER_ARTIFACTS
# Store any other artifacts, too. This is handy to allow other jobs
# sharing most of the definition of this one to be able to

@@ -413,6 +418,7 @@ jobs:
- run: *RUN_TESTS
- store_test_results: *STORE_TEST_RESULTS
- store_artifacts: *STORE_TEST_LOG
- store_artifacts: *STORE_ELIOT_LOG
- store_artifacts: *STORE_OTHER_ARTIFACTS
- run: *SUBMIT_COVERAGE
13  .github/workflows/ci.yml  vendored
@@ -76,13 +76,18 @@ jobs:
- name: Run tox for corresponding Python version
run: python -m tox

- name: Upload eliot.log in case of failure
- name: Upload eliot.log
uses: actions/upload-artifact@v1
if: failure()
with:
name: eliot.log
path: eliot.log

- name: Upload trial log
uses: actions/upload-artifact@v1
with:
name: test.log
path: _trial_temp/test.log

# Upload this job's coverage data to Coveralls. While there is a GitHub
# Action for this, as of Jan 2021 it does not support Python coverage
# files - only lcov files. Therefore, we use coveralls-python, the

@@ -136,7 +141,7 @@ jobs:
# See notes about parallel builds on GitHub Actions at
# https://coveralls-python.readthedocs.io/en/latest/usage/configuration.html
finish-coverage-report:
needs:
needs:
- "coverage"
runs-on: "ubuntu-latest"
container: "python:3-slim"

@@ -173,7 +178,7 @@ jobs:
- name: Install Tor [Ubuntu]
if: matrix.os == 'ubuntu-latest'
run: sudo apt install tor

# TODO: See https://tahoe-lafs.org/trac/tahoe-lafs/ticket/3744.
# We have to use an older version of Tor for running integration
# tests on macOS.
2  NEWS.rst
@@ -1188,7 +1188,7 @@ Precautions when Upgrading
.. _`#1915`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1915
.. _`#1926`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1926
.. _`message to the tahoe-dev mailing list`:
https://tahoe-lafs.org/pipermail/tahoe-dev/2013-March/008096.html
https://lists.tahoe-lafs.org/pipermail/tahoe-dev/2013-March/008079.html

Release 1.9.2 (2012-07-03)
18  README.rst
@@ -1,5 +1,5 @@
======================================
Free and Open decentralized data store
Free and Open Decentralized Data Store
======================================

|image0|

@@ -48,13 +48,17 @@ Please read more about Tahoe-LAFS architecture `here <docs/architecture.rst>`__.
✅ Installation
---------------

For more detailed instructions, read `docs/INSTALL.rst <docs/INSTALL.rst>`__ .
For more detailed instructions, read `Installing Tahoe-LAFS <docs/Installation/install-tahoe.rst>`__.

- `Building Tahoe-LAFS on Windows <docs/windows.rst>`__

- `OS-X Packaging <docs/OS-X.rst>`__
Once ``tahoe --version`` works, see `How to Run Tahoe-LAFS <docs/running.rst>`__ to learn how to set up your first Tahoe-LAFS node.

Once tahoe --version works, see `docs/running.rst <docs/running.rst>`__ to learn how to set up your first Tahoe-LAFS node.
🐍 Python 3 Support
--------------------

Python 3 support has been introduced starting with Tahoe-LAFS 1.16.0, alongside Python 2.
System administrators are advised to start running Tahoe on Python 3 and should expect Python 2 support to be dropped in a future version.
Please feel free to file issues if you run into bugs while running Tahoe on Python 3.

🤖 Issues

@@ -76,7 +80,7 @@ Get involved with the Tahoe-LAFS community:

- Join our `weekly conference calls <https://www.tahoe-lafs.org/trac/tahoe-lafs/wiki/WeeklyMeeting>`__ with core developers and interested community members.

- Subscribe to `the tahoe-dev mailing list <https://www.tahoe-lafs.org/cgi-bin/mailman/listinfo/tahoe-dev>`__, the community forum for discussion of Tahoe-LAFS design, implementation, and usage.
- Subscribe to `the tahoe-dev mailing list <https://lists.tahoe-lafs.org/mailman/listinfo/tahoe-dev>`__, the community forum for discussion of Tahoe-LAFS design, implementation, and usage.

🤗 Contributing
---------------

@@ -98,6 +102,8 @@ Before authoring or reviewing a patch, please familiarize yourself with the `Cod

We would like to thank `Fosshost <https://fosshost.org>`__ for supporting us with hosting services. If your open source project needs help, you can apply for their support.

We are grateful to `Oregon State University Open Source Lab <https://osuosl.org/>`__ for hosting the tahoe-dev mailing list.

❓ FAQ
------
@@ -65,4 +65,4 @@ If you are working on MacOS or a Linux distribution which does not have Tahoe-LA

If you are looking to hack on the source code or run pre-release code, we recommend you install Tahoe-LAFS on a `virtualenv` instance. To learn more, see :doc:`install-on-linux`.

You can always write to the `tahoe-dev mailing list <https://tahoe-lafs.org/cgi-bin/mailman/listinfo/tahoe-dev>`_ or chat on the `Libera.chat IRC <irc://irc.libera.chat/%23tahoe-lafs>`_ if you are not able to get Tahoe-LAFS up and running on your deployment.
You can always write to the `tahoe-dev mailing list <https://lists.tahoe-lafs.org/mailman/listinfo/tahoe-dev>`_ or chat on the `Libera.chat IRC <irc://irc.libera.chat/%23tahoe-lafs>`_ if you are not able to get Tahoe-LAFS up and running on your deployment.
@@ -177,7 +177,7 @@ mutable files, you may be able to avoid the potential for "rollback"
failure.

A future version of Tahoe will include a fix for this issue. Here is
[https://tahoe-lafs.org/pipermail/tahoe-dev/2008-May/000630.html the
[https://lists.tahoe-lafs.org/pipermail/tahoe-dev/2008-May/000628.html the
mailing list discussion] about how that future version will work.
@@ -268,7 +268,7 @@ For known security issues see
.PP
Tahoe-LAFS home page: <https://tahoe-lafs.org/>
.PP
tahoe-dev mailing list: <https://tahoe-lafs.org/cgi-bin/mailman/listinfo/tahoe-dev>
tahoe-dev mailing list: <https://lists.tahoe-lafs.org/mailman/listinfo/tahoe-dev>
.SH COPYRIGHT
.PP
Copyright \@ 2006\[en]2013 The Tahoe-LAFS Software Foundation
@@ -24,11 +24,21 @@ Glossary
storage server
   a Tahoe-LAFS process configured to offer storage and reachable over the network for store and retrieve operations

storage service
   a Python object held in memory in the storage server which provides the implementation of the storage protocol

introducer
   a Tahoe-LAFS process at a known location configured to re-publish announcements about the location of storage servers

fURL
   a self-authenticating URL-like string which can be used to locate a remote object using the Foolscap protocol
   (the storage service is an example of such an object)

NURL
   a self-authenticating URL-like string almost exactly like a fURL but without being tied to Foolscap

swissnum
   a short random string which is part of a fURL and which acts as a shared secret to authorize clients to use a storage service

lease
   state associated with a share informing a storage server of the duration of storage desired by a client

@@ -45,7 +55,7 @@ Glossary
   (sometimes "slot" is considered a synonym for "storage index of a slot")

storage index
   a short string which can address a slot or a bucket
   a 16 byte string which can address a slot or a bucket
   (in practice, derived by hashing the encryption key associated with contents of that slot or bucket)

write enabler

@@ -128,6 +138,8 @@ The Foolscap-based protocol offers:
* A careful configuration of the TLS connection parameters *may* also offer **forward secrecy**.
  However, Tahoe-LAFS' use of Foolscap takes no steps to ensure this is the case.

* **Storage authorization** by way of a capability contained in the fURL addressing a storage service.

Discussion
!!!!!!!!!!

@@ -158,6 +170,10 @@ there is no way to write data which appears legitimate to a legitimate client).
Therefore, **message confidentiality** is necessary when exchanging these secrets.
**Forward secrecy** is preferred so that an attacker recording an exchange today cannot launch this attack at some future point after compromising the necessary keys.

A storage service offers service only to some clients.
A client proves their authorization to use the storage service by presenting a shared secret taken from the fURL.
In this way **storage authorization** is performed to prevent disallowed parties from consuming any storage resources.

Functionality
-------------

@@ -214,6 +230,10 @@ Additionally,
by continuing to interact using TLS,
Bob's client and Alice's storage node are assured of both **message authentication** and **message confidentiality**.

Bob's client further inspects the fURL for the *swissnum*.
When Bob's client issues HTTP requests to Alice's storage node it includes the *swissnum* in its requests.
**Storage authorization** has been achieved.

.. note::

   Foolscap TubIDs are 20 bytes (SHA1 digest of the certificate).

@@ -343,6 +363,12 @@ one branch contains all of the share data;
another branch contains all of the lease data;
etc.

Authorization is required for all endpoints.
The standard HTTP authorization protocol is used.
The authentication *type* used is ``Tahoe-LAFS``.
The swissnum from the NURL used to locate the storage service is used as the *credentials*.
If credentials are not presented or the swissnum is not associated with a storage service then no storage processing is performed and the request receives an ``UNAUTHORIZED`` response.
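For illustration only (this request sketch is not part of the specification text above; the host name is a placeholder and the exact encoding of the credential string is assumed here to be the swissnum carried verbatim), an authorized request might look like::

   GET /v1/immutable/:storage_index/:share_number HTTP/1.1
   Host: storage.example
   Authorization: Tahoe-LAFS <swissnum from the NURL>

A request that omits this header, or presents a swissnum not associated with a storage service, receives the ``UNAUTHORIZED`` response described above.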
General
~~~~~~~

@@ -380,6 +406,10 @@ then the expiration time of that lease will be changed to 31 days after the time
If it does not match an existing lease
then a new lease will be created with this ``renew-secret`` which expires 31 days after the time of this operation.

``renew-secret`` and ``cancel-secret`` values must be 32 bytes long.
The server treats them as opaque values.
:ref:`Share Leases` gives details about how the Tahoe-LAFS storage client constructs these values.

In these cases the response is ``NO CONTENT`` with an empty body.

It is possible that the storage server will have no shares for the given ``storage_index`` because:

@@ -509,31 +539,20 @@ For example::

   [1, 5]

``GET /v1/immutable/:storage_index?share=:s0&share=:sN&offset=o1&size=z0&offset=oN&size=zN``
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
``GET /v1/immutable/:storage_index/:share_number``
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Read data from the indicated immutable shares.
If ``share`` query parameters are given, select only those shares for reading.
Otherwise, select all shares present.
If ``size`` and ``offset`` query parameters are given,
only the portions thus identified of the selected shares are returned.
Otherwise, all data from the selected shares is returned.

The response body contains a mapping giving the read data.
For example::

   {
       3: ["foo", "bar"],
       7: ["baz", "quux"]
   }
Read a contiguous sequence of bytes from one share in one bucket.
The response body is the raw share data (i.e., ``application/octet-stream``).
The ``Range`` header may be used to request exactly one ``bytes`` range.
Interpretation and response behavior is as specified in RFC 7233 § 4.1.
Multiple ranges in a single request are *not* supported.
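For illustration only (the endpoint placeholders are those used above; the byte range and the credential encoding are assumed for the example), a client wanting 32 bytes starting at offset 64 of one share might send::

   GET /v1/immutable/:storage_index/:share_number HTTP/1.1
   Authorization: Tahoe-LAFS <swissnum from the NURL>
   Range: bytes=64-95

Per RFC 7233 § 4.1, a satisfiable request of this form is answered with ``206 Partial Content`` and a body containing only the requested bytes of the share data.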
Discussion
``````````

Offset and size of the requested data are specified here as query arguments.
Instead, this information could be present in a ``Range`` header in the request.
This is the more obvious choice and leverages an HTTP feature built for exactly this use-case.
However, HTTP requires that the ``Content-Type`` of the response to "range requests" be ``multipart/...``.
Multiple ``bytes`` ranges are not supported.
HTTP requires that the ``Content-Type`` of the response in that case be ``multipart/...``.
The ``multipart`` major type brings along string sentinel delimiting as a means to frame the different response parts.
There are many drawbacks to this framing technique:

@@ -541,6 +560,15 @@ There are many drawbacks to this framing technique:
2. It is resource-intensive to parse.
3. It is complex to parse safely [#]_ [#]_ [#]_ [#]_.

A previous revision of this specification allowed requesting one or more contiguous sequences from one or more shares.
This *superficially* mirrored the Foolscap based interface somewhat closely.
The interface was simplified to this version because this version is all that is required to let clients retrieve any desired information.
It only requires that the client issue multiple requests.
This can be done with pipelining or parallel requests to avoid an additional latency penalty.
In the future,
if there are performance goals,
benchmarks can demonstrate whether they are achieved by a more complicated interface or some other change.

Mutable
-------
@@ -178,8 +178,8 @@ Announcing the Release Candidate
````````````````````````````````

The release-candidate should be announced by posting to the
mailing-list (tahoe-dev@tahoe-lafs.org). For example:
https://tahoe-lafs.org/pipermail/tahoe-dev/2020-October/009995.html
mailing-list (tahoe-dev@lists.tahoe-lafs.org). For example:
https://lists.tahoe-lafs.org/pipermail/tahoe-dev/2020-October/009978.html

Is The Release Done Yet?

@@ -238,7 +238,7 @@ You can chat with other users of and hackers of this software on the
#tahoe-lafs IRC channel at ``irc.libera.chat``, or on the `tahoe-dev mailing
list`_.

.. _tahoe-dev mailing list: https://tahoe-lafs.org/cgi-bin/mailman/listinfo/tahoe-dev
.. _tahoe-dev mailing list: https://lists.tahoe-lafs.org/mailman/listinfo/tahoe-dev

Complain
87  docs/specifications/derive_renewal_secret.py  Normal file
@@ -0,0 +1,87 @@

"""
This is a reference implementation of the lease renewal secret derivation
protocol in use by Tahoe-LAFS clients as of 1.16.0.
"""

from allmydata.util.base32 import (
    a2b as b32decode,
    b2a as b32encode,
)
from allmydata.util.hashutil import (
    tagged_hash,
    tagged_pair_hash,
)


def derive_renewal_secret(lease_secret: bytes, storage_index: bytes, tubid: bytes) -> bytes:
    assert len(lease_secret) == 32
    assert len(storage_index) == 16
    assert len(tubid) == 20

    bucket_renewal_tag = b"allmydata_bucket_renewal_secret_v1"
    file_renewal_tag = b"allmydata_file_renewal_secret_v1"
    client_renewal_tag = b"allmydata_client_renewal_secret_v1"

    client_renewal_secret = tagged_hash(lease_secret, client_renewal_tag)
    file_renewal_secret = tagged_pair_hash(
        file_renewal_tag,
        client_renewal_secret,
        storage_index,
    )
    peer_id = tubid

    return tagged_pair_hash(bucket_renewal_tag, file_renewal_secret, peer_id)

def demo():
    secret = b32encode(derive_renewal_secret(
        b"lease secretxxxxxxxxxxxxxxxxxxxx",
        b"storage indexxxx",
        b"tub idxxxxxxxxxxxxxx",
    )).decode("ascii")
    print("An example renewal secret: {}".format(secret))

def test():
    # These test vectors created by instrumenting Tahoe-LAFS
    # bb57fcfb50d4e01bbc4de2e23dbbf7a60c004031 to emit `self.renew_secret` in
    # allmydata.immutable.upload.ServerTracker.query and then uploading a
    # couple files to a couple different storage servers.
    test_vector = [
        dict(lease_secret=b"boity2cdh7jvl3ltaeebuiobbspjmbuopnwbde2yeh4k6x7jioga",
             storage_index=b"vrttmwlicrzbt7gh5qsooogr7u",
             tubid=b"v67jiisoty6ooyxlql5fuucitqiok2ic",
             expected=b"osd6wmc5vz4g3ukg64sitmzlfiaaordutrez7oxdp5kkze7zp5zq",
        ),
        dict(lease_secret=b"boity2cdh7jvl3ltaeebuiobbspjmbuopnwbde2yeh4k6x7jioga",
             storage_index=b"75gmmfts772ww4beiewc234o5e",
             tubid=b"v67jiisoty6ooyxlql5fuucitqiok2ic",
             expected=b"35itmusj7qm2pfimh62snbyxp3imreofhx4djr7i2fweta75szda",
        ),
        dict(lease_secret=b"boity2cdh7jvl3ltaeebuiobbspjmbuopnwbde2yeh4k6x7jioga",
             storage_index=b"75gmmfts772ww4beiewc234o5e",
             tubid=b"lh5fhobkjrmkqjmkxhy3yaonoociggpz",
             expected=b"srrlruge47ws3lm53vgdxprgqb6bz7cdblnuovdgtfkqrygrjm4q",
        ),
        dict(lease_secret=b"vacviff4xfqxsbp64tdr3frg3xnkcsuwt5jpyat2qxcm44bwu75a",
             storage_index=b"75gmmfts772ww4beiewc234o5e",
             tubid=b"lh5fhobkjrmkqjmkxhy3yaonoociggpz",
             expected=b"b4jledjiqjqekbm2erekzqumqzblegxi23i5ojva7g7xmqqnl5pq",
        ),
    ]

    for n, item in enumerate(test_vector):
        derived = b32encode(derive_renewal_secret(
            b32decode(item["lease_secret"]),
            b32decode(item["storage_index"]),
            b32decode(item["tubid"]),
        ))
        assert derived == item["expected"], \
            "Test vector {} failed: {} (expected) != {} (derived)".format(
                n,
                item["expected"],
                derived,
            )
    print("{} test vectors validated".format(len(test_vector)))

test()
demo()
@@ -14,5 +14,6 @@ the data formats used by Tahoe.
URI-extension
mutable
dirnodes
lease
servers-of-happiness
backends/raic
69  docs/specifications/lease.rst  Normal file
@@ -0,0 +1,69 @@
.. -*- coding: utf-8 -*-

.. _share leases:

Share Leases
============

A lease is a marker attached to a share indicating that some client has asked for that share to be retained for some amount of time.
The intent is to allow clients and servers to collaborate to determine which data should still be retained and which can be discarded to reclaim storage space.
Zero or more leases may be attached to any particular share.

Renewal Secrets
---------------

Each lease is uniquely identified by its **renewal secret**.
This is a 32 byte string which can be used to extend the validity period of that lease.

To a storage server a renewal secret is an opaque value which is only ever compared to other renewal secrets to determine equality.

Storage clients will typically want to follow a scheme to deterministically derive the renewal secret for a particular share from information the client already holds about that share.
This allows a client to maintain and renew a single long-lived lease without maintaining additional local state.

The scheme in use in Tahoe-LAFS as of 1.16.0 is as follows.

* The **netstring encoding** of a byte string is the concatenation of:

  * the ascii encoding of the base 10 representation of the length of the string
  * ``":"``
  * the string itself
  * ``","``

* The **sha256d digest** is the **sha256 digest** of the **sha256 digest** of a string.
* The **sha256d tagged digest** is the **sha256d digest** of the concatenation of the **netstring encoding** of one string with one other unmodified string.
* The **sha256d tagged pair digest** is the **sha256d digest** of the concatenation of the **netstring encodings** of each of three strings.
* The **bucket renewal tag** is ``"allmydata_bucket_renewal_secret_v1"``.
* The **file renewal tag** is ``"allmydata_file_renewal_secret_v1"``.
* The **client renewal tag** is ``"allmydata_client_renewal_secret_v1"``.
* The **lease secret** is a 32 byte string, typically randomly generated once and then persisted for all future uses.
* The **client renewal secret** is the **sha256d tagged digest** of (**lease secret**, **client renewal tag**).
* The **storage index** is constructed using a capability-type-specific scheme.
  See ``storage_index_hash`` and ``ssk_storage_index_hash`` calls in ``src/allmydata/uri.py``.
* The **file renewal secret** is the **sha256d tagged pair digest** of (**file renewal tag**, **client renewal secret**, **storage index**).
* The **base32 encoding** is ``base64.b32encode`` lowercased and with trailing ``=`` stripped.
* The **peer id** is the **base32 encoding** of the SHA1 digest of the server's x509 certificate.
* The **renewal secret** is the **sha256d tagged pair digest** of (**bucket renewal tag**, **file renewal secret**, **peer id**).
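Putting the definitions above together, the derivation can be sketched with only the standard library. This is an illustrative sketch, not the code Tahoe-LAFS runs: the helper names are invented here, and the exact byte form of each input (in particular the peer id) should be taken from the reference implementation below::

   from hashlib import sha256

   def netstring(s: bytes) -> bytes:
       # ASCII base-10 length, ":", the string itself, ","
       return b"%d:%s," % (len(s), s)

   def sha256d(s: bytes) -> bytes:
       # sha256 of the sha256 of the input
       return sha256(sha256(s).digest()).digest()

   def tagged_digest(a: bytes, b: bytes) -> bytes:
       # sha256d of (netstring-encoded first string) + (second string unmodified)
       return sha256d(netstring(a) + b)

   def tagged_pair_digest(a: bytes, b: bytes, c: bytes) -> bytes:
       # sha256d of the concatenation of the netstring encodings of all three strings
       return sha256d(netstring(a) + netstring(b) + netstring(c))

   def renewal_secret(lease_secret: bytes, storage_index: bytes, peer_id: bytes) -> bytes:
       client_renewal_secret = tagged_digest(
           lease_secret, b"allmydata_client_renewal_secret_v1")
       file_renewal_secret = tagged_pair_digest(
           b"allmydata_file_renewal_secret_v1",
           client_renewal_secret,
           storage_index,
       )
       return tagged_pair_digest(
           b"allmydata_bucket_renewal_secret_v1",
           file_renewal_secret,
           peer_id,
       )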
A reference implementation is available.

.. literalinclude:: derive_renewal_secret.py
   :language: python
   :linenos:

Cancel Secrets
--------------

Lease cancellation is unimplemented.
Nevertheless,
a cancel secret is sent by storage clients to storage servers and stored in lease records.

The scheme for deriving the **cancel secret** in use in Tahoe-LAFS as of 1.16.0 is similar to that used to derive the **renewal secret**.

The differences are:

* Use of **client renewal tag** is replaced by use of **client cancel tag**.
* Use of **file renewal tag** is replaced by use of **file cancel tag**.
* Use of **bucket renewal tag** is replaced by use of **bucket cancel tag**.
* **client cancel tag** is ``"allmydata_client_cancel_secret_v1"``.
* **file cancel tag** is ``"allmydata_file_cancel_secret_v1"``.
* **bucket cancel tag** is ``"allmydata_bucket_cancel_secret_v1"``.
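By analogy, and reusing the hypothetical helpers from the sketch above, the cancel secret derivation would then look like the following (an illustrative sketch only, assuming the substitutions listed here are the only changes)::

   def cancel_secret(lease_secret: bytes, storage_index: bytes, peer_id: bytes) -> bytes:
       client_cancel_secret = tagged_digest(
           lease_secret, b"allmydata_client_cancel_secret_v1")
       file_cancel_secret = tagged_pair_digest(
           b"allmydata_file_cancel_secret_v1",
           client_cancel_secret,
           storage_index,
       )
       return tagged_pair_digest(
           b"allmydata_bucket_cancel_secret_v1",
           file_cancel_secret,
           peer_id,
       )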
@@ -16,7 +16,7 @@ if git diff-index --quiet HEAD; then
fi

git config user.name 'Build Automation'
git config user.email 'tahoe-dev@tahoe-lafs.org'
git config user.email 'tahoe-dev@lists.tahoe-lafs.org'

git add tahoe-deps.json tahoe-ported.json
git commit -m "\
1  newsfragments/3749.documentation  Normal file
@@ -0,0 +1 @@
Documentation and installation links in the README have been fixed.

1  newsfragments/3774.documentation  Normal file
@@ -0,0 +1 @@
There is now a specification for the scheme which Tahoe-LAFS storage clients use to derive their lease renewal secrets.

1  newsfragments/3777.documentation  Normal file
@@ -0,0 +1 @@
The Great Black Swamp proposed specification now has a simplified interface for reading data from immutable shares.

1  newsfragments/3779.bugfix  Normal file
@@ -0,0 +1 @@
Fixed bug where share corruption events were not logged on storage servers running on Windows.

0  newsfragments/3781.minor  Normal file

1  newsfragments/3782.documentation  Normal file
@@ -0,0 +1 @@
tahoe-dev mailing list is now at tahoe-dev@lists.tahoe-lafs.org.

1  newsfragments/3785.documentation  Normal file
@@ -0,0 +1 @@
The Great Black Swamp specification now describes the required authorization scheme.

0  newsfragments/3792.minor  Normal file
@@ -155,7 +155,7 @@ Planet Earth
[4] https://github.com/tahoe-lafs/tahoe-lafs/blob/tahoe-lafs-1.15.1/COPYING.GPL
[5] https://github.com/tahoe-lafs/tahoe-lafs/blob/tahoe-lafs-1.15.1/COPYING.TGPPL.rst
[6] https://tahoe-lafs.readthedocs.org/en/tahoe-lafs-1.15.1/INSTALL.html
[7] https://tahoe-lafs.org/cgi-bin/mailman/listinfo/tahoe-dev
[7] https://lists.tahoe-lafs.org/mailman/listinfo/tahoe-dev
[8] https://tahoe-lafs.org/trac/tahoe-lafs/roadmap
[9] https://github.com/tahoe-lafs/tahoe-lafs/blob/master/CREDITS
[10] https://tahoe-lafs.org/trac/tahoe-lafs/wiki/Dev
2  setup.py
@@ -359,7 +359,7 @@ setup(name="tahoe-lafs", # also set in __init__.py
description='secure, decentralized, fault-tolerant file store',
long_description=open('README.rst', 'r', encoding='utf-8').read(),
author='the Tahoe-LAFS project',
author_email='tahoe-dev@tahoe-lafs.org',
author_email='tahoe-dev@lists.tahoe-lafs.org',
url='https://tahoe-lafs.org/',
license='GNU GPL', # see README.rst -- there is an alternative licence
cmdclass={"update_version": UpdateVersion,
@@ -475,7 +475,9 @@ class Share(object):
# there was corruption somewhere in the given range
reason = "corruption in share[%d-%d): %s" % (start, start+offset,
str(f.value))
self._rref.callRemoteOnly("advise_corrupt_share", reason.encode("utf-8"))
self._rref.callRemote(
"advise_corrupt_share", reason.encode("utf-8")
).addErrback(log.err, "Error from remote call to advise_corrupt_share")

def _satisfy_block_hash_tree(self, needed_hashes):
o_bh = self.actual_offsets["block_hashes"]
@@ -15,7 +15,7 @@ from zope.interface import implementer
from twisted.internet import defer
from allmydata.interfaces import IStorageBucketWriter, IStorageBucketReader, \
FileTooLargeError, HASH_SIZE
from allmydata.util import mathutil, observer, pipeline
from allmydata.util import mathutil, observer, pipeline, log
from allmydata.util.assertutil import precondition
from allmydata.storage.server import si_b2a

@@ -254,8 +254,7 @@ class WriteBucketProxy(object):
return d

def abort(self):
return self._rref.callRemoteOnly("abort")

self._rref.callRemote("abort").addErrback(log.err, "Error from remote call to abort an immutable write bucket")

def get_servername(self):
return self._server.get_name()
@@ -607,13 +607,14 @@ class ServermapUpdater(object):
return d

def _do_read(self, server, storage_index, shnums, readv):
"""
If self._add_lease is true, a lease is added, and the result only fires
once the lease has also been added.
"""
ss = server.get_storage_server()
if self._add_lease:
# send an add-lease message in parallel. The results are handled
# separately. This is sent before the slot_readv() so that we can
# be sure the add_lease is retired by the time slot_readv comes
# back (this relies upon our knowledge that the server code for
# add_lease is synchronous).
# separately.
renew_secret = self._node.get_renewal_secret(server)
cancel_secret = self._node.get_cancel_secret(server)
d2 = ss.add_lease(

@@ -623,7 +624,16 @@ class ServermapUpdater(object):
)
# we ignore success
d2.addErrback(self._add_lease_failed, server, storage_index)
else:
d2 = defer.succeed(None)
d = ss.slot_readv(storage_index, shnums, readv)

def passthrough(result):
# Wait for d2, but fire with result of slot_readv() regardless of
# result of d2.
return d2.addBoth(lambda _: result)

d.addCallback(passthrough)
return d
@@ -236,8 +236,6 @@ def _maybe_enable_eliot_logging(options, reactor):
# Pass on the options so we can dispatch the subcommand.
return options

PYTHON_3_WARNING = ("Support for Python 3 is an incomplete work-in-progress."
" Use at your own risk.")

def run(configFactory=Options, argv=sys.argv, stdout=sys.stdout, stderr=sys.stderr):
"""

@@ -253,8 +251,6 @@ def run(configFactory=Options, argv=sys.argv, stdout=sys.stdout, stderr=sys.stde

:raise SystemExit: Always raised after the run is complete.
"""
if six.PY3:
print(PYTHON_3_WARNING, file=stderr)
if sys.platform == "win32":
from allmydata.windows.fixups import initialize
initialize()
@@ -708,8 +708,10 @@ class StorageServer(service.MultiService, Referenceable):
now = time_format.iso_utc(sep="T")
si_s = si_b2a(storage_index)
# windows can't handle colons in the filename
fn = os.path.join(self.corruption_advisory_dir,
"%s--%s-%d" % (now, str(si_s, "utf-8"), shnum)).replace(":","")
fn = os.path.join(
self.corruption_advisory_dir,
("%s--%s-%d" % (now, str(si_s, "utf-8"), shnum)).replace(":","")
)
with open(fn, "w") as f:
f.write("report: Share Corruption\n")
f.write("type: %s\n" % bytes_to_native_str(share_type))
@@ -1009,10 +1009,10 @@ class _StorageServer(object):
shnum,
reason,
):
return self._rref.callRemoteOnly(
self._rref.callRemote(
"advise_corrupt_share",
share_type,
storage_index,
shnum,
reason,
)
).addErrback(log.err, "Error from remote call to advise_corrupt_share")
@@ -96,8 +96,14 @@ class FakeStorage(object):
shares[shnum] = f.getvalue()


# This doesn't actually implement the whole interface, but adding a commented
# interface implementation annotation for grepping purposes.
#@implementer(RIStorageServer)
class FakeStorageServer(object):

"""
A fake Foolscap remote object, implemented by overriding callRemote() to
call local methods.
"""
def __init__(self, peerid, storage):
self.peerid = peerid
self.storage = storage
@@ -38,7 +38,6 @@ from allmydata.monitor import Monitor
from allmydata.mutable.common import NotWriteableError
from allmydata.mutable import layout as mutable_layout
from allmydata.mutable.publish import MutableData
from allmydata.scripts.runner import PYTHON_3_WARNING

from foolscap.api import DeadReferenceError, fireEventually
from twisted.python.failure import Failure

@@ -1749,18 +1748,16 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
newargs = ["--node-directory", self.getdir("client0"), verb] + list(args)
return self.run_bintahoe(newargs, stdin=stdin, env=env)

def _check_succeeded(res, check_stderr=True):
def _check_succeeded(res):
out, err, rc_or_sig = res
self.failUnlessEqual(rc_or_sig, 0, str(res))
if check_stderr:
self.assertIn(err.strip(), (b"", PYTHON_3_WARNING.encode("ascii")))

d.addCallback(_run_in_subprocess, "create-alias", "newalias")
d.addCallback(_check_succeeded)

STDIN_DATA = b"This is the file to upload from stdin."
d.addCallback(_run_in_subprocess, "put", "-", "newalias:tahoe-file", stdin=STDIN_DATA)
d.addCallback(_check_succeeded, check_stderr=False)
d.addCallback(_check_succeeded)

def _mv_with_http_proxy(ign):
env = os.environ

@@ -1773,7 +1770,6 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
def _check_ls(res):
out, err, rc_or_sig = res
self.failUnlessEqual(rc_or_sig, 0, str(res))
self.assertIn(err.strip(), (b"", PYTHON_3_WARNING.encode("ascii")))
self.failUnlessIn(b"tahoe-moved", out)
self.failIfIn(b"tahoe-file", out)
d.addCallback(_check_ls)
@@ -122,7 +122,15 @@ class SetDEPMixin(object):
}
self.node.encoding_params = p


# This doesn't actually implement the whole interface, but adding a commented
# interface implementation annotation for grepping purposes.
#@implementer(RIStorageServer)
class FakeStorageServer(object):
"""
A fake Foolscap remote object, implemented by overriding callRemote() to
call local methods.
"""
def __init__(self, mode, reactor=None):
self.mode = mode
self.allocated = []