Merge branch '3792.upload-more-logs' into 3525.test_status-no-mock

Jean-Paul Calderone 2021-09-08 12:48:53 -04:00
commit 0c4fd2fd20
5 changed files with 34 additions and 24 deletions


@@ -271,6 +271,11 @@ jobs:
# in the project source checkout.
path: "/tmp/project/_trial_temp/test.log"
- store_artifacts: &STORE_ELIOT_LOG
# Despite passing --workdir /tmp to tox above, it still runs trial
# in the project source checkout.
path: "/tmp/project/eliot.log"
- store_artifacts: &STORE_OTHER_ARTIFACTS
# Store any other artifacts, too. This is handy to allow other jobs
# sharing most of the definition of this one to be able to
@@ -413,6 +418,7 @@ jobs:
- run: *RUN_TESTS
- store_test_results: *STORE_TEST_RESULTS
- store_artifacts: *STORE_TEST_LOG
- store_artifacts: *STORE_ELIOT_LOG
- store_artifacts: *STORE_OTHER_ARTIFACTS
- run: *SUBMIT_COVERAGE


@@ -76,13 +76,18 @@ jobs:
- name: Run tox for corresponding Python version
run: python -m tox
- name: Upload eliot.log in case of failure
- name: Upload eliot.log
uses: actions/upload-artifact@v1
if: failure()
with:
name: eliot.log
path: eliot.log
- name: Upload trial log
uses: actions/upload-artifact@v1
with:
name: test.log
path: _trial_temp/test.log
# Upload this job's coverage data to Coveralls. While there is a GitHub
# Action for this, as of Jan 2021 it does not support Python coverage
# files - only lcov files. Therefore, we use coveralls-python, the
@@ -136,7 +141,7 @@ jobs:
# See notes about parallel builds on GitHub Actions at
# https://coveralls-python.readthedocs.io/en/latest/usage/configuration.html
finish-coverage-report:
needs:
needs:
- "coverage"
runs-on: "ubuntu-latest"
container: "python:3-slim"
@@ -173,7 +178,7 @@ jobs:
- name: Install Tor [Ubuntu]
if: matrix.os == 'ubuntu-latest'
run: sudo apt install tor
# TODO: See https://tahoe-lafs.org/trac/tahoe-lafs/ticket/3744.
# We have to use an older version of Tor for running integration
# tests on macOS.


@@ -539,31 +539,20 @@ For example::
[1, 5]
``GET /v1/immutable/:storage_index?share=:s0&share=:sN&offset=o1&size=z0&offset=oN&size=zN``
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
``GET /v1/immutable/:storage_index/:share_number``
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Read data from the indicated immutable shares.
If ``share`` query parameters are given, select only those shares for reading.
Otherwise, select all shares present.
If ``size`` and ``offset`` query parameters are given,
only the portions thus identified of the selected shares are returned.
Otherwise, all data from the selected shares is returned.
The response body contains a mapping giving the read data.
For example::
{
3: ["foo", "bar"],
7: ["baz", "quux"]
}
Read a contiguous sequence of bytes from one share in one bucket.
The response body is the raw share data (i.e., ``application/octet-stream``).
The ``Range`` header may be used to request exactly one ``bytes`` range.
Interpretation and response behavior are as specified in RFC 7233 § 4.1.
Multiple ranges in a single request are *not* supported.
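For example, a client read might look like the following sketch (the ``requests`` library, server address, storage index, share number, and byte range are illustrative assumptions, not part of this specification)::

    # Sketch only: read the first 1024 bytes of one share from a
    # hypothetical storage server using the third-party requests library.
    import requests

    base_url = "http://127.0.0.1:8080"      # assumed server address
    storage_index = "abcdefghijklmnop"      # illustrative storage index
    share_number = 3                        # illustrative share number

    response = requests.get(
        f"{base_url}/v1/immutable/{storage_index}/{share_number}",
        # Exactly one "bytes" range, per RFC 7233 section 4.1.
        headers={"Range": "bytes=0-1023"},
    )
    response.raise_for_status()

    # The body is the raw share data (application/octet-stream),
    # not a multipart document.
    share_bytes = response.content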
Discussion
``````````
Offset and size of the requested data are specified here as query arguments.
Instead, this information could be present in a ``Range`` header in the request.
This is the more obvious choice and leverages an HTTP feature built for exactly this use-case.
However, HTTP requires that the ``Content-Type`` of the response to "range requests" be ``multipart/...``.
Multiple ``bytes`` ranges are not supported.
HTTP requires that the ``Content-Type`` of the response in that case be ``multipart/...``.
The ``multipart`` major type brings along string sentinel delimiting as a means to frame the different response parts.
There are many drawbacks to this framing technique:
@@ -571,6 +560,15 @@ There are many drawbacks to this framing technique:
2. It is resource-intensive to parse.
3. It is complex to parse safely [#]_ [#]_ [#]_ [#]_.
A previous revision of this specification allowed requesting one or more contiguous sequences from one or more shares.
This *superficially* mirrored the Foolscap-based interface somewhat closely.
The interface was simplified to this version because it is all that is required to let clients retrieve any desired information.
It only requires that the client issue multiple requests.
This can be done with pipelining or parallel requests to avoid an additional latency penalty, as sketched below.
In the future,
if there are performance goals,
benchmarks can demonstrate whether they are achieved by a more complicated interface or some other change.
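For example, several non-contiguous regions of one share could be read with parallel single-range requests (again a sketch; the ``requests`` library, URL, and byte ranges are illustrative assumptions)::

    # Sketch only: fetch two regions of one share concurrently,
    # issuing one single-range request per region.
    from concurrent.futures import ThreadPoolExecutor

    import requests

    share_url = "http://127.0.0.1:8080/v1/immutable/abcdefghijklmnop/3"
    byte_ranges = ["bytes=0-1023", "bytes=4096-8191"]

    def read_range(range_header):
        response = requests.get(share_url, headers={"Range": range_header})
        response.raise_for_status()
        return response.content

    with ThreadPoolExecutor() as pool:
        regions = list(pool.map(read_range, byte_ranges))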
Mutable
-------


@@ -0,0 +1 @@
The Great Black Swamp proposed specification now has a simplified interface for reading data from immutable shares.

newsfragments/3792.minor (new file)