Merge remote-tracking branch 'origin/master' into patch-1

Jean-Paul Calderone 2021-01-26 09:57:05 -05:00
commit c451d947ff
178 changed files with 3002 additions and 3302 deletions


@ -91,6 +91,9 @@ workflows:
- "build-porting-depgraph":
<<: *DOCKERHUB_CONTEXT
- "typechecks":
<<: *DOCKERHUB_CONTEXT
images:
# Build the Docker images used by the ci jobs. This makes the ci jobs
# faster and takes various spurious failures out of the critical path.
@ -475,6 +478,18 @@ jobs:
. /tmp/venv/bin/activate
./misc/python3/depgraph.sh
typechecks:
docker:
- <<: *DOCKERHUB_AUTH
image: "tahoelafsci/ubuntu:18.04-py3"
steps:
- "checkout"
- run:
name: "Validate Types"
command: |
/tmp/venv/bin/tox -e typechecks
build-image: &BUILD_IMAGE
# This is a template for a job to build a Docker image that has as much of
# the setup as we can manage already done and baked in. This cuts down on


@ -32,3 +32,17 @@ coverage:
patch:
default:
threshold: 1%
codecov:
# This is a public repository so supposedly we don't "need" to use an upload
# token. However, using one makes sure that CI jobs running against forked
# repositories have coverage uploaded to the right place in codecov so
# their reports aren't incomplete.
token: "abf679b6-e2e6-4b33-b7b5-6cfbd41ee691"
notify:
# The reference documentation suggests that this is the default setting:
# https://docs.codecov.io/docs/codecovyml-reference#codecovnotifywait_for_ci
# However observation suggests otherwise.
wait_for_ci: true


@ -30,17 +30,37 @@ jobs:
with:
args: install vcpython27
# See https://github.com/actions/checkout. A fetch-depth of 0
# fetches all tags and branches.
- name: Check out Tahoe-LAFS sources
uses: actions/checkout@v2
- name: Fetch all history for all tags and branches
run: git fetch --prune --unshallow
with:
fetch-depth: 0
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v1
with:
python-version: ${{ matrix.python-version }}
# To use pip caching with GitHub Actions in an OS-independent
# manner, we need the `pip cache dir` command, which became
# available in pip v20.1. At the time of writing, GitHub
# Actions offers pip v20.3.3 for both ubuntu-latest and
# windows-latest, and pip v20.3.1 for macos-latest.
- name: Get pip cache directory
id: pip-cache
run: |
echo "::set-output name=dir::$(pip cache dir)"
# See https://github.com/actions/cache
- name: Use pip cache
uses: actions/cache@v2
with:
path: ${{ steps.pip-cache.outputs.dir }}
key: ${{ runner.os }}-pip-${{ hashFiles('**/setup.py') }}
restore-keys: |
${{ runner.os }}-pip-
- name: Install Python packages
run: |
pip install --upgrade codecov tox setuptools
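The cache step above restores by exact `key` first, then falls back to the `restore-keys` prefixes. A minimal sketch of that lookup order (the function and key names here are illustrative, not part of actions/cache itself):

```python
def pick_cache(available, exact_key, restore_prefixes):
    """Mimic actions/cache lookup: exact key first, then prefix fallbacks."""
    if exact_key in available:
        return exact_key
    for prefix in restore_prefixes:
        # actions/cache prefers the most recently created match;
        # for illustration we simply take the first one.
        for key in available:
            if key.startswith(prefix):
                return key
    return None  # cache miss: the step saves a fresh cache afterwards

caches = ["Linux-pip-abc123", "Windows-pip-def456"]
print(pick_cache(caches, "Linux-pip-999999", ["Linux-pip-"]))
```

Because the key hashes `**/setup.py`, editing setup.py misses the exact key but still restores the closest `${{ runner.os }}-pip-` cache, so most wheels stay cached.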
@ -103,15 +123,27 @@ jobs:
- name: Check out Tahoe-LAFS sources
uses: actions/checkout@v2
- name: Fetch all history for all tags and branches
run: git fetch --prune --unshallow
with:
fetch-depth: 0
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v1
with:
python-version: ${{ matrix.python-version }}
- name: Get pip cache directory
id: pip-cache
run: |
echo "::set-output name=dir::$(pip cache dir)"
- name: Use pip cache
uses: actions/cache@v2
with:
path: ${{ steps.pip-cache.outputs.dir }}
key: ${{ runner.os }}-pip-${{ hashFiles('**/setup.py') }}
restore-keys: |
${{ runner.os }}-pip-
- name: Install Python packages
run: |
pip install --upgrade tox
@ -155,15 +187,27 @@ jobs:
- name: Check out Tahoe-LAFS sources
uses: actions/checkout@v2
- name: Fetch all history for all tags and branches
run: git fetch --prune --unshallow
with:
fetch-depth: 0
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v1
with:
python-version: ${{ matrix.python-version }}
- name: Get pip cache directory
id: pip-cache
run: |
echo "::set-output name=dir::$(pip cache dir)"
- name: Use pip cache
uses: actions/cache@v2
with:
path: ${{ steps.pip-cache.outputs.dir }}
key: ${{ runner.os }}-pip-${{ hashFiles('**/setup.py') }}
restore-keys: |
${{ runner.os }}-pip-
- name: Install Python packages
run: |
pip install --upgrade tox


@ -207,3 +207,8 @@ D: various bug-fixes and features
N: Viktoriia Savchuk
W: https://twitter.com/viktoriiasvchk
D: Developer community focused improvements on the README file.
N: Lukas Pirl
E: tahoe@lukas-pirl.de
W: http://lukas-pirl.de
D: Buildslaves (Debian, Fedora, CentOS; 2016-2021)


@ -67,12 +67,12 @@ Here's how it works:
A "storage grid" is made up of a number of storage servers. A storage server
has direct attached storage (typically one or more hard disks). A "gateway"
communicates with storage nodes, and uses them to provide access to the
grid over protocols such as HTTP(S), SFTP or FTP.
grid over protocols such as HTTP(S) and SFTP.
Note that you can find "client" used to refer to gateway nodes (which act as
a client to storage servers), and also to processes or programs connecting to
a gateway node and performing operations on the grid -- for example, a CLI
command, Web browser, SFTP client, or FTP client.
command, Web browser, or SFTP client.
Users do not rely on storage servers to provide *confidentiality* nor
*integrity* for their data -- instead all of the data is encrypted and


@ -273,7 +273,7 @@ Then, do the following:
[connections]
tcp = tor
* Launch the Tahoe server with ``tahoe start $NODEDIR``
* Launch the Tahoe server with ``tahoe run $NODEDIR``
The ``tub.port`` section will cause the Tahoe server to listen on PORT, but
bind the listening socket to the loopback interface, which is not reachable
@ -435,4 +435,3 @@ It is therefore important that your I2P router is sharing bandwidth with other
routers, so that you can give back as you use I2P. This will never impair the
performance of your Tahoe-LAFS node, because your I2P router will always
prioritize your own traffic.


@ -28,7 +28,7 @@ import os
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = []
extensions = ['recommonmark']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
@ -36,7 +36,7 @@ templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
source_suffix = ['.rst', '.md']
# The encoding of source files.
#source_encoding = 'utf-8-sig'


@ -81,7 +81,6 @@ Client/server nodes provide one or more of the following services:
* web-API service
* SFTP service
* FTP service
* helper service
* storage service.
@ -365,7 +364,7 @@ set the ``tub.location`` option described below.
also generally reduced when operating in private mode.
When False, any of the following configuration problems will cause
``tahoe start`` to throw a PrivacyError instead of starting the node:
``tahoe run`` to throw a PrivacyError instead of starting the node:
* ``[node] tub.location`` contains any ``tcp:`` hints
@ -708,12 +707,12 @@ CLI
file store, uploading/downloading files, and creating/running Tahoe
nodes. See :doc:`frontends/CLI` for details.
SFTP, FTP
SFTP
Tahoe can also run both SFTP and FTP servers, and map a username/password
Tahoe can also run SFTP servers, and map a username/password
pair to a top-level Tahoe directory. See :doc:`frontends/FTP-and-SFTP`
for instructions on configuring these services, and the ``[sftpd]`` and
``[ftpd]`` sections of ``tahoe.cfg``.
for instructions on configuring this service, and the ``[sftpd]``
section of ``tahoe.cfg``.
Storage Server Configuration

docs/contributing.rst Normal file

@ -0,0 +1 @@
.. include:: ../.github/CONTRIBUTING.rst


@ -85,7 +85,7 @@ Node Management
"``tahoe create-node [NODEDIR]``" is the basic make-a-new-node
command. It creates a new directory and populates it with files that
will allow the "``tahoe start``" and related commands to use it later
will allow the "``tahoe run``" and related commands to use it later
on. ``tahoe create-node`` creates nodes that have client functionality
(upload/download files), web API services (controlled by the
'[node]web.port' configuration), and storage services (unless
@ -94,8 +94,7 @@ on. ``tahoe create-node`` creates nodes that have client functionality
NODEDIR defaults to ``~/.tahoe/`` , and newly-created nodes default to
publishing a web server on port 3456 (limited to the loopback interface, at
127.0.0.1, to restrict access to other programs on the same host). All of the
other "``tahoe``" subcommands use corresponding defaults (with the exception
that "``tahoe run``" defaults to running a node in the current directory).
other "``tahoe``" subcommands use corresponding defaults.
"``tahoe create-client [NODEDIR]``" creates a node with no storage service.
That is, it behaves like "``tahoe create-node --no-storage [NODEDIR]``".
@ -117,25 +116,6 @@ the same way on all platforms and logs to stdout. If you want to run
the process as a daemon, it is recommended that you use your favourite
daemonization tool.
The now-deprecated "``tahoe start [NODEDIR]``" command will launch a
previously-created node. It will launch the node into the background
using ``tahoe daemonize`` (an internal-only command, not for user
use). On some platforms (including Windows) this command is unable to
run a daemon in the background; in that case it behaves in the same
way as "``tahoe run``". ``tahoe start`` also monitors the logs for up
to 5 seconds looking for either a successful startup message or for
early failure messages and produces an appropriate exit code. You are
encouraged to use ``tahoe run`` along with your favourite
daemonization tool instead of this. ``tahoe start`` is maintained for
backwards compatibility with users already using it; new scripts should
depend on ``tahoe run``.
"``tahoe stop [NODEDIR]``" will shut down a running node. "``tahoe
restart [NODEDIR]``" will stop and then restart a running
node. As above, you should instead use ``tahoe run`` alongside
your favourite daemonization tool.
File Store Manipulation
=======================


@ -1,22 +1,21 @@
.. -*- coding: utf-8-with-signature -*-
=================================
Tahoe-LAFS SFTP and FTP Frontends
=================================
========================
Tahoe-LAFS SFTP Frontend
========================
1. `SFTP/FTP Background`_
1. `SFTP Background`_
2. `Tahoe-LAFS Support`_
3. `Creating an Account File`_
4. `Running An Account Server (accounts.url)`_
5. `Configuring SFTP Access`_
6. `Configuring FTP Access`_
7. `Dependencies`_
8. `Immutable and Mutable Files`_
9. `Known Issues`_
6. `Dependencies`_
7. `Immutable and Mutable Files`_
8. `Known Issues`_
SFTP/FTP Background
===================
SFTP Background
===============
FTP is the venerable internet file-transfer protocol, first developed in
1971. The FTP server usually listens on port 21. A separate connection is
@ -33,20 +32,18 @@ Both FTP and SFTP were developed assuming a UNIX-like server, with accounts
and passwords, octal file modes (user/group/other, read/write/execute), and
ctime/mtime timestamps.
We recommend SFTP over FTP, because the protocol is better, and the server
implementation in Tahoe-LAFS is more complete. See `Known Issues`_, below,
for details.
Previous versions of Tahoe-LAFS supported FTP, but now only the superior SFTP
frontend is supported. See `Known Issues`_, below, for details on the
limitations of SFTP.
Tahoe-LAFS Support
==================
All Tahoe-LAFS client nodes can run a frontend SFTP server, allowing regular
SFTP clients (like ``/usr/bin/sftp``, the ``sshfs`` FUSE plugin, and many
others) to access the file store. They can also run an FTP server, so FTP
clients (like ``/usr/bin/ftp``, ``ncftp``, and others) can too. These
frontends sit at the same level as the web-API interface.
others) to access the file store.
Since Tahoe-LAFS does not use user accounts or passwords, the SFTP/FTP
Since Tahoe-LAFS does not use user accounts or passwords, the SFTP
servers must be configured with a way to first authenticate a user (confirm
that a prospective client has a legitimate claim to whatever authorities we
might grant a particular user), and second to decide what directory cap
@ -173,39 +170,6 @@ clients and with the sshfs filesystem, see wiki:SftpFrontend_
.. _wiki:SftpFrontend: https://tahoe-lafs.org/trac/tahoe-lafs/wiki/SftpFrontend
Configuring FTP Access
======================
To enable the FTP server with an accounts file, add the following lines to
the BASEDIR/tahoe.cfg file::
[ftpd]
enabled = true
port = tcp:8021:interface=127.0.0.1
accounts.file = private/accounts
The FTP server will listen on the given port number and on the loopback
interface only. The "accounts.file" pathname will be interpreted relative to
the node's BASEDIR.
To enable the FTP server with an account server instead, provide the URL of
that server in an "accounts.url" directive::
[ftpd]
enabled = true
port = tcp:8021:interface=127.0.0.1
accounts.url = https://example.com/login
You can provide both accounts.file and accounts.url, although it probably
isn't very useful except for testing.
FTP provides no security, and so your password or caps could be eavesdropped
if you connect to the FTP server remotely. The examples above include
":interface=127.0.0.1" in the "port" option, which causes the server to only
accept connections from localhost.
Public key authentication is not supported for FTP.
Dependencies
============
@ -216,7 +180,7 @@ separately: debian puts it in the "python-twisted-conch" package.
Immutable and Mutable Files
===========================
All files created via SFTP (and FTP) are immutable files. However, files can
All files created via SFTP are immutable files. However, files can
only be created in writeable directories, which allows the directory entry to
be relinked to a different file. Normally, when the path of an immutable file
is opened for writing by SFTP, the directory entry is relinked to another
@ -256,18 +220,3 @@ See also wiki:SftpFrontend_.
.. _ticket #1059: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1059
.. _ticket #1089: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1089
Known Issues in the FTP Frontend
--------------------------------
Mutable files are not supported by the FTP frontend (`ticket #680`_).
Non-ASCII filenames are not supported by FTP (`ticket #682`_).
The FTP frontend sometimes fails to report errors, for example if an upload
fails because it does not meet the "servers of happiness" threshold (`ticket
#1081`_).
.. _ticket #680: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/680
.. _ticket #682: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/682
.. _ticket #1081: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1081


@ -2032,10 +2032,11 @@ potential for surprises when the file store structure is changed.
Tahoe-LAFS provides a mutable file store, but the ways that the store can
change are limited. The only things that can change are:
* the mapping from child names to child objects inside mutable directories
(by adding a new child, removing an existing child, or changing an
existing child to point to a different object)
* the contents of mutable files
* the mapping from child names to child objects inside mutable directories
(by adding a new child, removing an existing child, or changing an
existing child to point to a different object)
* the contents of mutable files
Obviously if you query for information about the file store and then act
to change it (such as by getting a listing of the contents of a mutable
@ -2145,7 +2146,7 @@ you could do the following::
tahoe debug dump-cap URI:CHK:n7r3m6wmomelk4sep3kw5cvduq:os7ijw5c3maek7pg65e5254k2fzjflavtpejjyhshpsxuqzhcwwq:3:20:14861
-> storage index: whpepioyrnff7orecjolvbudeu
echo "whpepioyrnff7orecjolvbudeu my puppy told me to" >>$NODEDIR/access.blacklist
tahoe restart $NODEDIR
# ... restart the node to re-read configuration ...
tahoe get URI:CHK:n7r3m6wmomelk4sep3kw5cvduq:os7ijw5c3maek7pg65e5254k2fzjflavtpejjyhshpsxuqzhcwwq:3:20:14861
-> error, 403 Access Prohibited: my puppy told me to
@ -2157,7 +2158,7 @@ When modifying the file, be careful to update it atomically, otherwise a
request may arrive while the file is only halfway written, and the partial
file may be incorrectly parsed.
The blacklist is applied to all access paths (including SFTP, FTP, and CLI
The blacklist is applied to all access paths (including SFTP and CLI
operations), not just the web-API. The blacklist also applies to directories.
If a directory is blacklisted, the gateway will refuse access to both that
directory and any child files/directories underneath it, when accessed via


@ -122,7 +122,7 @@ Who should consider using a Helper?
* clients who experience problems with TCP connection fairness: if other
programs or machines in the same home are getting less than their fair
share of upload bandwidth. If the connection is being shared fairly, then
a Tahoe upload that is happening at the same time as a single FTP upload
a Tahoe upload that is happening at the same time as a single SFTP upload
should get half the bandwidth.
* clients who have been given the helper.furl by someone who is running a
Helper and is willing to let them use it


@ -23,7 +23,7 @@ Config setting File Comment
``BASEDIR/introducer.furl`` ``BASEDIR/private/introducers.yaml``
``[client]helper.furl`` ``BASEDIR/helper.furl``
``[client]key_generator.furl`` ``BASEDIR/key_generator.furl``
``[client]stats_gatherer.furl`` ``BASEDIR/stats_gatherer.furl``
``BASEDIR/stats_gatherer.furl`` Stats gatherer has been removed.
``[storage]enabled`` ``BASEDIR/no_storage`` (``False`` if ``no_storage`` exists)
``[storage]readonly`` ``BASEDIR/readonly_storage`` (``True`` if ``readonly_storage`` exists)
``[storage]sizelimit`` ``BASEDIR/sizelimit``
@ -47,3 +47,10 @@ the now (since Tahoe-LAFS v1.3.0) unsupported
addresses specified in ``advertised_ip_addresses`` were used in
addition to any that were automatically discovered), whereas the new
``tahoe.cfg`` directive is not (``tub.location`` is used verbatim).
The stats gatherer has been broken at least since Tahoe-LAFS v1.13.0.
The (broken) functionality of ``[client]stats_gatherer.furl`` (which
was previously in ``BASEDIR/stats_gatherer.furl``), is scheduled to be
completely removed after Tahoe-LAFS v1.15.0. After that point, if
your configuration contains a ``[client]stats_gatherer.furl``, your
node will refuse to start.
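The scheduled refuse-to-start behavior could be sketched roughly like this (the function and config layout here are hypothetical illustrations, not Tahoe-LAFS's actual startup code):

```python
def check_removed_options(config):
    """Hypothetical sketch: refuse to start when a removed option remains.

    `config` is a dict of config-file sections, e.g. parsed from tahoe.cfg.
    """
    removed = [("client", "stats_gatherer.furl")]
    for section, option in removed:
        if config.get(section, {}).get(option):
            raise SystemExit(
                "[%s]%s is no longer supported; remove it from tahoe.cfg"
                % (section, option)
            )

# A config without the removed option passes silently.
check_removed_options({"client": {"helper.furl": "pb://example"}})
```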


@ -23,8 +23,9 @@ Contents:
frontends/download-status
known_issues
../.github/CONTRIBUTING
contributing
CODE_OF_CONDUCT
release-checklist
servers
helper


@ -23,7 +23,7 @@ Known Issues in Tahoe-LAFS v1.10.3, released 30-Mar-2016
* `Disclosure of file through embedded hyperlinks or JavaScript in that file`_
* `Command-line arguments are leaked to other local users`_
* `Capabilities may be leaked to web browser phishing filter / "safe browsing" servers`_
* `Known issues in the FTP and SFTP frontends`_
* `Known issues in the SFTP frontend`_
* `Traffic analysis based on sizes of files/directories, storage indices, and timing`_
* `Privacy leak via Google Chart API link in map-update timing web page`_
@ -213,8 +213,8 @@ To disable the filter in Chrome:
----
Known issues in the FTP and SFTP frontends
------------------------------------------
Known issues in the SFTP frontend
---------------------------------
These are documented in :doc:`frontends/FTP-and-SFTP` and on `the
SftpFrontend page`_ on the wiki.


@ -128,10 +128,9 @@ provided in ``misc/incident-gatherer/support_classifiers.py`` . There is
roughly one category for each ``log.WEIRD``-or-higher level event in the
Tahoe source code.
The incident gatherer is created with the "``flogtool
create-incident-gatherer WORKDIR``" command, and started with "``tahoe
start``". The generated "``gatherer.tac``" file should be modified to add
classifier functions.
The incident gatherer is created with the "``flogtool create-incident-gatherer
WORKDIR``" command, and started with "``tahoe run``". The generated
"``gatherer.tac``" file should be modified to add classifier functions.
The incident gatherer writes incident names (which are simply the relative
pathname of the ``incident-\*.flog.bz2`` file) into ``classified/CATEGORY``.
@ -175,7 +174,7 @@ things that happened on multiple machines (such as comparing a client node
making a request with the storage servers that respond to that request).
Create the Log Gatherer with the "``flogtool create-gatherer WORKDIR``"
command, and start it with "``tahoe start``". Then copy the contents of the
command, and start it with "``twistd -ny gatherer.tac``". Then copy the contents of the
``log_gatherer.furl`` file it creates into the ``BASEDIR/tahoe.cfg`` file
(under the key ``log_gatherer.furl`` of the section ``[node]``) of all nodes
that should be sending it log events. (See :doc:`configuration`)


@ -40,23 +40,31 @@ Create Branch and Apply Updates
- Create a branch for release-candidates (e.g. `XXXX.release-1.15.0.rc0`)
- run `tox -e news` to produce a new NEWS.txt file (this does a commit)
- create the news for the release
- newsfragments/<ticket number>.minor
- commit it
- manually fix NEWS.txt
- proper title for latest release ("Release 1.15.0" instead of "Release ...post1432")
- double-check date (maybe release will be in the future)
- spot-check the release notes (these come from the newsfragments
files though so don't do heavy editing)
- commit these changes
- update "relnotes.txt"
- update all mentions of 1.14.0 -> 1.15.0
- update "previous release" statement and date
- summarize major changes
- commit it
- update "CREDITS"
- are there any new contributors in this release?
- one way: git log release-1.14.0.. | grep Author | sort | uniq
- commit it
- update "docs/known_issues.rst" if appropriate
- update "docs/INSTALL.rst" references to the new release
- Push the branch to github
@ -82,25 +90,36 @@ they will need to evaluate which contributors' signatures they trust.
- (all steps above are completed)
- sign the release
- git tag -s -u 0xE34E62D06D0E69CFCA4179FFBDE0D31D68666A7A -m "release Tahoe-LAFS-1.15.0rc0" tahoe-lafs-1.15.0rc0
- (replace the key-id above with your own)
- build all code locally
- these should all pass:
- tox -e py27,codechecks,docs,integration
- these can fail (ideally they should not, of course):
- tox -e deprecations,upcoming-deprecations
- build tarballs
- tox -e tarballs
- confirm it at least exists:
- ls dist/ | grep 1.15.0rc0
- inspect and test the tarballs
- install each in a fresh virtualenv
- run `tahoe` command
- when satisfied, sign the tarballs:
- gpg --pinentry=loopback --armor --sign dist/tahoe_lafs-1.15.0rc0-py2-none-any.whl
- gpg --pinentry=loopback --armor --sign dist/tahoe_lafs-1.15.0rc0.tar.bz2
- gpg --pinentry=loopback --armor --sign dist/tahoe_lafs-1.15.0rc0.tar.gz
- gpg --pinentry=loopback --armor --sign dist/tahoe_lafs-1.15.0rc0.zip
- gpg --pinentry=loopback --armor --detach-sign dist/tahoe_lafs-1.15.0rc0-py2-none-any.whl
- gpg --pinentry=loopback --armor --detach-sign dist/tahoe_lafs-1.15.0rc0.tar.bz2
- gpg --pinentry=loopback --armor --detach-sign dist/tahoe_lafs-1.15.0rc0.tar.gz
- gpg --pinentry=loopback --armor --detach-sign dist/tahoe_lafs-1.15.0rc0.zip
Privileged Contributor
@ -129,6 +148,7 @@ need to be uploaded to https://tahoe-lafs.org in `~source/downloads`
https://tahoe-lafs.org/downloads/ on the Web.
- scp dist/*1.15.0* username@tahoe-lafs.org:/home/source/downloads
- the following developers have access to do this:
- exarkun
- meejah
- warner
@ -137,8 +157,9 @@ For the actual release, the tarball and signature files need to be
uploaded to PyPI as well.
- how to do this?
- (original guide says only "twine upload dist/*")
- (original guide says only `twine upload dist/*`)
- the following developers have access to do this:
- warner
- exarkun (partial?)
- meejah (partial?)


@ -81,9 +81,7 @@ does not offer its disk space to other nodes. To configure other behavior,
use “``tahoe create-node``” or see :doc:`configuration`.
The “``tahoe run``” command above will run the node in the foreground.
On Unix, you can run it in the background instead by using the
``tahoe start``” command. To stop a node started in this way, use
``tahoe stop``”. ``tahoe --help`` gives a summary of all commands.
``tahoe --help`` gives a summary of all commands.
Running a Server or Introducer
@ -99,12 +97,10 @@ and ``--location`` arguments.
To construct an introducer, create a new base directory for it (the name
of the directory is up to you), ``cd`` into it, and run “``tahoe
create-introducer --hostname=example.net .``” (but using the hostname of
your VPS). Now run the introducer using “``tahoe start .``”. After it
your VPS). Now run the introducer using “``tahoe run .``”. After it
starts, it will write a file named ``introducer.furl`` into the
``private/`` subdirectory of that base directory. This file contains the
URL the other nodes must use in order to connect to this introducer.
(Note that “``tahoe run .``” doesn't work for introducers, this is a
known issue: `#937`_.)
You can distribute your Introducer fURL securely to new clients by using
the ``tahoe invite`` command. This will prepare some JSON to send to the
@ -211,10 +207,10 @@ create a new directory and lose the capability to it, then you cannot
access that directory ever again.
The SFTP and FTP frontends
--------------------------
The SFTP frontend
-----------------
You can access your Tahoe-LAFS grid via any SFTP_ or FTP_ client. See
You can access your Tahoe-LAFS grid via any SFTP_ client. See
:doc:`frontends/FTP-and-SFTP` for how to set this up. On most Unix
platforms, you can also use SFTP to plug Tahoe-LAFS into your computer's
local filesystem via ``sshfs``, but see the `FAQ about performance
@ -224,7 +220,6 @@ The SftpFrontend_ page on the wiki has more information about using SFTP with
Tahoe-LAFS.
.. _SFTP: https://en.wikipedia.org/wiki/SSH_file_transfer_protocol
.. _FTP: https://en.wikipedia.org/wiki/File_Transfer_Protocol
.. _FAQ about performance problems: https://tahoe-lafs.org/trac/tahoe-lafs/wiki/FAQ#Q23_FUSE
.. _SftpFrontend: https://tahoe-lafs.org/trac/tahoe-lafs/wiki/SftpFrontend


@ -201,9 +201,8 @@ log_gatherer.furl = {log_furl}
with open(join(intro_dir, 'tahoe.cfg'), 'w') as f:
f.write(config)
# on windows, "tahoe start" means: run forever in the foreground,
# but on linux it means daemonize. "tahoe run" is consistent
# between platforms.
# "tahoe run" is consistent across Linux/macOS/Windows, unlike the old
# "start" command.
protocol = _MagicTextProtocol('introducer running')
transport = _tahoe_runner_optional_coverage(
protocol,
@ -278,9 +277,8 @@ log_gatherer.furl = {log_furl}
with open(join(intro_dir, 'tahoe.cfg'), 'w') as f:
f.write(config)
# on windows, "tahoe start" means: run forever in the foreground,
# but on linux it means daemonize. "tahoe run" is consistent
# between platforms.
# "tahoe run" is consistent across Linux/macOS/Windows, unlike the old
# "start" command.
protocol = _MagicTextProtocol('introducer running')
transport = _tahoe_runner_optional_coverage(
protocol,


@ -127,12 +127,12 @@ def test_deep_stats(alice):
dircap_uri,
data={
u"t": u"upload",
u"when_done": u".",
},
files={
u"file": FILE_CONTENTS,
},
)
resp.raise_for_status()
# confirm the file is in the directory
resp = requests.get(


@ -189,10 +189,8 @@ def _run_node(reactor, node_dir, request, magic_text):
magic_text = "client running"
protocol = _MagicTextProtocol(magic_text)
# on windows, "tahoe start" means: run forever in the foreground,
# but on linux it means daemonize. "tahoe run" is consistent
# between platforms.
# "tahoe run" is consistent across Linux/macOS/Windows, unlike the old
# "start" command.
transport = _tahoe_runner_optional_coverage(
protocol,
reactor,

mypy.ini Normal file

@ -0,0 +1,3 @@
[mypy]
ignore_missing_imports = True
plugins=mypy_zope:plugin

newsfragments/2920.minor Normal file

newsfragments/3384.minor Normal file


@ -0,0 +1 @@
Added a 'typechecks' tox environment that runs mypy to perform static typechecks.

newsfragments/3523.minor Normal file

newsfragments/3524.minor Normal file

newsfragments/3529.minor Normal file

newsfragments/3532.minor Normal file

newsfragments/3533.minor Normal file

newsfragments/3534.minor Normal file

newsfragments/3536.minor Normal file


@ -0,0 +1 @@
The deprecated ``tahoe`` start, restart, stop, and daemonize sub-commands have been removed.

newsfragments/3552.minor Normal file

newsfragments/3553.minor Normal file

newsfragments/3555.minor Normal file

newsfragments/3557.minor Normal file

newsfragments/3558.minor Normal file

newsfragments/3560.minor Normal file

newsfragments/3564.minor Normal file

newsfragments/3565.minor Normal file

newsfragments/3566.minor Normal file

newsfragments/3567.minor Normal file

newsfragments/3568.minor Normal file

newsfragments/3572.minor Normal file

newsfragments/3574.minor Normal file

newsfragments/3575.minor Normal file

newsfragments/3576.minor Normal file

newsfragments/3577.minor Normal file

newsfragments/3578.minor Normal file

newsfragments/3582.minor Normal file


@ -0,0 +1 @@
FTP is no longer supported by Tahoe-LAFS. Please use the SFTP support instead.

newsfragments/3587.minor Normal file

@ -0,0 +1 @@

newsfragments/3589.minor Normal file


@ -0,0 +1 @@
Fixed issue where redirecting old-style URIs (/uri/?uri=...) didn't work.

newsfragments/3591.minor Normal file

newsfragments/3594.minor Normal file

newsfragments/3595.minor Normal file


@ -23,21 +23,12 @@ python.pkgs.buildPythonPackage rec {
# This list is over-zealous because it's more work to disable individual
# tests within a module.
# test_system is a lot of integration-style tests that do a lot of real
# networking between many processes. They sometimes fail spuriously.
rm src/allmydata/test/test_system.py
# Many of these tests don't properly skip when i2p or tor dependencies are
# not supplied (and we are not supplying them).
rm src/allmydata/test/test_i2p_provider.py
rm src/allmydata/test/test_connections.py
rm src/allmydata/test/cli/test_create.py
rm src/allmydata/test/test_client.py
rm src/allmydata/test/test_runner.py
# Some eliot code changes behavior based on whether stdout is a tty or not
# and fails when it is not.
rm src/allmydata/test/test_eliotutil.py
'';


@ -63,12 +63,8 @@ install_requires = [
# version of cryptography will *really* be installed.
"cryptography >= 2.6",
# * We need Twisted 10.1.0 for the FTP frontend in order for
# Twisted's FTP server to support asynchronous close.
# * The SFTP frontend depends on Twisted 11.0.0 to fix the SSH server
# rekeying bug <https://twistedmatrix.com/trac/ticket/4395>
# * The FTP frontend depends on Twisted >= 11.1.0 for
# filepath.Permissions
# * The SFTP frontend and manhole depend on the conch extra. However, we
# can't explicitly declare that without an undesirable dependency on gmpy,
# as explained in ticket #2740.
@ -111,7 +107,9 @@ install_requires = [
# Eliot is contemplating dropping Python 2 support. Stick to a version we
# know works on Python 2.7.
"eliot ~= 1.7",
"eliot ~= 1.7 ; python_version < '3.0'",
# On Python 3, we want a new enough version to support custom JSON encoders.
"eliot >= 1.13.0 ; python_version > '3.0'",
# Pyrsistent 0.17.0 (which we use by way of Eliot) has dropped
# Python 2 entirely; stick to the version known to work for us.
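Environment markers like the ones in the eliot pins above are evaluated per-interpreter at install time; a rough plain-Python equivalent of that selection logic (the variable name here is illustrative):

```python
import sys

# Rough equivalent of the environment markers in install_requires:
#   "eliot ~= 1.7 ; python_version < '3.0'"
#   "eliot >= 1.13.0 ; python_version > '3.0'"
if sys.version_info < (3, 0):
    eliot_requirement = "eliot ~= 1.7"       # last line known to work on 2.7
else:
    eliot_requirement = "eliot >= 1.13.0"    # needed for custom JSON encoders
print(eliot_requirement)
```

pip performs this evaluation itself when it reads the markers, so a single sdist/wheel can carry both constraints.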
@ -383,10 +381,7 @@ setup(name="tahoe-lafs", # also set in __init__.py
# this version from time to time, but we will do it
# intentionally.
"pyflakes == 2.2.0",
# coverage 5.0 breaks the integration tests in some opaque way.
# This probably needs to be addressed in a more permanent way
# eventually...
"coverage ~= 4.5",
"coverage ~= 5.0",
"mock",
"tox",
"pytest",

View File

@@ -14,7 +14,9 @@ __all__ = [
__version__ = "unknown"
try:
from allmydata._version import __version__
# type ignored as it fails in CI
# (https://app.circleci.com/pipelines/github/tahoe-lafs/tahoe-lafs/1647/workflows/60ae95d4-abe8-492c-8a03-1ad3b9e42ed3/jobs/40972)
from allmydata._version import __version__ # type: ignore
except ImportError:
# We're running in a tree that hasn't run update_version, and didn't
# come with a _version.py, so we don't know what our version is.
@@ -24,7 +26,9 @@ except ImportError:
full_version = "unknown"
branch = "unknown"
try:
from allmydata._version import full_version, branch
# type ignored as it fails in CI
# (https://app.circleci.com/pipelines/github/tahoe-lafs/tahoe-lafs/1647/workflows/60ae95d4-abe8-492c-8a03-1ad3b9e42ed3/jobs/40972)
from allmydata._version import full_version, branch # type: ignore
except ImportError:
# We're running in a tree that hasn't run update_version, and didn't
# come with a _version.py, so we don't know what our full version or

View File

@@ -1,3 +1,14 @@
"""
Ported to Python 3.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
from future.utils import PY2
if PY2:
from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min # noqa: F401
import os
@@ -34,10 +45,10 @@ class Blacklist(object):
try:
if self.last_mtime is None or current_mtime > self.last_mtime:
self.entries.clear()
with open(self.blacklist_fn, "r") as f:
with open(self.blacklist_fn, "rb") as f:
for line in f:
line = line.strip()
if not line or line.startswith("#"):
if not line or line.startswith(b"#"):
continue
si_s, reason = line.split(None, 1)
si = base32.a2b(si_s) # must be valid base32
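The byte-oriented parsing this hunk switches to can be sketched standalone (a hypothetical helper for illustration, not the actual `Blacklist` API; base32 validation is omitted):

```python
def parse_blacklist(data: bytes) -> dict:
    """Parse blacklist lines of the form b'<storage-index> <reason>'."""
    entries = {}
    for line in data.splitlines():
        line = line.strip()
        # Skip blank lines and comments; note the bytes literal b"#",
        # which is why the file is opened in "rb" mode above.
        if not line or line.startswith(b"#"):
            continue
        si_s, reason = line.split(None, 1)
        entries[si_s] = reason
    return entries
```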

View File

@@ -86,12 +86,6 @@ _client_config = configutil.ValidConfiguration(
"shares.total",
"storage.plugins",
),
"ftpd": (
"accounts.file",
"accounts.url",
"enabled",
"port",
),
"storage": (
"debug_discard",
"enabled",
@@ -270,7 +264,7 @@ def create_client_from_config(config, _client_factory=None, _introducer_factory=
i2p_provider = create_i2p_provider(reactor, config)
tor_provider = create_tor_provider(reactor, config)
handlers = node.create_connection_handlers(reactor, config, i2p_provider, tor_provider)
handlers = node.create_connection_handlers(config, i2p_provider, tor_provider)
default_connection_handlers, foolscap_connection_handlers = handlers
tub_options = node.create_tub_options(config)
@@ -656,7 +650,6 @@ class _Client(node.Node, pollmixin.PollMixin):
raise ValueError("config error: helper is enabled, but tub "
"is not listening ('tub.port=' is empty)")
self.init_helper()
self.init_ftp_server()
self.init_sftp_server()
# If the node sees an exit_trigger file, it will poll every second to see
@@ -714,7 +707,7 @@ class _Client(node.Node, pollmixin.PollMixin):
def get_long_nodeid(self):
# this matches what IServer.get_longname() says about us elsewhere
vk_string = ed25519.string_from_verifying_key(self._node_public_key)
return remove_prefix(vk_string, "pub-")
return remove_prefix(vk_string, b"pub-")
def get_long_tubid(self):
return idlib.nodeid_b2a(self.nodeid)
@@ -898,10 +891,6 @@ class _Client(node.Node, pollmixin.PollMixin):
if helper_furl in ("None", ""):
helper_furl = None
# FURLs need to be bytes:
if helper_furl is not None:
helper_furl = helper_furl.encode("utf-8")
DEP = self.encoding_params
DEP["k"] = int(self.config.get_config("client", "shares.needed", DEP["k"]))
DEP["n"] = int(self.config.get_config("client", "shares.total", DEP["n"]))
@@ -1036,18 +1025,6 @@
)
ws.setServiceParent(self)
def init_ftp_server(self):
if self.config.get_config("ftpd", "enabled", False, boolean=True):
accountfile = self.config.get_config("ftpd", "accounts.file", None)
if accountfile:
accountfile = self.config.get_config_path(accountfile)
accounturl = self.config.get_config("ftpd", "accounts.url", None)
ftp_portstr = self.config.get_config("ftpd", "port", "8021")
from allmydata.frontends import ftpd
s = ftpd.FTPServer(self, accountfile, accounturl, ftp_portstr)
s.setServiceParent(self)
def init_sftp_server(self):
if self.config.get_config("sftpd", "enabled", False, boolean=True):
accountfile = self.config.get_config("sftpd", "accounts.file", None)

View File

@@ -57,6 +57,10 @@ class CRSEncoder(object):
return defer.succeed((shares, desired_share_ids))
def encode_proposal(self, data, desired_share_ids=None):
raise NotImplementedError()
@implementer(ICodecDecoder)
class CRSDecoder(object):

View File

@@ -1,4 +1,15 @@
"""Implementation of the deep stats class."""
"""Implementation of the deep stats class.
Ported to Python 3.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
from future.utils import PY2
if PY2:
from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min # noqa: F401
import math
@@ -13,7 +24,7 @@ from allmydata.util import mathutil
class DeepStats(object):
"""Deep stats object.
Holds results of the deep-stats opetation.
Holds results of the deep-stats operation.
Used for json generation in the API."""
# Json API version.
@@ -121,7 +132,7 @@ class DeepStats(object):
h[bucket] += 1
def get_results(self):
"""Returns deep-stats resutls."""
"""Returns deep-stats results."""
stats = self.stats.copy()
for key in self.histograms:
h = self.histograms[key]

View File

@@ -1,4 +1,16 @@
"""Directory Node implementation."""
"""Directory Node implementation.
Ported to Python 3.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
from future.utils import PY2
if PY2:
# Skip dict so it doesn't break things.
from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, list, object, range, str, max, min # noqa: F401
from past.builtins import unicode
import time
@@ -6,7 +18,6 @@ import time
from zope.interface import implementer
from twisted.internet import defer
from foolscap.api import fireEventually
import json
from allmydata.crypto import aes
from allmydata.deep_stats import DeepStats
@@ -19,7 +30,7 @@ from allmydata.interfaces import IFilesystemNode, IDirectoryNode, IFileNode, \
from allmydata.check_results import DeepCheckResults, \
DeepCheckAndRepairResults
from allmydata.monitor import Monitor
from allmydata.util import hashutil, base32, log
from allmydata.util import hashutil, base32, log, jsonbytes as json
from allmydata.util.encodingutil import quote_output, normalize
from allmydata.util.assertutil import precondition
from allmydata.util.netstring import netstring, split_netstring
@@ -37,6 +48,8 @@ from eliot.twisted import (
NAME = Field.for_types(
u"name",
# Make sure this works on Python 2; with str, it gets Future str which
# breaks Eliot.
[unicode],
u"The name linking the parent to this node.",
)
@@ -179,7 +192,7 @@ class Adder(object):
def modify(self, old_contents, servermap, first_time):
children = self.node._unpack_contents(old_contents)
now = time.time()
for (namex, (child, new_metadata)) in self.entries.iteritems():
for (namex, (child, new_metadata)) in list(self.entries.items()):
name = normalize(namex)
precondition(IFilesystemNode.providedBy(child), child)
@@ -205,8 +218,8 @@
return new_contents
def _encrypt_rw_uri(writekey, rw_uri):
precondition(isinstance(rw_uri, str), rw_uri)
precondition(isinstance(writekey, str), writekey)
precondition(isinstance(rw_uri, bytes), rw_uri)
precondition(isinstance(writekey, bytes), writekey)
salt = hashutil.mutable_rwcap_salt_hash(rw_uri)
key = hashutil.mutable_rwcap_key_hash(salt, writekey)
@@ -221,7 +234,7 @@ def _encrypt_rw_uri(writekey, rw_uri):
def pack_children(childrenx, writekey, deep_immutable=False):
# initial_children must have metadata (i.e. {} instead of None)
children = {}
for (namex, (node, metadata)) in childrenx.iteritems():
for (namex, (node, metadata)) in list(childrenx.items()):
precondition(isinstance(metadata, dict),
"directory creation requires metadata to be a dict, not None", metadata)
children[normalize(namex)] = (node, metadata)
@@ -245,18 +258,19 @@ def _pack_normalized_children(children, writekey, deep_immutable=False):
If deep_immutable is True, I will require that all my children are deeply
immutable, and will raise a MustBeDeepImmutableError if not.
"""
precondition((writekey is None) or isinstance(writekey, str), writekey)
precondition((writekey is None) or isinstance(writekey, bytes), writekey)
has_aux = isinstance(children, AuxValueDict)
entries = []
for name in sorted(children.keys()):
assert isinstance(name, unicode)
assert isinstance(name, str)
entry = None
(child, metadata) = children[name]
child.raise_error()
if deep_immutable and not child.is_allowed_in_immutable_directory():
raise MustBeDeepImmutableError("child %s is not allowed in an immutable directory" %
quote_output(name, encoding='utf-8'), name)
raise MustBeDeepImmutableError(
"child %r is not allowed in an immutable directory" % (name,),
name)
if has_aux:
entry = children.get_aux(name)
if not entry:
@@ -264,26 +278,26 @@ def _pack_normalized_children(children, writekey, deep_immutable=False):
assert isinstance(metadata, dict)
rw_uri = child.get_write_uri()
if rw_uri is None:
rw_uri = ""
assert isinstance(rw_uri, str), rw_uri
rw_uri = b""
assert isinstance(rw_uri, bytes), rw_uri
# should be prevented by MustBeDeepImmutableError check above
assert not (rw_uri and deep_immutable)
ro_uri = child.get_readonly_uri()
if ro_uri is None:
ro_uri = ""
assert isinstance(ro_uri, str), ro_uri
ro_uri = b""
assert isinstance(ro_uri, bytes), ro_uri
if writekey is not None:
writecap = netstring(_encrypt_rw_uri(writekey, rw_uri))
else:
writecap = ZERO_LEN_NETSTR
entry = "".join([netstring(name.encode("utf-8")),
entry = b"".join([netstring(name.encode("utf-8")),
netstring(strip_prefix_for_ro(ro_uri, deep_immutable)),
writecap,
netstring(json.dumps(metadata))])
netstring(json.dumps(metadata).encode("utf-8"))])
entries.append(netstring(entry))
return "".join(entries)
return b"".join(entries)
@implementer(IDirectoryNode, ICheckable, IDeepCheckable)
class DirectoryNode(object):
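The netstring framing used throughout the packing code above is plain length-prefixed encoding; a minimal sketch mirroring what `allmydata.util.netstring` provides (illustrative, not the module itself):

```python
def netstring(s: bytes) -> bytes:
    # Length-prefixed framing: payload b"hello" becomes b"5:hello,".
    # Entries are then concatenated with b"".join(), as in the hunk above.
    assert isinstance(s, bytes)
    return b"%d:%s," % (len(s), s)
```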
@@ -352,9 +366,9 @@ class DirectoryNode(object):
# cleartext. The 'name' is UTF-8 encoded, and should be normalized to NFC.
# The rwcapdata is formatted as:
# pack("16ss32s", iv, AES(H(writekey+iv), plaintext_rw_uri), mac)
assert isinstance(data, str), (repr(data), type(data))
assert isinstance(data, bytes), (repr(data), type(data))
# an empty directory is serialized as an empty string
if data == "":
if data == b"":
return AuxValueDict()
writeable = not self.is_readonly()
mutable = self.is_mutable()
@@ -373,7 +387,7 @@
# Therefore we normalize names going both in and out of directories.
name = normalize(namex_utf8.decode("utf-8"))
rw_uri = ""
rw_uri = b""
if writeable:
rw_uri = self._decrypt_rwcapdata(rwcapdata)
@@ -384,8 +398,8 @@
# ro_uri is treated in the same way for consistency.
# rw_uri and ro_uri will be either None or a non-empty string.
rw_uri = rw_uri.rstrip(' ') or None
ro_uri = ro_uri.rstrip(' ') or None
rw_uri = rw_uri.rstrip(b' ') or None
ro_uri = ro_uri.rstrip(b' ') or None
try:
child = self._create_and_validate_node(rw_uri, ro_uri, name)
@@ -468,7 +482,7 @@ class DirectoryNode(object):
exists a child of the given name, False if not."""
name = normalize(namex)
d = self._read()
d.addCallback(lambda children: children.has_key(name))
d.addCallback(lambda children: name in children)
return d
def _get(self, children, name):
@@ -543,7 +557,7 @@
else:
pathx = pathx.split("/")
for p in pathx:
assert isinstance(p, unicode), p
assert isinstance(p, str), p
childnamex = pathx[0]
remaining_pathx = pathx[1:]
if remaining_pathx:
@@ -554,9 +568,9 @@
d = self.get_child_and_metadata(childnamex)
return d
def set_uri(self, namex, writecap, readcap, metadata=None, overwrite=True):
precondition(isinstance(writecap, (str,type(None))), writecap)
precondition(isinstance(readcap, (str,type(None))), readcap)
def set_uri(self, namex, writecap, readcap=None, metadata=None, overwrite=True):
precondition(isinstance(writecap, (bytes, type(None))), writecap)
precondition(isinstance(readcap, (bytes, type(None))), readcap)
# We now allow packing unknown nodes, provided they are valid
# for this type of directory.
@@ -569,16 +583,16 @@
# this takes URIs
a = Adder(self, overwrite=overwrite,
create_readonly_node=self._create_readonly_node)
for (namex, e) in entries.iteritems():
assert isinstance(namex, unicode), namex
for (namex, e) in entries.items():
assert isinstance(namex, str), namex
if len(e) == 2:
writecap, readcap = e
metadata = None
else:
assert len(e) == 3
writecap, readcap, metadata = e
precondition(isinstance(writecap, (str,type(None))), writecap)
precondition(isinstance(readcap, (str,type(None))), readcap)
precondition(isinstance(writecap, (bytes,type(None))), writecap)
precondition(isinstance(readcap, (bytes,type(None))), readcap)
# We now allow packing unknown nodes, provided they are valid
# for this type of directory.
@@ -779,7 +793,7 @@ class DirectoryNode(object):
# in the nodecache) seem to consume about 2000 bytes.
dirkids = []
filekids = []
for name, (child, metadata) in sorted(children.iteritems()):
for name, (child, metadata) in sorted(children.items()):
childpath = path + [name]
if isinstance(child, UnknownNode):
walker.add_node(child, childpath)
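The repeated `iteritems()` → `list(…items())` changes in this file are the standard Python 2/3 porting move: `dict.iteritems()` is gone in Python 3, and wrapping `items()` in `list()` gives a snapshot that stays safe even if the dict is mutated during the loop. A small demonstration:

```python
d = {"a": 1, "b": 2}
# Iterating the list() snapshot allows deletion during the loop;
# iterating the live items() view here would raise RuntimeError.
for key, value in list(d.items()):
    if value == 1:
        del d[key]
```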

View File

@@ -1,340 +0,0 @@
from six import ensure_str
from types import NoneType
from zope.interface import implementer
from twisted.application import service, strports
from twisted.internet import defer
from twisted.internet.interfaces import IConsumer
from twisted.cred import portal
from twisted.python import filepath
from twisted.protocols import ftp
from allmydata.interfaces import IDirectoryNode, ExistingChildError, \
NoSuchChildError
from allmydata.immutable.upload import FileHandle
from allmydata.util.fileutil import EncryptedTemporaryFile
from allmydata.util.assertutil import precondition
@implementer(ftp.IReadFile)
class ReadFile(object):
def __init__(self, node):
self.node = node
def send(self, consumer):
d = self.node.read(consumer)
return d # when consumed
@implementer(IConsumer)
class FileWriter(object):
def registerProducer(self, producer, streaming):
if not streaming:
raise NotImplementedError("Non-streaming producer not supported.")
# we write the data to a temporary file, since Tahoe can't do
# streaming upload yet.
self.f = EncryptedTemporaryFile()
return None
def unregisterProducer(self):
# the upload actually happens in WriteFile.close()
pass
def write(self, data):
self.f.write(data)
@implementer(ftp.IWriteFile)
class WriteFile(object):
def __init__(self, parent, childname, convergence):
self.parent = parent
self.childname = childname
self.convergence = convergence
def receive(self):
self.c = FileWriter()
return defer.succeed(self.c)
def close(self):
u = FileHandle(self.c.f, self.convergence)
d = self.parent.add_file(self.childname, u)
return d
class NoParentError(Exception):
pass
# filepath.Permissions was added in Twisted-11.1.0, which we require. Twisted
# <15.0.0 expected an int, and only does '&' on it. Twisted >=15.0.0 expects
# a filepath.Permissions. This satisfies both.
class IntishPermissions(filepath.Permissions):
def __init__(self, statModeInt):
self._tahoe_statModeInt = statModeInt
filepath.Permissions.__init__(self, statModeInt)
def __and__(self, other):
return self._tahoe_statModeInt & other
@implementer(ftp.IFTPShell)
class Handler(object):
def __init__(self, client, rootnode, username, convergence):
self.client = client
self.root = rootnode
self.username = username
self.convergence = convergence
def makeDirectory(self, path):
d = self._get_root(path)
d.addCallback(lambda root_and_path:
self._get_or_create_directories(root_and_path[0], root_and_path[1]))
return d
def _get_or_create_directories(self, node, path):
if not IDirectoryNode.providedBy(node):
# unfortunately it is too late to provide the name of the
# blocking directory in the error message.
raise ftp.FileExistsError("cannot create directory because there "
"is a file in the way")
if not path:
return defer.succeed(node)
d = node.get(path[0])
def _maybe_create(f):
f.trap(NoSuchChildError)
return node.create_subdirectory(path[0])
d.addErrback(_maybe_create)
d.addCallback(self._get_or_create_directories, path[1:])
return d
def _get_parent(self, path):
# fire with (parentnode, childname)
path = [unicode(p) for p in path]
if not path:
raise NoParentError
childname = path[-1]
d = self._get_root(path)
def _got_root(root_and_path):
(root, path) = root_and_path
if not path:
raise NoParentError
return root.get_child_at_path(path[:-1])
d.addCallback(_got_root)
def _got_parent(parent):
return (parent, childname)
d.addCallback(_got_parent)
return d
def _remove_thing(self, path, must_be_directory=False, must_be_file=False):
d = defer.maybeDeferred(self._get_parent, path)
def _convert_error(f):
f.trap(NoParentError)
raise ftp.PermissionDeniedError("cannot delete root directory")
d.addErrback(_convert_error)
def _got_parent(parent_and_childname):
(parent, childname) = parent_and_childname
d = parent.get(childname)
def _got_child(child):
if must_be_directory and not IDirectoryNode.providedBy(child):
raise ftp.IsNotADirectoryError("rmdir called on a file")
if must_be_file and IDirectoryNode.providedBy(child):
raise ftp.IsADirectoryError("rmfile called on a directory")
return parent.delete(childname)
d.addCallback(_got_child)
d.addErrback(self._convert_error)
return d
d.addCallback(_got_parent)
return d
def removeDirectory(self, path):
return self._remove_thing(path, must_be_directory=True)
def removeFile(self, path):
return self._remove_thing(path, must_be_file=True)
def rename(self, fromPath, toPath):
# the target directory must already exist
d = self._get_parent(fromPath)
def _got_from_parent(fromparent_and_childname):
(fromparent, childname) = fromparent_and_childname
d = self._get_parent(toPath)
d.addCallback(lambda toparent_and_tochildname:
fromparent.move_child_to(childname,
toparent_and_tochildname[0], toparent_and_tochildname[1],
overwrite=False))
return d
d.addCallback(_got_from_parent)
d.addErrback(self._convert_error)
return d
def access(self, path):
# we allow access to everything that exists. We are required to raise
# an error for paths that don't exist: FTP clients (at least ncftp)
# uses this to decide whether to mkdir or not.
d = self._get_node_and_metadata_for_path(path)
d.addErrback(self._convert_error)
d.addCallback(lambda res: None)
return d
def _convert_error(self, f):
if f.check(NoSuchChildError):
childname = f.value.args[0].encode("utf-8")
msg = "'%s' doesn't exist" % childname
raise ftp.FileNotFoundError(msg)
if f.check(ExistingChildError):
msg = f.value.args[0].encode("utf-8")
raise ftp.FileExistsError(msg)
return f
def _get_root(self, path):
# return (root, remaining_path)
path = [unicode(p) for p in path]
if path and path[0] == "uri":
d = defer.maybeDeferred(self.client.create_node_from_uri,
str(path[1]))
d.addCallback(lambda root: (root, path[2:]))
else:
d = defer.succeed((self.root,path))
return d
def _get_node_and_metadata_for_path(self, path):
d = self._get_root(path)
def _got_root(root_and_path):
(root,path) = root_and_path
if path:
return root.get_child_and_metadata_at_path(path)
else:
return (root,{})
d.addCallback(_got_root)
return d
def _populate_row(self, keys, childnode_and_metadata):
(childnode, metadata) = childnode_and_metadata
values = []
isdir = bool(IDirectoryNode.providedBy(childnode))
for key in keys:
if key == "size":
if isdir:
value = 0
else:
value = childnode.get_size() or 0
elif key == "directory":
value = isdir
elif key == "permissions":
# Twisted-14.0.2 (and earlier) expected an int, and used it
# in a rendering function that did (mode & NUMBER).
# Twisted-15.0.0 expects a
# twisted.python.filepath.Permissions , and calls its
# .shorthand() method. This provides both.
value = IntishPermissions(0o600)
elif key == "hardlinks":
value = 1
elif key == "modified":
# follow sftpd convention (i.e. linkmotime in preference to mtime)
if "linkmotime" in metadata.get("tahoe", {}):
value = metadata["tahoe"]["linkmotime"]
else:
value = metadata.get("mtime", 0)
elif key == "owner":
value = self.username
elif key == "group":
value = self.username
else:
value = "??"
values.append(value)
return values
def stat(self, path, keys=()):
# for files only, I think
d = self._get_node_and_metadata_for_path(path)
def _render(node_and_metadata):
(node, metadata) = node_and_metadata
assert not IDirectoryNode.providedBy(node)
return self._populate_row(keys, (node,metadata))
d.addCallback(_render)
d.addErrback(self._convert_error)
return d
def list(self, path, keys=()):
# the interface claims that path is a list of unicodes, but in
# practice it is not
d = self._get_node_and_metadata_for_path(path)
def _list(node_and_metadata):
(node, metadata) = node_and_metadata
if IDirectoryNode.providedBy(node):
return node.list()
return { path[-1]: (node, metadata) } # need last-edge metadata
d.addCallback(_list)
def _render(children):
results = []
for (name, childnode) in children.iteritems():
# the interface claims that the result should have a unicode
# object as the name, but it fails unless you give it a
# bytestring
results.append( (name.encode("utf-8"),
self._populate_row(keys, childnode) ) )
return results
d.addCallback(_render)
d.addErrback(self._convert_error)
return d
def openForReading(self, path):
d = self._get_node_and_metadata_for_path(path)
d.addCallback(lambda node_and_metadata: ReadFile(node_and_metadata[0]))
d.addErrback(self._convert_error)
return d
def openForWriting(self, path):
path = [unicode(p) for p in path]
if not path:
raise ftp.PermissionDeniedError("cannot STOR to root directory")
childname = path[-1]
d = self._get_root(path)
def _got_root(root_and_path):
(root, path) = root_and_path
if not path:
raise ftp.PermissionDeniedError("cannot STOR to root directory")
return root.get_child_at_path(path[:-1])
d.addCallback(_got_root)
def _got_parent(parent):
return WriteFile(parent, childname, self.convergence)
d.addCallback(_got_parent)
return d
from allmydata.frontends.auth import AccountURLChecker, AccountFileChecker, NeedRootcapLookupScheme
@implementer(portal.IRealm)
class Dispatcher(object):
def __init__(self, client):
self.client = client
def requestAvatar(self, avatarID, mind, interface):
assert interface == ftp.IFTPShell
rootnode = self.client.create_node_from_uri(avatarID.rootcap)
convergence = self.client.convergence
s = Handler(self.client, rootnode, avatarID.username, convergence)
def logout(): pass
return (interface, s, None)
class FTPServer(service.MultiService):
def __init__(self, client, accountfile, accounturl, ftp_portstr):
precondition(isinstance(accountfile, (unicode, NoneType)), accountfile)
service.MultiService.__init__(self)
r = Dispatcher(client)
p = portal.Portal(r)
if accountfile:
c = AccountFileChecker(self, accountfile)
p.registerChecker(c)
if accounturl:
c = AccountURLChecker(self, accounturl)
p.registerChecker(c)
if not accountfile and not accounturl:
# we could leave this anonymous, with just the /uri/CAP form
raise NeedRootcapLookupScheme("must provide some translation")
f = ftp.FTPFactory(p)
# strports requires a native string.
ftp_portstr = ensure_str(ftp_portstr)
s = strports.service(ftp_portstr, f)
s.setServiceParent(self)

View File

@@ -1,6 +1,5 @@
import six
import heapq, traceback, array, stat, struct
from types import NoneType
from stat import S_IFREG, S_IFDIR
from time import time, strftime, localtime
@@ -267,7 +266,7 @@ def _attrs_to_metadata(attrs):
def _direntry_for(filenode_or_parent, childname, filenode=None):
precondition(isinstance(childname, (unicode, NoneType)), childname=childname)
precondition(isinstance(childname, (unicode, type(None))), childname=childname)
if childname is None:
filenode_or_parent = filenode
@@ -672,7 +671,7 @@ class GeneralSFTPFile(PrefixingLogMixin):
self.log(".open(parent=%r, childname=%r, filenode=%r, metadata=%r)" %
(parent, childname, filenode, metadata), level=OPERATIONAL)
precondition(isinstance(childname, (unicode, NoneType)), childname=childname)
precondition(isinstance(childname, (unicode, type(None))), childname=childname)
precondition(filenode is None or IFileNode.providedBy(filenode), filenode=filenode)
precondition(not self.closed, sftpfile=self)
@@ -1194,7 +1193,7 @@ class SFTPUserHandler(ConchUser, PrefixingLogMixin):
request = "._sync_heisenfiles(%r, %r, ignore=%r)" % (userpath, direntry, ignore)
self.log(request, level=OPERATIONAL)
_assert(isinstance(userpath, str) and isinstance(direntry, (str, NoneType)),
_assert(isinstance(userpath, str) and isinstance(direntry, (str, type(None))),
userpath=userpath, direntry=direntry)
files = []
@@ -1219,7 +1218,7 @@ class SFTPUserHandler(ConchUser, PrefixingLogMixin):
def _remove_heisenfile(self, userpath, parent, childname, file_to_remove):
if noisy: self.log("._remove_heisenfile(%r, %r, %r, %r)" % (userpath, parent, childname, file_to_remove), level=NOISY)
_assert(isinstance(userpath, str) and isinstance(childname, (unicode, NoneType)),
_assert(isinstance(userpath, str) and isinstance(childname, (unicode, type(None))),
userpath=userpath, childname=childname)
direntry = _direntry_for(parent, childname)
@@ -1246,7 +1245,7 @@ class SFTPUserHandler(ConchUser, PrefixingLogMixin):
(existing_file, userpath, flags, _repr_flags(flags), parent, childname, filenode, metadata),
level=NOISY)
_assert((isinstance(userpath, str) and isinstance(childname, (unicode, NoneType)) and
_assert((isinstance(userpath, str) and isinstance(childname, (unicode, type(None))) and
(metadata is None or 'no-write' in metadata)),
userpath=userpath, childname=childname, metadata=metadata)
@@ -1979,7 +1978,7 @@ class SFTPServer(service.MultiService):
def __init__(self, client, accountfile, accounturl,
sftp_portstr, pubkey_file, privkey_file):
precondition(isinstance(accountfile, (unicode, NoneType)), accountfile)
precondition(isinstance(accountfile, (unicode, type(None))), accountfile)
precondition(isinstance(pubkey_file, unicode), pubkey_file)
precondition(isinstance(privkey_file, unicode), privkey_file)
service.MultiService.__init__(self)

View File

@@ -9,6 +9,7 @@ from __future__ import unicode_literals
from future.utils import PY2
if PY2:
from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min # noqa: F401
from six import ensure_str
import time
now = time.time
@@ -98,7 +99,7 @@ class ShareFinder(object):
# internal methods
def loop(self):
pending_s = ",".join([rt.server.get_name()
pending_s = ",".join([ensure_str(rt.server.get_name())
for rt in self.pending_requests]) # sort?
self.log(format="ShareFinder loop: running=%(running)s"
" hungry=%(hungry)s, pending=%(pending)s",

View File

@@ -255,11 +255,11 @@ class Encoder(object):
# captures the slot, not the value
#d.addCallback(lambda res: self.do_segment(i))
# use this form instead:
d.addCallback(lambda res, i=i: self._encode_segment(i))
d.addCallback(lambda res, i=i: self._encode_segment(i, is_tail=False))
d.addCallback(self._send_segment, i)
d.addCallback(self._turn_barrier)
last_segnum = self.num_segments - 1
d.addCallback(lambda res: self._encode_tail_segment(last_segnum))
d.addCallback(lambda res: self._encode_segment(last_segnum, is_tail=True))
d.addCallback(self._send_segment, last_segnum)
d.addCallback(self._turn_barrier)
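The "captures the slot, not the value" comment above is about Python's late-binding closures: a loop variable referenced in a plain lambda is looked up when the lambda runs, not when it is defined, which is why the code uses the `lambda res, i=i:` form. A minimal demonstration:

```python
# A plain lambda closes over the variable i itself ("the slot"):
late = [lambda: i for i in range(3)]

# Binding i as a default argument captures each value at definition time:
captured = [lambda i=i: i for i in range(3)]
```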
@@ -317,8 +317,24 @@ class Encoder(object):
dl.append(d)
return self._gather_responses(dl)
def _encode_segment(self, segnum):
codec = self._codec
def _encode_segment(self, segnum, is_tail):
"""
Encode one segment of input into the configured number of shares.
:param segnum: Ostensibly, the number of the segment to encode. In
reality, this parameter is ignored and the *next* segment is
encoded and returned.
:param bool is_tail: ``True`` if this is the last segment, ``False``
otherwise.
:return: A ``Deferred`` which fires with a two-tuple. The first
element is a list of string-y objects representing the encoded
segment data for one of the shares. The second element is a list
of integers giving the share numbers of the shares in the first
element.
"""
codec = self._tail_codec if is_tail else self._codec
start = time.time()
# the ICodecEncoder API wants to receive a total of self.segment_size
@@ -350,9 +366,11 @@
# footprint to 430KiB at the expense of more hash-tree overhead.
d = self._gather_data(self.required_shares, input_piece_size,
crypttext_segment_hasher)
crypttext_segment_hasher, allow_short=is_tail)
def _done_gathering(chunks):
for c in chunks:
# If is_tail then a short trailing chunk will have been padded
# by _gather_data
assert len(c) == input_piece_size
self._crypttext_hashes.append(crypttext_segment_hasher.digest())
# during this call, we hit 5*segsize memory
@@ -365,31 +383,6 @@
d.addCallback(_done)
return d
def _encode_tail_segment(self, segnum):
start = time.time()
codec = self._tail_codec
input_piece_size = codec.get_block_size()
crypttext_segment_hasher = hashutil.crypttext_segment_hasher()
d = self._gather_data(self.required_shares, input_piece_size,
crypttext_segment_hasher, allow_short=True)
def _done_gathering(chunks):
for c in chunks:
# a short trailing chunk will have been padded by
# _gather_data
assert len(c) == input_piece_size
self._crypttext_hashes.append(crypttext_segment_hasher.digest())
return codec.encode(chunks)
d.addCallback(_done_gathering)
def _done(res):
elapsed = time.time() - start
self._times["cumulative_encoding"] += elapsed
return res
d.addCallback(_done)
return d
def _gather_data(self, num_chunks, input_chunk_size,
crypttext_segment_hasher,
allow_short=False):
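The `allow_short=is_tail` path in the merged `_encode_segment` pads a short trailing chunk up to the codec's block size, which is what the `assert len(c) == input_piece_size` above relies on. A sketch of that padding, assuming NUL padding as `_gather_data` performs (hypothetical helper name):

```python
def pad_to_block_size(chunk: bytes, input_piece_size: int) -> bytes:
    # A short tail chunk is padded with NULs so every chunk handed to
    # the tail codec has exactly the codec's block size.
    if len(chunk) < input_piece_size:
        chunk += b"\x00" * (input_piece_size - len(chunk))
    return chunk
```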

View File

@@ -19,7 +19,7 @@ from twisted.protocols import basic
from allmydata.interfaces import IImmutableFileNode, ICheckable
from allmydata.uri import LiteralFileURI
@implementer(IImmutableFileNode, ICheckable)
class _ImmutableFileNodeBase(object):
def get_write_uri(self):
@@ -56,6 +56,7 @@ class _ImmutableFileNodeBase(object):
return not self == other
@implementer(IImmutableFileNode, ICheckable)
class LiteralFileNode(_ImmutableFileNodeBase):
def __init__(self, filecap):

View File

@@ -141,7 +141,7 @@ class CHKCheckerAndUEBFetcher(object):
@implementer(interfaces.RICHKUploadHelper)
class CHKUploadHelper(Referenceable, upload.CHKUploader):
class CHKUploadHelper(Referenceable, upload.CHKUploader): # type: ignore # warner/foolscap#78
"""I am the helper-server -side counterpart to AssistedUploader. I handle
peer selection, encoding, and share pushing. I read ciphertext from the
remote AssistedUploader.
@@ -499,10 +499,13 @@ class LocalCiphertextReader(AskUntilSuccessMixin):
# ??. I'm not sure if it makes sense to forward the close message.
return self.call("close")
# https://tahoe-lafs.org/trac/tahoe-lafs/ticket/3561
def set_upload_status(self, upload_status):
raise NotImplementedError
@implementer(interfaces.RIHelper, interfaces.IStatsProducer)
class Helper(Referenceable):
class Helper(Referenceable): # type: ignore # warner/foolscap#78
"""
:ivar dict[bytes, CHKUploadHelper] _active_uploads: For any uploads which
have been started but not finished, a mapping from storage index to the

View File

@@ -11,20 +11,32 @@ from future.utils import PY2, native_str
if PY2:
from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min # noqa: F401
from past.builtins import long, unicode
from six import ensure_str
try:
from typing import List
except ImportError:
pass
import os, time, weakref, itertools
import attr
from zope.interface import implementer
from twisted.python import failure
from twisted.internet import defer
from twisted.application import service
from foolscap.api import Referenceable, Copyable, RemoteCopy, fireEventually
from foolscap.api import Referenceable, Copyable, RemoteCopy
from allmydata.crypto import aes
from allmydata.util.hashutil import file_renewal_secret_hash, \
file_cancel_secret_hash, bucket_renewal_secret_hash, \
bucket_cancel_secret_hash, plaintext_hasher, \
storage_index_hash, plaintext_segment_hasher, convergence_hasher
from allmydata.util.deferredutil import timeout_call
from allmydata.util.deferredutil import (
timeout_call,
until,
)
from allmydata import hashtree, uri
from allmydata.storage.server import si_b2a
from allmydata.immutable import encode
@ -385,6 +397,9 @@ class PeerSelector(object):
)
return self.happiness_mappings
def add_peers(self, peerids=None):
raise NotImplementedError
class _QueryStatistics(object):
@ -896,13 +911,45 @@ class Tahoe2ServerSelector(log.PrefixingLogMixin):
raise UploadUnhappinessError(msg)
@attr.s
class _Accum(object):
"""
Accumulate up to some known amount of ciphertext.
:ivar remaining: The number of bytes still expected.
:ivar ciphertext: The bytes accumulated so far.
"""
remaining = attr.ib(validator=attr.validators.instance_of(int)) # type: int
ciphertext = attr.ib(default=attr.Factory(list)) # type: List[bytes]
def extend(self,
size, # type: int
ciphertext, # type: List[bytes]
):
"""
Accumulate some more ciphertext.
:param size: The amount of data the new ciphertext represents towards
the goal. This may be more than the actual size of the given
ciphertext if the source has run out of data.
:param ciphertext: The new ciphertext to accumulate.
"""
self.remaining -= size
self.ciphertext.extend(ciphertext)
@implementer(IEncryptedUploadable)
class EncryptAnUploadable(object):
"""This is a wrapper that takes an IUploadable and provides
IEncryptedUploadable."""
CHUNKSIZE = 50*1024
def __init__(self, original, log_parent=None, progress=None):
def __init__(self, original, log_parent=None, progress=None, chunk_size=None):
"""
:param chunk_size: The number of bytes to read from the uploadable at a
time, or None for some default.
"""
precondition(original.default_params_set,
"set_default_encoding_parameters not called on %r before wrapping with EncryptAnUploadable" % (original,))
self.original = IUploadable(original)
@ -916,6 +963,8 @@ class EncryptAnUploadable(object):
self._ciphertext_bytes_read = 0
self._status = None
self._progress = progress
if chunk_size is not None:
self.CHUNKSIZE = chunk_size
def set_upload_status(self, upload_status):
self._status = IUploadStatus(upload_status)
@ -1022,47 +1071,53 @@ class EncryptAnUploadable(object):
# and size
d.addCallback(lambda ignored: self.get_size())
d.addCallback(lambda ignored: self._get_encryptor())
# then fetch and encrypt the plaintext. The unusual structure here
# (passing a Deferred *into* a function) is needed to avoid
# overflowing the stack: Deferreds don't optimize out tail recursion.
# We also pass in a list, to which _read_encrypted will append
# ciphertext.
ciphertext = []
d2 = defer.Deferred()
d.addCallback(lambda ignored:
self._read_encrypted(length, ciphertext, hash_only, d2))
d.addCallback(lambda ignored: d2)
accum = _Accum(length)
def action():
"""
Read some bytes into the accumulator.
"""
return self._read_encrypted(accum, hash_only)
def condition():
"""
Check to see if the accumulator has all the data.
"""
return accum.remaining == 0
d.addCallback(lambda ignored: until(action, condition))
d.addCallback(lambda ignored: accum.ciphertext)
return d
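The refactor above replaces the old pass-a-Deferred-into-the-recursion trick with an accumulator plus an `until(action, condition)` loop helper. A minimal asyncio analogue of that pattern (hypothetical names; the real helper is Deferred-based, in `allmydata.util.deferredutil`):

```python
import asyncio

async def until(action, condition):
    # Run the asynchronous action repeatedly until condition() is true,
    # without growing the call stack the way chained recursion would.
    while True:
        await action()
        if condition():
            return

class Accum:
    # Plain-Python analogue of the _Accum accumulator above.
    def __init__(self, remaining):
        self.remaining = remaining
        self.ciphertext = []

    def extend(self, size, chunks):
        # Count progress by expected size, not actual size, so the
        # loop still terminates if the source runs out of data early.
        self.remaining -= size
        self.ciphertext.extend(chunks)

async def read_chunk(accum):
    # Stand-in for _read_encrypted: read/encrypt up to 4 bytes at a time.
    size = min(accum.remaining, 4)
    accum.extend(size, [b"x" * size])

async def demo():
    accum = Accum(10)
    await until(lambda: read_chunk(accum), lambda: accum.remaining == 0)
    return b"".join(accum.ciphertext)

result = asyncio.run(demo())  # three iterations: 4 + 4 + 2 bytes
```

The same shape — an `action` that makes progress and a `condition` that checks for completion — is what the `d.addCallback(lambda ignored: until(action, condition))` call in the diff expresses with Deferreds.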
def _read_encrypted(self, remaining, ciphertext, hash_only, fire_when_done):
if not remaining:
fire_when_done.callback(ciphertext)
return None
def _read_encrypted(self,
ciphertext_accum, # type: _Accum
hash_only, # type: bool
):
# type: (...) -> defer.Deferred
"""
Read the next chunk of plaintext, encrypt it, and extend the accumulator
with the resulting ciphertext.
"""
# tolerate large length= values without consuming a lot of RAM by
# reading just a chunk (say 50kB) at a time. This only really matters
# when hash_only==True (i.e. resuming an interrupted upload), since
# that's the case where we will be skipping over a lot of data.
size = min(remaining, self.CHUNKSIZE)
remaining = remaining - size
size = min(ciphertext_accum.remaining, self.CHUNKSIZE)
# read a chunk of plaintext..
d = defer.maybeDeferred(self.original.read, size)
# N.B.: if read() is synchronous, then since everything else is
# actually synchronous too, we'd blow the stack unless we stall for a
# tick. Once you accept a Deferred from IUploadable.read(), you must
# be prepared to have it fire immediately too.
d.addCallback(fireEventually)
def _good(plaintext):
# and encrypt it..
# o/' over the fields we go, hashing all the way, sHA! sHA! sHA! o/'
ct = self._hash_and_encrypt_plaintext(plaintext, hash_only)
ciphertext.extend(ct)
self._read_encrypted(remaining, ciphertext, hash_only,
fire_when_done)
def _err(why):
fire_when_done.errback(why)
# Intentionally tell the accumulator about the expected size, not
# the actual size. If we run out of data we still want remaining
# to drop otherwise it will never reach 0 and the loop will never
# end.
ciphertext_accum.extend(size, ct)
d.addCallback(_good)
d.addErrback(_err)
return None
return d
def _hash_and_encrypt_plaintext(self, data, hash_only):
assert isinstance(data, (tuple, list)), type(data)
@ -1423,7 +1478,7 @@ class LiteralUploader(object):
return self._status
@implementer(RIEncryptedUploadable)
class RemoteEncryptedUploadable(Referenceable):
class RemoteEncryptedUploadable(Referenceable): # type: ignore # warner/foolscap#78
def __init__(self, encrypted_uploadable, upload_status):
self._eu = IEncryptedUploadable(encrypted_uploadable)
@ -1825,7 +1880,7 @@ class Uploader(service.MultiService, log.PrefixingLogMixin):
def startService(self):
service.MultiService.startService(self)
if self._helper_furl:
self.parent.tub.connectTo(self._helper_furl,
self.parent.tub.connectTo(ensure_str(self._helper_furl),
self._got_helper)
def _got_helper(self, helper):


@ -681,7 +681,7 @@ class IURI(Interface):
passing into init_from_string."""
class IVerifierURI(Interface, IURI):
class IVerifierURI(IURI):
def init_from_string(uri):
"""Accept a string (as created by my to_string() method) and populate
this instance with its data. I am not normally called directly,
@ -748,7 +748,7 @@ class IProgress(Interface):
"Current amount of progress (in percentage)"
)
def set_progress(self, value):
def set_progress(value):
"""
Sets the current amount of progress.
@ -756,7 +756,7 @@ class IProgress(Interface):
set_progress_total.
"""
def set_progress_total(self, value):
def set_progress_total(value):
"""
Sets the total amount of expected progress
@ -859,12 +859,6 @@ class IPeerSelector(Interface):
peer selection begins.
"""
def confirm_share_allocation(peerid, shnum):
"""
Confirm that an allocated peer=>share pairing has been
successfully established.
"""
def add_peers(peerids=set):
"""
Update my internal state to include the peers in peerids as
@ -1824,11 +1818,6 @@ class IEncoder(Interface):
willing to receive data.
"""
def set_size(size):
"""Specify the number of bytes that will be encoded. This must be
performed before get_serialized_params() can be called.
"""
def set_encrypted_uploadable(u):
"""Provide a source of encrypted upload data. 'u' must implement
IEncryptedUploadable.
@ -3141,3 +3130,24 @@ class IAnnounceableStorageServer(Interface):
:type: ``IReferenceable`` provider
"""
)
class IAddressFamily(Interface):
"""
Support for one specific address family.
This stretches the definition of address family to include things like Tor
and I2P.
"""
def get_listener():
"""
Return a string endpoint description or an ``IStreamServerEndpoint``.
This would be named ``get_server_endpoint`` if not for historical
reasons.
"""
def get_client_endpoint():
"""
Return an ``IStreamClientEndpoint``.
"""


@ -11,12 +11,12 @@ if PY2:
from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min # noqa: F401
from past.builtins import long
from six import ensure_text
from six import ensure_text, ensure_str
import time
from zope.interface import implementer
from twisted.application import service
from foolscap.api import Referenceable, eventually
from foolscap.api import Referenceable
from allmydata.interfaces import InsufficientVersionError
from allmydata.introducer.interfaces import IIntroducerClient, \
RIIntroducerSubscriberClient_v2
@ -24,6 +24,9 @@ from allmydata.introducer.common import sign_to_foolscap, unsign_from_foolscap,\
get_tubid_string_from_ann
from allmydata.util import log, yamlutil, connection_status
from allmydata.util.rrefutil import add_version_to_remote_reference
from allmydata.util.observer import (
ObserverList,
)
from allmydata.crypto.error import BadSignature
from allmydata.util.assertutil import precondition
@ -39,8 +42,6 @@ class IntroducerClient(service.Service, Referenceable):
nickname, my_version, oldest_supported,
sequencer, cache_filepath):
self._tub = tub
if isinstance(introducer_furl, str):
introducer_furl = introducer_furl.encode("utf-8")
self.introducer_furl = introducer_furl
assert isinstance(nickname, str)
@ -64,8 +65,7 @@ class IntroducerClient(service.Service, Referenceable):
self._publisher = None
self._since = None
self._local_subscribers = [] # (servicename,cb,args,kwargs) tuples
self._subscribed_service_names = set()
self._local_subscribers = {} # {servicename: ObserverList}
self._subscriptions = set() # requests we've actually sent
# _inbound_announcements remembers one announcement per
@ -96,7 +96,7 @@ class IntroducerClient(service.Service, Referenceable):
def startService(self):
service.Service.startService(self)
self._introducer_error = None
rc = self._tub.connectTo(self.introducer_furl, self._got_introducer)
rc = self._tub.connectTo(ensure_str(self.introducer_furl), self._got_introducer)
self._introducer_reconnector = rc
def connect_failed(failure):
self.log("Initial Introducer connection failed: perhaps it's down",
@ -178,22 +178,22 @@ class IntroducerClient(service.Service, Referenceable):
kwargs["facility"] = "tahoe.introducer.client"
return log.msg(*args, **kwargs)
def subscribe_to(self, service_name, cb, *args, **kwargs):
self._local_subscribers.append( (service_name,cb,args,kwargs) )
self._subscribed_service_names.add(service_name)
def subscribe_to(self, service_name, callback, *args, **kwargs):
obs = self._local_subscribers.setdefault(service_name, ObserverList())
obs.subscribe(lambda key_s, ann: callback(key_s, ann, *args, **kwargs))
self._maybe_subscribe()
for index,(ann,key_s,when) in list(self._inbound_announcements.items()):
precondition(isinstance(key_s, bytes), key_s)
servicename = index[0]
if servicename == service_name:
eventually(cb, key_s, ann, *args, **kwargs)
obs.notify(key_s, ann)
def _maybe_subscribe(self):
if not self._publisher:
self.log("want to subscribe, but no introducer yet",
level=log.NOISY)
return
for service_name in self._subscribed_service_names:
for service_name in self._local_subscribers:
if service_name in self._subscriptions:
continue
self._subscriptions.add(service_name)
@ -272,7 +272,7 @@ class IntroducerClient(service.Service, Referenceable):
precondition(isinstance(key_s, bytes), key_s)
self._debug_counts["inbound_announcement"] += 1
service_name = str(ann["service-name"])
if service_name not in self._subscribed_service_names:
if service_name not in self._local_subscribers:
self.log("announcement for a service we don't care about [%s]"
% (service_name,), level=log.UNUSUAL, umid="dIpGNA")
self._debug_counts["wrong_service"] += 1
@ -343,9 +343,9 @@ class IntroducerClient(service.Service, Referenceable):
def _deliver_announcements(self, key_s, ann):
precondition(isinstance(key_s, bytes), key_s)
service_name = str(ann["service-name"])
for (service_name2,cb,args,kwargs) in self._local_subscribers:
if service_name2 == service_name:
eventually(cb, key_s, ann, *args, **kwargs)
obs = self._local_subscribers.get(service_name)
if obs is not None:
obs.notify(key_s, ann)
def connection_status(self):
assert self.running # startService builds _introducer_reconnector
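The subscription refactor above swaps a list of `(servicename, cb, args, kwargs)` tuples for one `ObserverList` per service name. A minimal sketch of that pattern (the real class, `allmydata.util.observer.ObserverList`, delivers notifications via the reactor rather than synchronously):

```python
class ObserverList:
    # Minimal synchronous observer list.
    def __init__(self):
        self._watchers = []

    def subscribe(self, callback):
        self._watchers.append(callback)

    def notify(self, *args, **kwargs):
        # Iterate over a copy so callbacks may subscribe during delivery.
        for watcher in list(self._watchers):
            watcher(*args, **kwargs)

local_subscribers = {}

def subscribe_to(service_name, callback):
    # One ObserverList per service name, as in the refactored client;
    # the dict's keys double as the set of subscribed service names.
    obs = local_subscribers.setdefault(service_name, ObserverList())
    obs.subscribe(callback)
    return obs

seen = []
subscribe_to("storage", lambda key_s, ann: seen.append((key_s, ann)))
local_subscribers["storage"].notify(b"key", {"service-name": "storage"})
```

This is why the diff can drop `_subscribed_service_names`: membership in `_local_subscribers` now carries the same information.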


@ -73,7 +73,7 @@ class IIntroducerClient(Interface):
publish their services to the rest of the world, and I help them learn
about services available on other nodes."""
def publish(service_name, ann, signing_key=None):
def publish(service_name, ann, signing_key):
"""Publish the given announcement dictionary (which must be
JSON-serializable), plus some additional keys, to the world.
@ -83,8 +83,7 @@ class IIntroducerClient(Interface):
the signing_key, if present, otherwise it is derived from the
'anonymous-storage-FURL' key.
If signing_key= is set to an instance of SigningKey, it will be
used to sign the announcement."""
signing_key (a SigningKey) will be used to sign the announcement."""
def subscribe_to(service_name, callback, *args, **kwargs):
"""Call this if you will eventually want to use services with the


@ -15,6 +15,12 @@ from past.builtins import long
from six import ensure_text
import time, os.path, textwrap
try:
from typing import Any, Dict, Union
except ImportError:
pass
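The guarded `typing` import above is the usual Python-2-compatible idiom for comment-style annotations: `typing` may be absent at runtime, but type comments are never evaluated, so the fallback costs nothing. A small sketch:

```python
try:
    from typing import Dict  # noqa: F401 - only the type checker needs this
except ImportError:
    # Python 2 without the typing backport: comment-style annotations
    # are never evaluated at runtime, so the missing name is harmless.
    pass

counts = {}  # type: Dict[str, int]
counts["x"] = 1
```

The `VERSION = { ... } # type: Dict[Union[bytes, str], Any]` annotation in this hunk uses the same mechanism.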
from zope.interface import implementer
from twisted.application import service
from twisted.internet import defer
@ -70,7 +76,7 @@ def create_introducer(basedir=u"."):
i2p_provider = create_i2p_provider(reactor, config)
tor_provider = create_tor_provider(reactor, config)
default_connection_handlers, foolscap_connection_handlers = create_connection_handlers(reactor, config, i2p_provider, tor_provider)
default_connection_handlers, foolscap_connection_handlers = create_connection_handlers(config, i2p_provider, tor_provider)
tub_options = create_tub_options(config)
# we don't remember these because the Introducer doesn't make
@ -147,10 +153,12 @@ class IntroducerService(service.MultiService, Referenceable):
name = "introducer"
# v1 is the original protocol, added in 1.0 (but only advertised starting
# in 1.3), removed in 1.12. v2 is the new signed protocol, added in 1.10
VERSION = { #"http://allmydata.org/tahoe/protocols/introducer/v1": { },
# TODO: reconcile bytes/str for keys
VERSION = {
#"http://allmydata.org/tahoe/protocols/introducer/v1": { },
b"http://allmydata.org/tahoe/protocols/introducer/v2": { },
b"application-version": allmydata.__full_version__.encode("utf-8"),
}
} # type: Dict[Union[bytes, str], Any]
def __init__(self):
service.MultiService.__init__(self)


@ -564,7 +564,7 @@ class MutableFileNode(object):
return d
def upload(self, new_contents, servermap):
def upload(self, new_contents, servermap, progress=None):
"""
I overwrite the contents of the best recoverable version of this
mutable file with new_contents, using servermap instead of
@ -951,7 +951,7 @@ class MutableFileVersion(object):
return self._servermap.size_of_version(self._version)
def download_to_data(self, fetch_privkey=False, progress=None):
def download_to_data(self, fetch_privkey=False, progress=None): # type: ignore # fixme
"""
I return a Deferred that fires with the contents of this
readable object as a byte string.
@ -1205,3 +1205,7 @@ class MutableFileVersion(object):
self._servermap,
mode=mode)
return u.update()
# https://tahoe-lafs.org/trac/tahoe-lafs/ticket/3562
def get_servermap(self):
raise NotImplementedError


@ -23,6 +23,11 @@ from base64 import b32decode, b32encode
from errno import ENOENT, EPERM
from warnings import warn
try:
from typing import Union
except ImportError:
pass
import attr
# On Python 2 this will be the backported package.
@ -273,6 +278,11 @@ def _error_about_old_config_files(basedir, generated_files):
raise e
def ensure_text_and_abspath_expanduser_unicode(basedir):
# type: (Union[bytes, str]) -> str
return abspath_expanduser_unicode(ensure_text(basedir))
@attr.s
class _Config(object):
"""
@ -300,8 +310,8 @@ class _Config(object):
config = attr.ib(validator=attr.validators.instance_of(configparser.ConfigParser))
portnum_fname = attr.ib()
_basedir = attr.ib(
converter=lambda basedir: abspath_expanduser_unicode(ensure_text(basedir)),
)
converter=ensure_text_and_abspath_expanduser_unicode,
) # type: str
config_path = attr.ib(
validator=attr.validators.optional(
attr.validators.instance_of(FilePath),
@ -616,28 +626,20 @@ def _make_tcp_handler():
return default()
def create_connection_handlers(reactor, config, i2p_provider, tor_provider):
def create_default_connection_handlers(config, handlers):
"""
:returns: 2-tuple of default_connection_handlers, foolscap_connection_handlers
:return: A dictionary giving the default connection handlers. The keys
are strings like "tcp" and the values are strings like "tor" or
``None``.
"""
reveal_ip = config.get_config("node", "reveal-IP-address", True, boolean=True)
# We store handlers for everything. None means we were unable to
# create that handler, so hints which want it will be ignored.
handlers = foolscap_connection_handlers = {
"tcp": _make_tcp_handler(),
"tor": tor_provider.get_tor_handler(),
"i2p": i2p_provider.get_i2p_handler(),
}
log.msg(
format="built Foolscap connection handlers for: %(known_handlers)s",
known_handlers=sorted([k for k,v in handlers.items() if v]),
facility="tahoe.node",
umid="PuLh8g",
)
# then we remember the default mappings from tahoe.cfg
default_connection_handlers = {"tor": "tor", "i2p": "i2p"}
# Remember the default mappings from tahoe.cfg
default_connection_handlers = {
name: name
for name
in handlers
}
tcp_handler_name = config.get_config("connections", "tcp", "tcp").lower()
if tcp_handler_name == "disabled":
default_connection_handlers["tcp"] = None
@ -662,10 +664,35 @@ def create_connection_handlers(reactor, config, i2p_provider, tor_provider):
if not reveal_ip:
if default_connection_handlers.get("tcp") == "tcp":
raise PrivacyError("tcp = tcp, must be set to 'tor' or 'disabled'")
return default_connection_handlers, foolscap_connection_handlers
raise PrivacyError(
"Privacy requested with `reveal-IP-address = false` "
"but `tcp = tcp` conflicts with this.",
)
return default_connection_handlers
def create_connection_handlers(config, i2p_provider, tor_provider):
"""
:returns: 2-tuple of default_connection_handlers, foolscap_connection_handlers
"""
# We store handlers for everything. None means we were unable to
# create that handler, so hints which want it will be ignored.
handlers = {
"tcp": _make_tcp_handler(),
"tor": tor_provider.get_client_endpoint(),
"i2p": i2p_provider.get_client_endpoint(),
}
log.msg(
format="built Foolscap connection handlers for: %(known_handlers)s",
known_handlers=sorted([k for k,v in handlers.items() if v]),
facility="tahoe.node",
umid="PuLh8g",
)
return create_default_connection_handlers(
config,
handlers,
), handlers
def create_tub(tub_options, default_connection_handlers, foolscap_connection_handlers,
handler_overrides={}, **kwargs):
@ -705,8 +732,21 @@ def _convert_tub_port(s):
return us
def _tub_portlocation(config):
class PortAssignmentRequired(Exception):
"""
A Tub port number was configured to be 0 where this is not allowed.
"""
def _tub_portlocation(config, get_local_addresses_sync, allocate_tcp_port):
"""
Figure out the network location of the main tub for some configuration.
:param get_local_addresses_sync: A function like
``iputil.get_local_addresses_sync``.
:param allocate_tcp_port: A function like ``iputil.allocate_tcp_port``.
:returns: None or tuple of (port, location) for the main tub based
on the given configuration. May raise ValueError or PrivacyError
if there are problems with the config
@ -746,7 +786,7 @@ def _tub_portlocation(config):
file_tubport = fileutil.read(config.portnum_fname).strip()
tubport = _convert_tub_port(file_tubport)
else:
tubport = "tcp:%d" % iputil.allocate_tcp_port()
tubport = "tcp:%d" % (allocate_tcp_port(),)
fileutil.write_atomically(config.portnum_fname, tubport + "\n",
mode="")
else:
@ -754,7 +794,7 @@ def _tub_portlocation(config):
for port in tubport.split(","):
if port in ("0", "tcp:0"):
raise ValueError("tub.port cannot be 0: you must choose")
raise PortAssignmentRequired()
if cfg_location is None:
cfg_location = "AUTO"
@ -766,7 +806,7 @@ def _tub_portlocation(config):
if "AUTO" in split_location:
if not reveal_ip:
raise PrivacyError("tub.location uses AUTO")
local_addresses = iputil.get_local_addresses_sync()
local_addresses = get_local_addresses_sync()
# tubport must be like "tcp:12345" or "tcp:12345:morestuff"
local_portnum = int(tubport.split(":")[1])
new_locations = []
@ -797,6 +837,33 @@ def _tub_portlocation(config):
return tubport, location
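The new `_tub_portlocation` signature injects `get_local_addresses_sync` and `allocate_tcp_port` rather than calling `iputil` directly, so tests can pass deterministic stubs instead of binding real sockets. A sketch of the idea (hypothetical helper name, simplified logic):

```python
def choose_tub_port(configured_port, allocate_tcp_port):
    # Dependency-injection sketch: the port allocator is a parameter,
    # so a test can supply a stub that returns a fixed number.
    if configured_port is None:
        return "tcp:%d" % (allocate_tcp_port(),)
    if configured_port in ("0", "tcp:0"):
        # Mirrors the PortAssignmentRequired case above.
        raise ValueError("tub.port cannot be 0: you must choose")
    return configured_port

# A test never touches the network:
assert choose_tub_port(None, lambda: 12345) == "tcp:12345"
assert choose_tub_port("tcp:8080", lambda: 12345) == "tcp:8080"
```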
def tub_listen_on(i2p_provider, tor_provider, tub, tubport, location):
"""
Assign a Tub its listener locations.
:param i2p_provider: See ``allmydata.util.i2p_provider.create``.
:param tor_provider: See ``allmydata.util.tor_provider.create``.
"""
for port in tubport.split(","):
if port == "listen:i2p":
# the I2P provider will read its section of tahoe.cfg and
# return either a fully-formed Endpoint, or a descriptor
# that will create one, so we don't have to stuff all the
# options into the tub.port string (which would need a lot
# of escaping)
port_or_endpoint = i2p_provider.get_listener()
elif port == "listen:tor":
port_or_endpoint = tor_provider.get_listener()
else:
port_or_endpoint = port
# Foolscap requires native strings:
if isinstance(port_or_endpoint, (bytes, str)):
port_or_endpoint = ensure_str(port_or_endpoint)
tub.listenOn(port_or_endpoint)
# This last step makes the Tub ready for tub.registerReference()
tub.setLocation(location)
def create_main_tub(config, tub_options,
default_connection_handlers, foolscap_connection_handlers,
i2p_provider, tor_provider,
@ -821,36 +888,34 @@ def create_main_tub(config, tub_options,
:param tor_provider: None, or a _Provider instance if txtorcon +
Tor are installed.
"""
portlocation = _tub_portlocation(config)
portlocation = _tub_portlocation(
config,
iputil.get_local_addresses_sync,
iputil.allocate_tcp_port,
)
certfile = config.get_private_path("node.pem") # FIXME? "node.pem" was the CERTFILE option/thing
tub = create_tub(tub_options, default_connection_handlers, foolscap_connection_handlers,
handler_overrides=handler_overrides, certFile=certfile)
# FIXME? "node.pem" was the CERTFILE option/thing
certfile = config.get_private_path("node.pem")
if portlocation:
tubport, location = portlocation
for port in tubport.split(","):
if port == "listen:i2p":
# the I2P provider will read its section of tahoe.cfg and
# return either a fully-formed Endpoint, or a descriptor
# that will create one, so we don't have to stuff all the
# options into the tub.port string (which would need a lot
# of escaping)
port_or_endpoint = i2p_provider.get_listener()
elif port == "listen:tor":
port_or_endpoint = tor_provider.get_listener()
else:
port_or_endpoint = port
# Foolscap requires native strings:
if isinstance(port_or_endpoint, (bytes, str)):
port_or_endpoint = ensure_str(port_or_endpoint)
tub.listenOn(port_or_endpoint)
tub.setLocation(location)
log.msg("Tub location set to %s" % (location,))
# the Tub is now ready for tub.registerReference()
else:
tub = create_tub(
tub_options,
default_connection_handlers,
foolscap_connection_handlers,
handler_overrides=handler_overrides,
certFile=certfile,
)
if portlocation is None:
log.msg("Tub is not listening")
else:
tubport, location = portlocation
tub_listen_on(
i2p_provider,
tor_provider,
tub,
tubport,
location,
)
log.msg("Tub location set to %s" % (location,))
return tub
@ -872,7 +937,6 @@ class Node(service.MultiService):
"""
NODETYPE = "unknown NODETYPE"
CERTFILE = "node.pem"
GENERATED_FILES = []
def __init__(self, config, main_tub, control_tub, i2p_provider, tor_provider):
"""


@ -1,3 +1,15 @@
"""
Ported to Python 3.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
from future.utils import PY2
if PY2:
from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min # noqa: F401
import weakref
from zope.interface import implementer
from allmydata.util.assertutil import precondition
@ -126,7 +138,7 @@ class NodeMaker(object):
def create_new_mutable_directory(self, initial_children={}, version=None):
# initial_children must have metadata (i.e. {} instead of None)
for (name, (node, metadata)) in initial_children.iteritems():
for (name, (node, metadata)) in initial_children.items():
precondition(isinstance(metadata, dict),
"create_new_mutable_directory requires metadata to be a dict, not None", metadata)
node.raise_error()


@ -1,5 +1,10 @@
from __future__ import print_function
try:
from allmydata.scripts.types_ import SubCommands
except ImportError:
pass
from twisted.python import usage
from allmydata.scripts.common import BaseOptions
@ -79,8 +84,8 @@ def do_admin(options):
subCommands = [
["admin", None, AdminCommand, "admin subcommands: use 'tahoe admin' for a list"],
]
("admin", None, AdminCommand, "admin subcommands: use 'tahoe admin' for a list"),
] # type: SubCommands
dispatch = {
"admin": do_admin,


@ -1,6 +1,12 @@
from __future__ import print_function
import os.path, re, fnmatch
try:
from allmydata.scripts.types_ import SubCommands, Parameters
except ImportError:
pass
from twisted.python import usage
from allmydata.scripts.common import get_aliases, get_default_nodedir, \
DEFAULT_ALIAS, BaseOptions
@ -19,7 +25,7 @@ class FileStoreOptions(BaseOptions):
"This overrides the URL found in the --node-directory ."],
["dir-cap", None, None,
"Specify which dirnode URI should be used as the 'tahoe' alias."]
]
] # type: Parameters
def postOptions(self):
self["quiet"] = self.parent["quiet"]
@ -455,25 +461,25 @@ class DeepCheckOptions(FileStoreOptions):
Optionally repair any problems found."""
subCommands = [
["mkdir", None, MakeDirectoryOptions, "Create a new directory."],
["add-alias", None, AddAliasOptions, "Add a new alias cap."],
["create-alias", None, CreateAliasOptions, "Create a new alias cap."],
["list-aliases", None, ListAliasesOptions, "List all alias caps."],
["ls", None, ListOptions, "List a directory."],
["get", None, GetOptions, "Retrieve a file from the grid."],
["put", None, PutOptions, "Upload a file into the grid."],
["cp", None, CpOptions, "Copy one or more files or directories."],
["unlink", None, UnlinkOptions, "Unlink a file or directory on the grid."],
["mv", None, MvOptions, "Move a file within the grid."],
["ln", None, LnOptions, "Make an additional link to an existing file or directory."],
["backup", None, BackupOptions, "Make target dir look like local dir."],
["webopen", None, WebopenOptions, "Open a web browser to a grid file or directory."],
["manifest", None, ManifestOptions, "List all files/directories in a subtree."],
["stats", None, StatsOptions, "Print statistics about all files/directories in a subtree."],
["check", None, CheckOptions, "Check a single file or directory."],
["deep-check", None, DeepCheckOptions, "Check all files/directories reachable from a starting point."],
["status", None, TahoeStatusCommand, "Various status information."],
]
("mkdir", None, MakeDirectoryOptions, "Create a new directory."),
("add-alias", None, AddAliasOptions, "Add a new alias cap."),
("create-alias", None, CreateAliasOptions, "Create a new alias cap."),
("list-aliases", None, ListAliasesOptions, "List all alias caps."),
("ls", None, ListOptions, "List a directory."),
("get", None, GetOptions, "Retrieve a file from the grid."),
("put", None, PutOptions, "Upload a file into the grid."),
("cp", None, CpOptions, "Copy one or more files or directories."),
("unlink", None, UnlinkOptions, "Unlink a file or directory on the grid."),
("mv", None, MvOptions, "Move a file within the grid."),
("ln", None, LnOptions, "Make an additional link to an existing file or directory."),
("backup", None, BackupOptions, "Make target dir look like local dir."),
("webopen", None, WebopenOptions, "Open a web browser to a grid file or directory."),
("manifest", None, ManifestOptions, "List all files/directories in a subtree."),
("stats", None, StatsOptions, "Print statistics about all files/directories in a subtree."),
("check", None, CheckOptions, "Check a single file or directory."),
("deep-check", None, DeepCheckOptions, "Check all files/directories reachable from a starting point."),
("status", None, TahoeStatusCommand, "Various status information."),
] # type: SubCommands
def mkdir(options):
from allmydata.scripts import tahoe_mkdir


@ -4,6 +4,12 @@ import os, sys, urllib, textwrap
import codecs
from os.path import join
try:
from typing import Optional
from .types_ import Parameters
except ImportError:
pass
from yaml import (
safe_dump,
)
@ -37,12 +43,12 @@ class BaseOptions(usage.Options):
super(BaseOptions, self).__init__()
self.command_name = os.path.basename(sys.argv[0])
# Only allow "tahoe --version", not e.g. "tahoe start --version"
# Only allow "tahoe --version", not e.g. "tahoe <cmd> --version"
def opt_version(self):
raise usage.UsageError("--version not allowed on subcommands")
description = None
description_unwrapped = None
description = None # type: Optional[str]
description_unwrapped = None # type: Optional[str]
def __str__(self):
width = int(os.environ.get('COLUMNS', '80'))
@ -65,7 +71,7 @@ class BasedirOptions(BaseOptions):
optParameters = [
["basedir", "C", None, "Specify which Tahoe base directory should be used. [default: %s]"
% quote_local_unicode_path(_default_nodedir)],
]
] # type: Parameters
def parseArgs(self, basedir=None):
# This finds the node-directory option correctly even if we are in a subcommand.
@ -102,7 +108,7 @@ class NoDefaultBasedirOptions(BasedirOptions):
optParameters = [
["basedir", "C", None, "Specify which Tahoe base directory should be used."],
]
] # type: Parameters
# This is overridden in order to ensure we get a "Wrong number of arguments."
# error when more than one argument is given.


@ -3,6 +3,11 @@ from __future__ import print_function
import os
import json
try:
from allmydata.scripts.types_ import SubCommands
except ImportError:
pass
from twisted.internet import reactor, defer
from twisted.python.usage import UsageError
from twisted.python.filepath import (
@ -492,10 +497,10 @@ def create_introducer(config):
subCommands = [
["create-node", None, CreateNodeOptions, "Create a node that acts as a client, server or both."],
["create-client", None, CreateClientOptions, "Create a client node (with storage initially disabled)."],
["create-introducer", None, CreateIntroducerOptions, "Create an introducer node."],
]
("create-node", None, CreateNodeOptions, "Create a node that acts as a client, server or both."),
("create-client", None, CreateClientOptions, "Create a client node (with storage initially disabled)."),
("create-introducer", None, CreateIntroducerOptions, "Create an introducer node."),
] # type: SubCommands
dispatch = {
"create-node": create_node,


@ -1,5 +1,12 @@
from __future__ import print_function
try:
from allmydata.scripts.types_ import SubCommands
except ImportError:
pass
from future.utils import bchr
# do not import any allmydata modules at this level. Do that from inside
# individual functions instead.
import struct, time, os, sys
@ -905,7 +912,7 @@ def corrupt_share(options):
f = open(fn, "rb+")
f.seek(offset)
d = f.read(1)
d = chr(ord(d) ^ 0x01)
d = bchr(ord(d) ^ 0x01)
f.seek(offset)
f.write(d)
f.close()
@ -920,7 +927,7 @@ def corrupt_share(options):
f.seek(m.DATA_OFFSET)
data = f.read(2000)
# make sure this slot contains an SDMF share
assert data[0] == b"\x00", "non-SDMF mutable shares not supported"
assert data[0:1] == b"\x00", "non-SDMF mutable shares not supported"
f.close()
(version, ig_seqnum, ig_roothash, ig_IV, ig_k, ig_N, ig_segsize,
@@ -1051,8 +1058,8 @@ def do_debug(options):
subCommands = [
["debug", None, DebugCommand, "debug subcommands: use 'tahoe debug' for a list."],
]
("debug", None, DebugCommand, "debug subcommands: use 'tahoe debug' for a list."),
] # type: SubCommands
dispatch = {
"debug": do_debug,

View File

@@ -1,261 +0,0 @@
from __future__ import print_function
import os, sys
from allmydata.scripts.common import BasedirOptions
from twisted.scripts import twistd
from twisted.python import usage
from twisted.python.reflect import namedAny
from twisted.internet.defer import maybeDeferred, fail
from twisted.application.service import Service
from allmydata.scripts.default_nodedir import _default_nodedir
from allmydata.util import fileutil
from allmydata.util.encodingutil import listdir_unicode, quote_local_unicode_path
from allmydata.util.configutil import UnknownConfigError
from allmydata.util.deferredutil import HookMixin
def get_pidfile(basedir):
"""
Returns the path to the PID file.
:param basedir: the node's base directory
:returns: the path to the PID file
"""
return os.path.join(basedir, u"twistd.pid")
def get_pid_from_pidfile(pidfile):
"""
Tries to read and return the PID stored in the node's PID file
(twistd.pid).
:param pidfile: try to read this PID file
:returns: A numeric PID on success, ``None`` if PID file absent or
inaccessible, ``-1`` if PID file invalid.
"""
try:
with open(pidfile, "r") as f:
pid = f.read()
except EnvironmentError:
return None
try:
pid = int(pid)
except ValueError:
return -1
return pid
def identify_node_type(basedir):
"""
:return unicode: None or one of: 'client', 'introducer', or
'key-generator'
"""
tac = u''
try:
for fn in listdir_unicode(basedir):
if fn.endswith(u".tac"):
tac = fn
break
except OSError:
return None
for t in (u"client", u"introducer", u"key-generator"):
if t in tac:
return t
return None
class RunOptions(BasedirOptions):
optParameters = [
("basedir", "C", None,
"Specify which Tahoe base directory should be used."
" This has the same effect as the global --node-directory option."
" [default: %s]" % quote_local_unicode_path(_default_nodedir)),
]
def parseArgs(self, basedir=None, *twistd_args):
# This can't handle e.g. 'tahoe start --nodaemon', since '--nodaemon'
# looks like an option to the tahoe subcommand, not to twistd. So you
# can either use 'tahoe start' or 'tahoe start NODEDIR
# --TWISTD-OPTIONS'. Note that 'tahoe --node-directory=NODEDIR start
# --TWISTD-OPTIONS' also isn't allowed, unfortunately.
BasedirOptions.parseArgs(self, basedir)
self.twistd_args = twistd_args
def getSynopsis(self):
return ("Usage: %s [global-options] %s [options]"
" [NODEDIR [twistd-options]]"
% (self.command_name, self.subcommand_name))
def getUsage(self, width=None):
t = BasedirOptions.getUsage(self, width) + "\n"
twistd_options = str(MyTwistdConfig()).partition("\n")[2].partition("\n\n")[0]
t += twistd_options.replace("Options:", "twistd-options:", 1)
t += """
Note that if any twistd-options are used, NODEDIR must be specified explicitly
(not by default or using -C/--basedir or -d/--node-directory), and followed by
the twistd-options.
"""
return t
class MyTwistdConfig(twistd.ServerOptions):
subCommands = [("DaemonizeTahoeNode", None, usage.Options, "node")]
stderr = sys.stderr
class DaemonizeTheRealService(Service, HookMixin):
"""
this HookMixin should really be a helper; our hooks:
- 'running': triggered when startup has completed; it triggers
with None if successful or a Failure otherwise.
"""
stderr = sys.stderr
def __init__(self, nodetype, basedir, options):
super(DaemonizeTheRealService, self).__init__()
self.nodetype = nodetype
self.basedir = basedir
# setup for HookMixin
self._hooks = {
"running": None,
}
self.stderr = options.parent.stderr
def startService(self):
def key_generator_removed():
return fail(ValueError("key-generator support removed, see #2783"))
def start():
node_to_instance = {
u"client": lambda: maybeDeferred(namedAny("allmydata.client.create_client"), self.basedir),
u"introducer": lambda: maybeDeferred(namedAny("allmydata.introducer.server.create_introducer"), self.basedir),
u"key-generator": key_generator_removed,
}
try:
service_factory = node_to_instance[self.nodetype]
except KeyError:
raise ValueError("unknown nodetype %s" % self.nodetype)
def handle_config_error(fail):
if fail.check(UnknownConfigError):
self.stderr.write("\nConfiguration error:\n{}\n\n".format(fail.value))
else:
self.stderr.write("\nUnknown error\n")
fail.printTraceback(self.stderr)
reactor.stop()
d = service_factory()
def created(srv):
srv.setServiceParent(self.parent)
d.addCallback(created)
d.addErrback(handle_config_error)
d.addBoth(self._call_hook, 'running')
return d
from twisted.internet import reactor
reactor.callWhenRunning(start)
class DaemonizeTahoeNodePlugin(object):
tapname = "tahoenode"
def __init__(self, nodetype, basedir):
self.nodetype = nodetype
self.basedir = basedir
def makeService(self, so):
return DaemonizeTheRealService(self.nodetype, self.basedir, so)
def run(config):
"""
Runs a Tahoe-LAFS node in the foreground.
Sets up the IService instance corresponding to the type of node
that's starting and uses Twisted's twistd runner to disconnect our
process from the terminal.
"""
out = config.stdout
err = config.stderr
basedir = config['basedir']
quoted_basedir = quote_local_unicode_path(basedir)
print("'tahoe {}' in {}".format(config.subcommand_name, quoted_basedir), file=out)
if not os.path.isdir(basedir):
print("%s does not look like a directory at all" % quoted_basedir, file=err)
return 1
nodetype = identify_node_type(basedir)
if not nodetype:
print("%s is not a recognizable node directory" % quoted_basedir, file=err)
return 1
# Now prepare to turn into a twistd process. This os.chdir is the point
# of no return.
os.chdir(basedir)
twistd_args = []
if (nodetype in (u"client", u"introducer")
and "--nodaemon" not in config.twistd_args
and "--syslog" not in config.twistd_args
and "--logfile" not in config.twistd_args):
fileutil.make_dirs(os.path.join(basedir, u"logs"))
twistd_args.extend(["--logfile", os.path.join("logs", "twistd.log")])
twistd_args.extend(config.twistd_args)
twistd_args.append("DaemonizeTahoeNode") # point at our DaemonizeTahoeNodePlugin
twistd_config = MyTwistdConfig()
twistd_config.stdout = out
twistd_config.stderr = err
try:
twistd_config.parseOptions(twistd_args)
except usage.error as ue:
# these arguments were unsuitable for 'twistd'
print(config, file=err)
print("tahoe %s: usage error from twistd: %s\n" % (config.subcommand_name, ue), file=err)
return 1
twistd_config.loadedPlugins = {"DaemonizeTahoeNode": DaemonizeTahoeNodePlugin(nodetype, basedir)}
# handle invalid PID file (twistd might not start otherwise)
pidfile = get_pidfile(basedir)
if get_pid_from_pidfile(pidfile) == -1:
print("found invalid PID file in %s - deleting it" % basedir, file=err)
os.remove(pidfile)
# On Unix-like platforms:
# Unless --nodaemon was provided, the twistd.runApp() below spawns off a
# child process, and the parent calls os._exit(0), so there's no way for
# us to get control afterwards, even with 'except SystemExit'. If
# application setup fails (e.g. ImportError), runApp() will raise an
# exception.
#
# So if we wanted to do anything with the running child, we'd have two
# options:
#
# * fork first, and have our child wait for the runApp() child to get
# running. (note: just fork(). This is easier than fork+exec, since we
# don't have to get PATH and PYTHONPATH set up, since we're not
# starting a *different* process, just cloning a new instance of the
# current process)
# * or have the user run a separate command some time after this one
# exits.
#
# For Tahoe, we don't need to do anything with the child, so we can just
# let it exit.
#
# On Windows:
# twistd does not fork; it just runs in the current process whether or not
# --nodaemon is specified. (As on Unix, --nodaemon does have the side effect
# of causing us to log to stdout/stderr.)
if "--nodaemon" in twistd_args or sys.platform == "win32":
verb = "running"
else:
verb = "starting"
print("%s node in %s" % (verb, quoted_basedir), file=out)
twistd.runApp(twistd_config)
# we should only reach here if --nodaemon or equivalent was used
return 0

View File

@@ -4,13 +4,17 @@ import os, sys
from six.moves import StringIO
import six
try:
from allmydata.scripts.types_ import SubCommands
except ImportError:
pass
from twisted.python import usage
from twisted.internet import defer, task, threads
from allmydata.scripts.common import get_default_nodedir
from allmydata.scripts import debug, create_node, cli, \
admin, tahoe_daemonize, tahoe_start, \
tahoe_stop, tahoe_restart, tahoe_run, tahoe_invite
admin, tahoe_run, tahoe_invite
from allmydata.util.encodingutil import quote_output, quote_local_unicode_path, get_io_encoding
from allmydata.util.eliotutil import (
opt_eliot_destination,
@@ -37,20 +41,12 @@ if _default_nodedir:
# XXX all this 'dispatch' stuff needs to be unified + fixed up
_control_node_dispatch = {
"daemonize": tahoe_daemonize.daemonize,
"start": tahoe_start.start,
"run": tahoe_run.run,
"stop": tahoe_stop.stop,
"restart": tahoe_restart.restart,
}
process_control_commands = [
["run", None, tahoe_run.RunOptions, "run a node without daemonizing"],
["daemonize", None, tahoe_daemonize.DaemonizeOptions, "(deprecated) run a node in the background"],
["start", None, tahoe_start.StartOptions, "(deprecated) start a node in the background and confirm it started"],
["stop", None, tahoe_stop.StopOptions, "(deprecated) stop a node"],
["restart", None, tahoe_restart.RestartOptions, "(deprecated) restart a node"],
]
("run", None, tahoe_run.RunOptions, "run a node without daemonizing"),
] # type: SubCommands
class Options(usage.Options):
@@ -107,7 +103,7 @@ class Options(usage.Options):
create_dispatch = {}
for module in (create_node,):
create_dispatch.update(module.dispatch)
create_dispatch.update(module.dispatch) # type: ignore
def parse_options(argv, config=None):
if not config:

View File

@@ -1,16 +0,0 @@
from .run_common import (
RunOptions as _RunOptions,
run,
)
__all__ = [
"DaemonizeOptions",
"daemonize",
]
class DaemonizeOptions(_RunOptions):
subcommand_name = "daemonize"
def daemonize(config):
print("'tahoe daemonize' is deprecated; see 'tahoe run'")
return run(config)

View File

@@ -2,6 +2,11 @@ from __future__ import print_function
import json
try:
from allmydata.scripts.types_ import SubCommands
except ImportError:
pass
from twisted.python import usage
from twisted.internet import defer, reactor
@@ -103,7 +108,7 @@ def invite(options):
subCommands = [
("invite", None, InviteOptions,
"Invite a new node to this grid"),
]
] # type: SubCommands
dispatch = {
"invite": invite,

View File

@@ -1,21 +0,0 @@
from __future__ import print_function
from .tahoe_start import StartOptions, start
from .tahoe_stop import stop, COULD_NOT_STOP
class RestartOptions(StartOptions):
subcommand_name = "restart"
def restart(config):
print("'tahoe restart' is deprecated; see 'tahoe run'")
stderr = config.stderr
rc = stop(config)
if rc == COULD_NOT_STOP:
print("ignoring couldn't-stop", file=stderr)
rc = 0
if rc:
print("not restarting", file=stderr)
return rc
return start(config)

View File

@@ -1,15 +1,233 @@
from .run_common import (
RunOptions as _RunOptions,
run,
)
from __future__ import print_function
__all__ = [
"RunOptions",
"run",
]
class RunOptions(_RunOptions):
import os, sys
from allmydata.scripts.common import BasedirOptions
from twisted.scripts import twistd
from twisted.python import usage
from twisted.python.reflect import namedAny
from twisted.internet.defer import maybeDeferred
from twisted.application.service import Service
from allmydata.scripts.default_nodedir import _default_nodedir
from allmydata.util.encodingutil import listdir_unicode, quote_local_unicode_path
from allmydata.util.configutil import UnknownConfigError
from allmydata.util.deferredutil import HookMixin
from allmydata.node import (
PortAssignmentRequired,
PrivacyError,
)
def get_pidfile(basedir):
"""
Returns the path to the PID file.
:param basedir: the node's base directory
:returns: the path to the PID file
"""
return os.path.join(basedir, u"twistd.pid")
def get_pid_from_pidfile(pidfile):
"""
Tries to read and return the PID stored in the node's PID file
(twistd.pid).
:param pidfile: try to read this PID file
:returns: A numeric PID on success, ``None`` if PID file absent or
inaccessible, ``-1`` if PID file invalid.
"""
try:
with open(pidfile, "r") as f:
pid = f.read()
except EnvironmentError:
return None
try:
pid = int(pid)
except ValueError:
return -1
return pid
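
The contract documented above (``None`` for a missing or unreadable file, ``-1`` for invalid contents, an int otherwise) can be exercised in isolation. The helper below mirrors `get_pid_from_pidfile` rather than importing it from allmydata, so the sketch is self-contained:

```python
import os
import tempfile

def read_pid(pidfile):
    # Mirrors get_pid_from_pidfile: None if unreadable, -1 if invalid.
    try:
        with open(pidfile, "r") as f:
            pid = f.read()
    except EnvironmentError:
        return None
    try:
        return int(pid)
    except ValueError:
        return -1

tmpdir = tempfile.mkdtemp()
pidfile = os.path.join(tmpdir, "twistd.pid")

assert read_pid(pidfile) is None          # file absent

with open(pidfile, "w") as f:
    f.write("not-a-pid")
assert read_pid(pidfile) == -1            # invalid contents

with open(pidfile, "w") as f:
    f.write("12345\n")
assert read_pid(pidfile) == 12345         # valid PID (int() tolerates the newline)
```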
def identify_node_type(basedir):
"""
:return unicode: None or one of: 'client' or 'introducer'.
"""
tac = u''
try:
for fn in listdir_unicode(basedir):
if fn.endswith(u".tac"):
tac = fn
break
except OSError:
return None
for t in (u"client", u"introducer"):
if t in tac:
return t
return None
class RunOptions(BasedirOptions):
subcommand_name = "run"
def postOptions(self):
self.twistd_args += ("--nodaemon",)
optParameters = [
("basedir", "C", None,
"Specify which Tahoe base directory should be used."
" This has the same effect as the global --node-directory option."
" [default: %s]" % quote_local_unicode_path(_default_nodedir)),
]
def parseArgs(self, basedir=None, *twistd_args):
# This can't handle e.g. 'tahoe run --reactor=foo', since
# '--reactor=foo' looks like an option to the tahoe subcommand, not to
# twistd. So you can either use 'tahoe run' or 'tahoe run NODEDIR
# --TWISTD-OPTIONS'. Note that 'tahoe --node-directory=NODEDIR run
# --TWISTD-OPTIONS' also isn't allowed, unfortunately.
BasedirOptions.parseArgs(self, basedir)
self.twistd_args = twistd_args
def getSynopsis(self):
return ("Usage: %s [global-options] %s [options]"
" [NODEDIR [twistd-options]]"
% (self.command_name, self.subcommand_name))
def getUsage(self, width=None):
t = BasedirOptions.getUsage(self, width) + "\n"
twistd_options = str(MyTwistdConfig()).partition("\n")[2].partition("\n\n")[0]
t += twistd_options.replace("Options:", "twistd-options:", 1)
t += """
Note that if any twistd-options are used, NODEDIR must be specified explicitly
(not by default or using -C/--basedir or -d/--node-directory), and followed by
the twistd-options.
"""
return t
class MyTwistdConfig(twistd.ServerOptions):
subCommands = [("DaemonizeTahoeNode", None, usage.Options, "node")]
stderr = sys.stderr
class DaemonizeTheRealService(Service, HookMixin):
"""
this HookMixin should really be a helper; our hooks:
- 'running': triggered when startup has completed; it triggers
with None if successful or a Failure otherwise.
"""
stderr = sys.stderr
def __init__(self, nodetype, basedir, options):
super(DaemonizeTheRealService, self).__init__()
self.nodetype = nodetype
self.basedir = basedir
# setup for HookMixin
self._hooks = {
"running": None,
}
self.stderr = options.parent.stderr
def startService(self):
def start():
node_to_instance = {
u"client": lambda: maybeDeferred(namedAny("allmydata.client.create_client"), self.basedir),
u"introducer": lambda: maybeDeferred(namedAny("allmydata.introducer.server.create_introducer"), self.basedir),
}
try:
service_factory = node_to_instance[self.nodetype]
except KeyError:
raise ValueError("unknown nodetype %s" % self.nodetype)
def handle_config_error(reason):
if reason.check(UnknownConfigError):
self.stderr.write("\nConfiguration error:\n{}\n\n".format(reason.value))
elif reason.check(PortAssignmentRequired):
self.stderr.write("\ntub.port cannot be 0: you must choose.\n\n")
elif reason.check(PrivacyError):
self.stderr.write("\n{}\n\n".format(reason.value))
else:
self.stderr.write("\nUnknown error\n")
reason.printTraceback(self.stderr)
reactor.stop()
d = service_factory()
def created(srv):
srv.setServiceParent(self.parent)
d.addCallback(created)
d.addErrback(handle_config_error)
d.addBoth(self._call_hook, 'running')
return d
from twisted.internet import reactor
reactor.callWhenRunning(start)
class DaemonizeTahoeNodePlugin(object):
tapname = "tahoenode"
def __init__(self, nodetype, basedir):
self.nodetype = nodetype
self.basedir = basedir
def makeService(self, so):
return DaemonizeTheRealService(self.nodetype, self.basedir, so)
def run(config):
"""
Runs a Tahoe-LAFS node in the foreground.
Sets up the IService instance corresponding to the type of node
that's starting and uses Twisted's twistd runner to disconnect our
process from the terminal.
"""
out = config.stdout
err = config.stderr
basedir = config['basedir']
quoted_basedir = quote_local_unicode_path(basedir)
print("'tahoe {}' in {}".format(config.subcommand_name, quoted_basedir), file=out)
if not os.path.isdir(basedir):
print("%s does not look like a directory at all" % quoted_basedir, file=err)
return 1
nodetype = identify_node_type(basedir)
if not nodetype:
print("%s is not a recognizable node directory" % quoted_basedir, file=err)
return 1
# Now prepare to turn into a twistd process. This os.chdir is the point
# of no return.
os.chdir(basedir)
twistd_args = ["--nodaemon"]
twistd_args.extend(config.twistd_args)
twistd_args.append("DaemonizeTahoeNode") # point at our DaemonizeTahoeNodePlugin
twistd_config = MyTwistdConfig()
twistd_config.stdout = out
twistd_config.stderr = err
try:
twistd_config.parseOptions(twistd_args)
except usage.error as ue:
# these arguments were unsuitable for 'twistd'
print(config, file=err)
print("tahoe %s: usage error from twistd: %s\n" % (config.subcommand_name, ue), file=err)
return 1
twistd_config.loadedPlugins = {"DaemonizeTahoeNode": DaemonizeTahoeNodePlugin(nodetype, basedir)}
# handle invalid PID file (twistd might not start otherwise)
pidfile = get_pidfile(basedir)
if get_pid_from_pidfile(pidfile) == -1:
print("found invalid PID file in %s - deleting it" % basedir, file=err)
os.remove(pidfile)
# We always pass --nodaemon so twistd.runApp does not daemonize.
print("running node in %s" % (quoted_basedir,), file=out)
twistd.runApp(twistd_config)
return 0

View File

@@ -1,152 +0,0 @@
from __future__ import print_function
import os
import io
import sys
import time
import subprocess
from os.path import join, exists
from allmydata.scripts.common import BasedirOptions
from allmydata.scripts.default_nodedir import _default_nodedir
from allmydata.util.encodingutil import quote_local_unicode_path
from .run_common import MyTwistdConfig, identify_node_type
class StartOptions(BasedirOptions):
subcommand_name = "start"
optParameters = [
("basedir", "C", None,
"Specify which Tahoe base directory should be used."
" This has the same effect as the global --node-directory option."
" [default: %s]" % quote_local_unicode_path(_default_nodedir)),
]
def parseArgs(self, basedir=None, *twistd_args):
# This can't handle e.g. 'tahoe start --nodaemon', since '--nodaemon'
# looks like an option to the tahoe subcommand, not to twistd. So you
# can either use 'tahoe start' or 'tahoe start NODEDIR
# --TWISTD-OPTIONS'. Note that 'tahoe --node-directory=NODEDIR start
# --TWISTD-OPTIONS' also isn't allowed, unfortunately.
BasedirOptions.parseArgs(self, basedir)
self.twistd_args = twistd_args
def getSynopsis(self):
return ("Usage: %s [global-options] %s [options]"
" [NODEDIR [twistd-options]]"
% (self.command_name, self.subcommand_name))
def getUsage(self, width=None):
t = BasedirOptions.getUsage(self, width) + "\n"
twistd_options = str(MyTwistdConfig()).partition("\n")[2].partition("\n\n")[0]
t += twistd_options.replace("Options:", "twistd-options:", 1)
t += """
Note that if any twistd-options are used, NODEDIR must be specified explicitly
(not by default or using -C/--basedir or -d/--node-directory), and followed by
the twistd-options.
"""
return t
def start(config):
"""
Start a tahoe node (daemonize it and confirm startup)
We run 'tahoe daemonize' with all the options given to 'tahoe
start' and then watch the log files for the correct text to appear
(e.g. "introducer started"). If that doesn't happen within a few
seconds, an error is printed along with all collected logs.
"""
print("'tahoe start' is deprecated; see 'tahoe run'")
out = config.stdout
err = config.stderr
basedir = config['basedir']
quoted_basedir = quote_local_unicode_path(basedir)
print("STARTING", quoted_basedir, file=out)
if not os.path.isdir(basedir):
print("%s does not look like a directory at all" % quoted_basedir, file=err)
return 1
nodetype = identify_node_type(basedir)
if not nodetype:
print("%s is not a recognizable node directory" % quoted_basedir, file=err)
return 1
# "tahoe start" attempts to monitor the logs for successful
# startup -- but we can't always do that.
can_monitor_logs = False
if (nodetype in (u"client", u"introducer")
and "--nodaemon" not in config.twistd_args
and "--syslog" not in config.twistd_args
and "--logfile" not in config.twistd_args):
can_monitor_logs = True
if "--help" in config.twistd_args:
return 0
if not can_monitor_logs:
print("Custom logging options; can't monitor logs for proper startup messages", file=out)
return 1
# before we spawn tahoe, we check if "the log file" exists or not,
# and if so remember how big it is -- essentially, we're doing
# "tail -f" to see what "this" incarnation of "tahoe daemonize"
# spews forth.
starting_offset = 0
log_fname = join(basedir, 'logs', 'twistd.log')
if exists(log_fname):
with open(log_fname, 'r') as f:
f.seek(0, 2)
starting_offset = f.tell()
# spawn tahoe. Note that since this daemonizes, it should return
# "pretty fast" and with a zero return-code, or else something
# Very Bad has happened.
try:
args = [sys.executable] if not getattr(sys, 'frozen', False) else []
for i, arg in enumerate(sys.argv):
if arg in ['start', 'restart']:
args.append('daemonize')
else:
args.append(arg)
subprocess.check_call(args)
except subprocess.CalledProcessError as e:
return e.returncode
# now, we have to determine if tahoe has actually started up
# successfully or not. so, we start sucking up log files and
# looking for "the magic string", which depends on the node type.
magic_string = u'{} running'.format(nodetype)
with io.open(log_fname, 'r') as f:
f.seek(starting_offset)
collected = u''
overall_start = time.time()
while time.time() - overall_start < 60:
this_start = time.time()
while time.time() - this_start < 5:
collected += f.read()
if magic_string in collected:
if not config.parent['quiet']:
print("Node has started successfully", file=out)
return 0
if 'Traceback ' in collected:
print("Error starting node; see '{}' for more:\n\n{}".format(
log_fname,
collected,
), file=err)
return 1
time.sleep(0.1)
print("Still waiting up to {}s for node startup".format(
60 - int(time.time() - overall_start)
), file=out)
print("Something has gone wrong starting the node.", file=out)
print("Logs are available in '{}'".format(log_fname), file=out)
print("Collected for this run:", file=out)
print(collected, file=out)
return 1

View File

@@ -1,85 +0,0 @@
from __future__ import print_function
import os
import time
import signal
from allmydata.scripts.common import BasedirOptions
from allmydata.util.encodingutil import quote_local_unicode_path
from .run_common import get_pidfile, get_pid_from_pidfile
COULD_NOT_STOP = 2
class StopOptions(BasedirOptions):
def parseArgs(self, basedir=None):
BasedirOptions.parseArgs(self, basedir)
def getSynopsis(self):
return ("Usage: %s [global-options] stop [options] [NODEDIR]"
% (self.command_name,))
def stop(config):
print("'tahoe stop' is deprecated; see 'tahoe run'")
out = config.stdout
err = config.stderr
basedir = config['basedir']
quoted_basedir = quote_local_unicode_path(basedir)
print("STOPPING", quoted_basedir, file=out)
pidfile = get_pidfile(basedir)
pid = get_pid_from_pidfile(pidfile)
if pid is None:
print("%s does not look like a running node directory (no twistd.pid)" % quoted_basedir, file=err)
# we define rc=2 to mean "nothing is running, but it wasn't me who
# stopped it"
return COULD_NOT_STOP
elif pid == -1:
print("%s contains an invalid PID file" % basedir, file=err)
# we define rc=2 to mean "nothing is running, but it wasn't me who
# stopped it"
return COULD_NOT_STOP
# kill it hard (SIGKILL), delete the twistd.pid file, then wait for the
# process itself to go away. If it hasn't gone away after 20 seconds, warn
# the user but keep waiting until they give up.
try:
os.kill(pid, signal.SIGKILL)
except OSError as oserr:
if oserr.errno == 3:
print(oserr.strerror)
# the process didn't exist, so wipe the pid file
os.remove(pidfile)
return COULD_NOT_STOP
else:
raise
try:
os.remove(pidfile)
except EnvironmentError:
pass
start = time.time()
time.sleep(0.1)
wait = 40
first_time = True
while True:
# poll once per second until we see the process is no longer running
try:
os.kill(pid, 0)
except OSError:
print("process %d is dead" % pid, file=out)
return
wait -= 1
if wait < 0:
if first_time:
print("It looks like pid %d is still running "
"after %d seconds" % (pid,
(time.time() - start)), file=err)
print("I will keep watching it until you interrupt me.", file=err)
wait = 10
first_time = False
else:
print("pid %d still running after %d seconds" % \
(pid, (time.time() - start)), file=err)
wait = 10
time.sleep(1)
# control never reaches here: no timeout

View File

@@ -0,0 +1,12 @@
from typing import List, Tuple, Type, Sequence, Any
from allmydata.scripts.common import BaseOptions
# Historically, subcommands were implemented as lists, but due to a
# [design constraint in mypy](https://stackoverflow.com/a/52559625/70170),
# a Tuple is required.
SubCommand = Tuple[str, None, Type[BaseOptions], str]
SubCommands = List[SubCommand]
Parameters = List[Sequence[Any]]
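
The Tuple-over-list requirement that types_.py mentions can be seen in miniature below. `BaseOptions` here is a local stand-in for `allmydata.scripts.common.BaseOptions`, not the real class; the point is that `Tuple[...]` pins a type to each position, whereas a 4-element list literal is inferred with a single homogeneous element type and cannot match a per-position annotation:

```python
from typing import List, Tuple, Type

class BaseOptions(object):
    """Stand-in for allmydata.scripts.common.BaseOptions (assumption)."""

class RunOptions(BaseOptions):
    """Stand-in options class for illustration."""

# Each position gets its own type: name, short flag, options class, doc string.
SubCommand = Tuple[str, None, Type[BaseOptions], str]
SubCommands = List[SubCommand]

# Tuples type-check positionally under mypy; writing these entries as lists
# (the historical style) would not satisfy the SubCommands annotation.
subCommands = [
    ("run", None, RunOptions, "run a node without daemonizing"),
]  # type: SubCommands

assert subCommands[0][0] == "run"
assert subCommands[0][2] is RunOptions
```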

View File

@@ -1,11 +1,16 @@
"""
Ported to Python 3.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import time
# Python 2 compatibility
from future.utils import PY2
if PY2:
from future.builtins import str # noqa: F401
from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min # noqa: F401
import time
from twisted.application import service
from twisted.application.internet import TimerService
@@ -18,7 +23,7 @@ from allmydata.interfaces import IStatsProducer
@implementer(IStatsProducer)
class CPUUsageMonitor(service.MultiService):
HISTORY_LENGTH = 15
POLL_INTERVAL = 60
POLL_INTERVAL = 60 # type: float
def __init__(self):
service.MultiService.__init__(self)

View File

@@ -19,7 +19,7 @@ import os, time, struct
try:
import cPickle as pickle
except ImportError:
import pickle
import pickle # type: ignore
from twisted.internet import reactor
from twisted.application import service
from allmydata.storage.common import si_b2a

View File

@@ -202,7 +202,7 @@ class ShareFile(object):
@implementer(RIBucketWriter)
class BucketWriter(Referenceable):
class BucketWriter(Referenceable): # type: ignore # warner/foolscap#78
def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
self.ss = ss
@@ -301,7 +301,7 @@ class BucketWriter(Referenceable):
@implementer(RIBucketReader)
class BucketReader(Referenceable):
class BucketReader(Referenceable): # type: ignore # warner/foolscap#78
def __init__(self, ss, sharefname, storage_index=None, shnum=None):
self.ss = ss

View File

@@ -581,7 +581,7 @@ class StorageServer(service.MultiService, Referenceable):
for share in six.viewvalues(shares):
share.add_or_renew_lease(lease_info)
def slot_testv_and_readv_and_writev(
def slot_testv_and_readv_and_writev( # type: ignore # warner/foolscap#78
self,
storage_index,
secrets,

Some files were not shown because too many files have changed in this diff.