Mirror of https://github.com/tahoe-lafs/tahoe-lafs.git (synced 2025-01-18 18:56:28 +00:00)

Merge branch 'master' into 3399.mypy

This commit is contained in commit 574613a892.
 .codecov.yml | 14
@@ -32,3 +32,17 @@ coverage:
  patch:
    default:
      threshold: 1%

codecov:
  # This is a public repository so supposedly we don't "need" to use an upload
  # token. However, using one makes sure that CI jobs running against forked
  # repositories have coverage uploaded to the right place in codecov so
  # their reports aren't incomplete.
  token: "abf679b6-e2e6-4b33-b7b5-6cfbd41ee691"

  notify:
    # The reference documentation suggests that this is the default setting:
    # https://docs.codecov.io/docs/codecovyml-reference#codecovnotifywait_for_ci
    # However observation suggests otherwise.
    wait_for_ci: true
 .pre-commit-config.yaml

@@ -1,9 +1,10 @@
repos:
- repo: local
- repo: "local"
  hooks:
  - id: codechecks
    name: codechecks
  - id: "codechecks"
    name: "codechecks"
    stages: ["push"]
    language: "system"
    files: ".py$"
    entry: "tox -e codechecks"
    language: system
    pass_filenames: false
    pass_filenames: true
 CREDITS | 4
@@ -203,3 +203,7 @@ N: meejah
E: meejah@meejah.ca
P: 0xC2602803128069A7, 9D5A 2BD5 688E CB88 9DEB CD3F C260 2803 1280 69A7
D: various bug-fixes and features

N: Viktoriia Savchuk
W: https://twitter.com/viktoriiasvchk
D: Developer community focused improvements on the README file.
 Makefile | 16
@@ -13,8 +13,6 @@ MAKEFLAGS += --warn-undefined-variables
MAKEFLAGS += --no-builtin-rules

# Local target variables
VCS_HOOK_SAMPLES=$(wildcard .git/hooks/*.sample)
VCS_HOOKS=$(VCS_HOOK_SAMPLES:%.sample=%)
PYTHON=python
export PYTHON
PYFLAKES=flake8

@@ -31,15 +29,6 @@ TEST_SUITE=allmydata
default:
        @echo "no default target"

.PHONY: install-vcs-hooks
## Install the VCS hooks to run linters on commit and all tests on push
install-vcs-hooks: .git/hooks/pre-commit .git/hooks/pre-push
.PHONY: uninstall-vcs-hooks
## Remove the VCS hooks
uninstall-vcs-hooks: .tox/create-venvs.log
        "./$(dir $(<))py36/bin/pre-commit" uninstall || true
        "./$(dir $(<))py36/bin/pre-commit" uninstall -t pre-push || true

.PHONY: test
## Run all tests and code reports
test: .tox/create-venvs.log

@@ -215,7 +204,7 @@ clean:
        rm -f *.pkg

.PHONY: distclean
distclean: clean uninstall-vcs-hooks
distclean: clean
        rm -rf src/*.egg-info
        rm -f src/allmydata/_version.py
        rm -f src/allmydata/_appname.py

@@ -261,6 +250,3 @@ src/allmydata/_version.py:

.tox/create-venvs.log: tox.ini setup.py
        tox --notest -p all | tee -a "$(@)"

$(VCS_HOOKS): .tox/create-venvs.log .pre-commit-config.yaml
        "./$(dir $(<))py36/bin/pre-commit" install --hook-type $(@:.git/hooks/%=%)
 README.rst | 152
@@ -1,97 +1,119 @@
==========
Tahoe-LAFS
==========
======================================
Free and Open decentralized data store
======================================

Tahoe-LAFS is a Free and Open decentralized cloud storage system. It
distributes your data across multiple servers. Even if some of the servers
fail or are taken over by an attacker, the entire file store continues to
function correctly, preserving your privacy and security.
|image0|

For full documentation, please see
http://tahoe-lafs.readthedocs.io/en/latest/ .
`Tahoe-LAFS <https://www.tahoe-lafs.org>`__ (Tahoe Least-Authority File Store) is the first free software / open-source storage technology that distributes your data across multiple servers. Even if some servers fail or are taken over by an attacker, the entire file store continues to function correctly, preserving your privacy and security.

|Contributor Covenant| |readthedocs| |travis| |circleci| |codecov|

INSTALLING
==========
Table of contents

There are three ways to install Tahoe-LAFS.
- `About Tahoe-LAFS <#about-tahoe-lafs>`__

using OS packages
^^^^^^^^^^^^^^^^^
- `Installation <#installation>`__

Pre-packaged versions are available for several operating systems:
- `Issues <#issues>`__

* Debian and Ubuntu users can ``apt-get install tahoe-lafs``
* NixOS, NetBSD (pkgsrc), ArchLinux, Slackware, and Gentoo have packages
  available, see `OSPackages`_ for details
* `Mac`_ and Windows installers are in development.
- `Documentation <#documentation>`__

via pip
^^^^^^^
- `Community <#community>`__

If you don't use an OS package, you'll need Python 2.7 and `pip`_. You may
also need a C compiler, and the development headers for python, libffi, and
OpenSSL. On a Debian-like system, use ``apt-get install build-essential
python-dev libffi-dev libssl-dev python-virtualenv``. On Windows, see
`<docs/windows.rst>`_.
- `Contributing <#contributing>`__

Then, to install the most recent release, just run:
- `FAQ <#faq>`__

* ``pip install tahoe-lafs``
- `License <#license>`__

from source
^^^^^^^^^^^
To install from source (either so you can hack on it, or just to run
pre-release code), you should create a virtualenv and install into that:
💡 About Tahoe-LAFS
-------------------

* ``git clone https://github.com/tahoe-lafs/tahoe-lafs.git``
* ``cd tahoe-lafs``
* ``virtualenv --python=python2.7 venv``
* ``venv/bin/pip install --upgrade setuptools``
* ``venv/bin/pip install --editable .``
* ``venv/bin/tahoe --version``
Tahoe-LAFS helps you to store files while granting confidentiality, integrity, and availability of your data.

To run the unit test suite:
How does it work? You run a client program on your computer, which talks to one or more storage servers on other computers. When you tell your client to store a file, it will encrypt that file, encode it into multiple pieces, then spread those pieces out among various servers. The pieces are all encrypted and protected against modifications. Later, when you ask your client to retrieve the file, it will find the necessary pieces, make sure they haven’t been corrupted, reassemble them, and decrypt the result.

* ``tox``
| |image2|
| *The image is taken from meejah's* \ `blog <https://blog.torproject.org/tor-heart-tahoe-lafs>`__ \ *post at Torproject.org.*

You can pass arguments to ``trial`` with an environment variable. For
example, you can run the test suite on multiple cores to speed it up:

* ``TAHOE_LAFS_TRIAL_ARGS="-j4" tox``
The client creates pieces (“shares”) that have a configurable amount of redundancy, so even if some servers fail, you can still get your data back. Corrupt shares are detected and ignored so that the system can tolerate server-side hard-drive errors. All files are encrypted (with a unique key) before uploading, so even a malicious server operator cannot read your data. The only thing you ask of the servers is that they can (usually) provide the shares when you ask for them: you aren’t relying upon them for confidentiality, integrity, or absolute availability.
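To make the "any k of N" recovery property concrete, here is a toy Python sketch. It demonstrates threshold recovery with polynomial interpolation over a prime field; this only illustrates the principle and is not Tahoe-LAFS's actual encoder (Tahoe uses the ``zfec`` erasure-coding library, and its shares carry encrypted file data rather than a shared secret)::

    # Toy k-of-N illustration (NOT Tahoe's real encoding): any k of the
    # n shares suffice to recover the value.
    import random

    PRIME = 2 ** 127 - 1  # a Mersenne prime; all arithmetic is mod PRIME

    def make_shares(secret, k, n):
        """Return n (x, y) shares of `secret`; any k of them recover it."""
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
        def poly(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, poly(x)) for x in range(1, n + 1)]

    def recover(shares):
        """Lagrange-interpolate the polynomial at x=0 from k shares."""
        secret = 0
        for xi, yi in shares:
            num = den = 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * -xj % PRIME
                    den = den * (xi - xj) % PRIME
            secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
        return secret

    shares = make_shares(secret=42, k=3, n=10)   # like shares.needed=3, shares.total=10
    assert recover(random.sample(shares, 3)) == 42  # any 3 of the 10 suffice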
For more detailed instructions, read `<docs/INSTALL.rst>`_ .
Tahoe-LAFS was first designed in 2007, following the "principle of least authority", a security best practice requiring system components to only have the privilege necessary to complete their intended function and not more.

Once ``tahoe --version`` works, see `<docs/running.rst>`_ to learn how to set
up your first Tahoe-LAFS node.
Please read more about Tahoe-LAFS architecture `here <docs/architecture.rst>`__.

LICENCE
=======
✅ Installation
---------------

Copyright 2006-2018 The Tahoe-LAFS Software Foundation
For more detailed instructions, read `docs/INSTALL.rst <docs/INSTALL.rst>`__ .

You may use this package under the GNU General Public License, version 2 or,
at your option, any later version. You may use this package under the
Transitive Grace Period Public Licence, version 1.0, or at your option, any
later version. (You may choose to use this package under the terms of either
licence, at your option.) See the file `COPYING.GPL`_ for the terms of the
GNU General Public License, version 2. See the file `COPYING.TGPPL`_ for
the terms of the Transitive Grace Period Public Licence, version 1.0.
- `Building Tahoe-LAFS on Windows <docs/windows.rst>`__

See `TGPPL.PDF`_ for why the TGPPL exists, graphically illustrated on three
slides.
- `OS-X Packaging <docs/OS-X.rst>`__

.. _OSPackages: https://tahoe-lafs.org/trac/tahoe-lafs/wiki/OSPackages
.. _Mac: docs/OS-X.rst
.. _pip: https://pip.pypa.io/en/stable/installing/
.. _COPYING.GPL: https://github.com/tahoe-lafs/tahoe-lafs/blob/master/COPYING.GPL
.. _COPYING.TGPPL: https://github.com/tahoe-lafs/tahoe-lafs/blob/master/COPYING.TGPPL.rst
.. _TGPPL.PDF: https://tahoe-lafs.org/~zooko/tgppl.pdf
Once tahoe --version works, see `docs/running.rst <docs/running.rst>`__ to learn how to set up your first Tahoe-LAFS node.

----

🤖 Issues
---------

Tahoe-LAFS uses a Trac instance to track `issues <https://www.tahoe-lafs.org/trac/tahoe-lafs/wiki/ViewTickets>`__. Please email jean-paul plus tahoe-lafs at leastauthority dot com for an account.

📑 Documentation
----------------

You can find the full Tahoe-LAFS documentation at our `documentation site <http://tahoe-lafs.readthedocs.io/en/latest/>`__.

💬 Community
------------

Get involved with the Tahoe-LAFS community:

- Chat with Tahoe-LAFS developers at #tahoe-lafs chat on irc.freenode.net or `Slack <https://join.slack.com/t/tahoe-lafs/shared_invite/zt-jqfj12r5-ZZ5z3RvHnubKVADpP~JINQ>`__.

- Join our `weekly conference calls <https://www.tahoe-lafs.org/trac/tahoe-lafs/wiki/WeeklyMeeting>`__ with core developers and interested community members.

- Subscribe to `the tahoe-dev mailing list <https://www.tahoe-lafs.org/cgi-bin/mailman/listinfo/tahoe-dev>`__, the community forum for discussion of Tahoe-LAFS design, implementation, and usage.

🤗 Contributing
---------------

As a community-driven open source project, Tahoe-LAFS welcomes contributions of any form:

- `Code patches <https://tahoe-lafs.org/trac/tahoe-lafs/wiki/Patches>`__

- `Documentation improvements <https://tahoe-lafs.org/trac/tahoe-lafs/wiki/Doc>`__

- `Bug reports <https://tahoe-lafs.org/trac/tahoe-lafs/wiki/HowToReportABug>`__

- `Patch reviews <https://tahoe-lafs.org/trac/tahoe-lafs/wiki/PatchReviewProcess>`__

Before authoring or reviewing a patch, please familiarize yourself with the `Coding Standard <https://tahoe-lafs.org/trac/tahoe-lafs/wiki/CodingStandards>`__ and the `Contributor Code of Conduct <docs/CODE_OF_CONDUCT.md>`__.

❓ FAQ
------

Need more information? Please check our `FAQ page <https://www.tahoe-lafs.org/trac/tahoe-lafs/wiki/FAQ>`__.

📄 License
----------

Copyright 2006-2020 The Tahoe-LAFS Software Foundation

You may use this package under the GNU General Public License, version 2 or, at your option, any later version. You may use this package under the Transitive Grace Period Public Licence, version 1.0, or at your option, any later version. (You may choose to use this package under the terms of either license, at your option.) See the file `COPYING.GPL <COPYING.GPL>`__ for the terms of the GNU General Public License, version 2. See the file `COPYING.TGPPL <COPYING.TGPPL.rst>`__ for the terms of the Transitive Grace Period Public Licence, version 1.0.

See `TGPPL.PDF <https://tahoe-lafs.org/~zooko/tgppl.pdf>`__ for why the TGPPL exists, graphically illustrated on three slides.

.. |image0| image:: docs/_static/media/image2.png
   :width: 3in
   :height: 0.91667in
.. |image2| image:: docs/_static/media/image1.png
   :width: 6.9252in
   :height: 2.73611in
.. |readthedocs| image:: http://readthedocs.org/projects/tahoe-lafs/badge/?version=latest
   :alt: documentation status
   :target: http://tahoe-lafs.readthedocs.io/en/latest/?badge=latest
 docs/_static/media/image1.png (vendored, new file) | BIN  (binary file not shown; size after: 111 KiB)
 docs/_static/media/image2.png (vendored, new file) | BIN  (binary file not shown; size after: 4.3 KiB)
@@ -273,7 +273,7 @@ Then, do the following:
   [connections]
   tcp = tor

* Launch the Tahoe server with ``tahoe start $NODEDIR``
* Launch the Tahoe server with ``tahoe run $NODEDIR``

The ``tub.port`` section will cause the Tahoe server to listen on PORT, but
bind the listening socket to the loopback interface, which is not reachable

@@ -435,4 +435,3 @@ It is therefore important that your I2P router is sharing bandwidth with other
routers, so that you can give back as you use I2P. This will never impair the
performance of your Tahoe-LAFS node, because your I2P router will always
prioritize your own traffic.
@@ -75,7 +75,7 @@ The item descriptions below use the following types:

Node Types
==========

A node can be a client/server, an introducer, or a statistics gatherer.
A node can be a client/server or an introducer.

Client/server nodes provide one or more of the following services:

@@ -365,7 +365,7 @@ set the ``tub.location`` option described below.
    also generally reduced when operating in private mode.

    When False, any of the following configuration problems will cause
    ``tahoe start`` to throw a PrivacyError instead of starting the node:
    ``tahoe run`` to throw a PrivacyError instead of starting the node:

    * ``[node] tub.location`` contains any ``tcp:`` hints

@@ -398,13 +398,13 @@ This section controls *when* Tor and I2P are used. The ``[tor]`` and
``[i2p]`` sections (described later) control *how* Tor/I2P connections are
managed.

All Tahoe nodes need to make a connection to the Introducer; the ``[client]
introducer.furl`` setting (described below) indicates where the Introducer
lives. Tahoe client nodes must also make connections to storage servers:
these targets are specified in announcements that come from the Introducer.
Both are expressed as FURLs (a Foolscap URL), which include a list of
"connection hints". Each connection hint describes one (of perhaps many)
network endpoints where the service might live.
All Tahoe nodes need to make a connection to the Introducer; the
``private/introducers.yaml`` file (described below) configures where one or more
Introducers live. Tahoe client nodes must also make connections to storage
servers: these targets are specified in announcements that come from the
Introducer. Both are expressed as FURLs (a Foolscap URL), which include a
list of "connection hints". Each connection hint describes one (of perhaps
many) network endpoints where the service might live.

Connection hints include a type, and look like:

@@ -580,6 +580,8 @@ Client Configuration

``introducer.furl = (FURL string, mandatory)``

    DEPRECATED. See :ref:`introducer-definitions`.

    This FURL tells the client how to connect to the introducer. Each
    Tahoe-LAFS grid is defined by an introducer. The introducer's FURL is
    created by the introducer node and written into its private base

@@ -591,11 +593,6 @@ Client Configuration
    If provided, the node will attempt to connect to and use the given helper
    for uploads. See :doc:`helper` for details.

``stats_gatherer.furl = (FURL string, optional)``

    If provided, the node will connect to the given stats gatherer and
    provide it with operational statistics.

``shares.needed = (int, optional) aka "k", default 3``

``shares.total = (int, optional) aka "N", N >= k, default 10``

@@ -909,11 +906,6 @@ This section describes these other files.
    This file is used to construct an introducer, and is created by the
    "``tahoe create-introducer``" command.

``tahoe-stats-gatherer.tac``

    This file is used to construct a statistics gatherer, and is created by the
    "``tahoe create-stats-gatherer``" command.

``private/control.furl``

    This file contains a FURL that provides access to a control port on the

@@ -965,29 +957,28 @@ This section describes these other files.
    with as many people as possible, put the empty string (so that
    ``private/convergence`` is a zero-length file).

Additional Introducer Definitions
=================================
.. _introducer-definitions:

The ``private/introducers.yaml`` file defines additional Introducers. The
first introducer is defined in ``tahoe.cfg``, in ``[client]
introducer.furl``. To use two or more Introducers, choose a locally-unique
"petname" for each one, then define their FURLs in
``private/introducers.yaml`` like this::
Introducer Definitions
======================

The ``private/introducers.yaml`` file defines Introducers.
Choose a locally-unique "petname" for each one, then define their FURLs in ``private/introducers.yaml`` like this::

  introducers:
    petname2:
      furl: FURL2
      furl: "FURL2"
    petname3:
      furl: FURL3
      furl: "FURL3"

Servers will announce themselves to all configured introducers. Clients will
merge the announcements they receive from all introducers. Nothing will
re-broadcast an announcement (i.e. telling introducer 2 about something you
heard from introducer 1).

If you omit the introducer definitions from both ``tahoe.cfg`` and
``introducers.yaml``, the node will not use an Introducer at all. Such
"introducerless" clients must be configured with static servers (described
If you omit the introducer definitions from ``introducers.yaml``,
the node will not use an Introducer at all.
Such "introducerless" clients must be configured with static servers (described
below), or they will not be able to upload and download files.
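As a rough sketch of how a program might consume this file (assuming the PyYAML library is available; ``load_introducer_furls`` is a hypothetical helper for illustration, not the parser Tahoe-LAFS itself uses)::

    import yaml

    def load_introducer_furls(path):
        """Return a {petname: furl} mapping from an introducers.yaml file."""
        with open(path) as f:
            config = yaml.safe_load(f) or {}
        return {
            petname: entry["furl"]
            for petname, entry in config.get("introducers", {}).items()
        }

    # e.g. {"petname2": "FURL2", "petname3": "FURL3"} for the file above
    furls = load_introducer_furls("private/introducers.yaml")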
Static Server Definitions

@@ -1152,7 +1143,6 @@ a legal one.
  timeout.disconnect = 1800

  [client]
  introducer.furl = pb://ok45ssoklj4y7eok5c3xkmj@tcp:tahoe.example:44801/ii3uumo
  helper.furl = pb://ggti5ssoklj4y7eok5c3xkmj@tcp:helper.tahoe.example:7054/kk8lhr

  [storage]

@@ -1163,6 +1153,11 @@ a legal one.
  [helper]
  enabled = True

To be introduced to storage servers, here is a sample ``private/introducers.yaml`` which can be used in conjunction with it::

  introducers:
    examplegrid:
      furl: "pb://ok45ssoklj4y7eok5c3xkmj@tcp:tahoe.example:44801/ii3uumo"

Old Configuration Files
=======================
|
@ -5,23 +5,17 @@ Developer Guide
|
||||
Pre-commit Checks
|
||||
-----------------
|
||||
|
||||
This project is configured for use with `pre-commit`_ to install `VCS/git hooks`_ which
|
||||
perform some static code analysis checks and other code checks to catch common errors
|
||||
before each commit and to run the full self-test suite to find less obvious regressions
|
||||
before each push to a remote.
|
||||
This project is configured for use with `pre-commit`_ to install `VCS/git hooks`_ which perform some static code analysis checks and other code checks to catch common errors.
|
||||
These hooks can be configured to run before commits or pushes
|
||||
|
||||
For example::
|
||||
|
||||
tahoe-lafs $ make install-vcs-hooks
|
||||
...
|
||||
+ ./.tox//py36/bin/pre-commit install --hook-type pre-commit
|
||||
pre-commit installed at .git/hooks/pre-commit
|
||||
+ ./.tox//py36/bin/pre-commit install --hook-type pre-push
|
||||
tahoe-lafs $ pre-commit install --hook-type pre-push
|
||||
pre-commit installed at .git/hooks/pre-push
|
||||
tahoe-lafs $ python -c "import pathlib; pathlib.Path('src/allmydata/tabbed.py').write_text('def foo():\\n\\tpass\\n')"
|
||||
tahoe-lafs $ git add src/allmydata/tabbed.py
|
||||
tahoe-lafs $ echo "undefined" > src/allmydata/undefined_name.py
|
||||
tahoe-lafs $ git add src/allmydata/undefined_name.py
|
||||
tahoe-lafs $ git commit -a -m "Add a file that violates flake8"
|
||||
...
|
||||
tahoe-lafs $ git push
|
||||
codechecks...............................................................Failed
|
||||
- hook id: codechecks
|
||||
- exit code: 1
|
||||
@ -30,58 +24,17 @@ For example::
|
||||
codechecks inst-nodeps: ...
|
||||
codechecks installed: ...
|
||||
codechecks run-test-pre: PYTHONHASHSEED='...'
|
||||
codechecks run-test: commands[0] | flake8 src static misc setup.py
|
||||
src/allmydata/tabbed.py:2:1: W191 indentation contains tabs
|
||||
ERROR: InvocationError for command ./tahoe-lafs/.tox/codechecks/bin/flake8 src static misc setup.py (exited with code 1)
|
||||
codechecks run-test: commands[0] | flake8 src/allmydata/undefined_name.py
|
||||
src/allmydata/undefined_name.py:1:1: F821 undefined name 'undefined'
|
||||
ERROR: InvocationError for command ./tahoe-lafs/.tox/codechecks/bin/flake8 src/allmydata/undefined_name.py (exited with code 1)
|
||||
___________________________________ summary ____________________________________
|
||||
ERROR: codechecks: commands failed
|
||||
...
|
||||
|
||||
To uninstall::
|
||||
|
||||
tahoe-lafs $ make uninstall-vcs-hooks
|
||||
...
|
||||
+ ./.tox/py36/bin/pre-commit uninstall
|
||||
pre-commit uninstalled
|
||||
+ ./.tox/py36/bin/pre-commit uninstall -t pre-push
|
||||
tahoe-lafs $ pre-commit uninstall --hook-type pre-push
|
||||
pre-push uninstalled
|
||||
|
||||
Note that running the full self-test suite takes several minutes so expect pushing to
|
||||
take some time. If you can't or don't want to wait for the hooks in some cases, use the
|
||||
``--no-verify`` option to ``$ git commit ...`` or ``$ git push ...``. Alternatively,
|
||||
see the `pre-commit`_ documentation and CLI help output and use the committed
|
||||
`pre-commit configuration`_ as a starting point to write a local, uncommitted
|
||||
``../.pre-commit-config.local.yaml`` configuration to use instead. For example::
|
||||
|
||||
tahoe-lafs $ ./.tox/py36/bin/pre-commit --help
|
||||
tahoe-lafs $ ./.tox/py36/bin/pre-commit instll --help
|
||||
tahoe-lafs $ cp "./.pre-commit-config.yaml" "./.pre-commit-config.local.yaml"
|
||||
tahoe-lafs $ editor "./.pre-commit-config.local.yaml"
|
||||
...
|
||||
tahoe-lafs $ ./.tox/py36/bin/pre-commit install -c "./.pre-commit-config.local.yaml" -t pre-push
|
||||
pre-commit installed at .git/hooks/pre-push
|
||||
tahoe-lafs $ git commit -a -m "Add a file that violates flake8"
|
||||
[3398.pre-commit 29f8f43d2] Add a file that violates flake8
|
||||
1 file changed, 2 insertions(+)
|
||||
create mode 100644 src/allmydata/tabbed.py
|
||||
tahoe-lafs $ git push
|
||||
...
|
||||
codechecks...............................................................Failed
|
||||
- hook id: codechecks
|
||||
- exit code: 1
|
||||
|
||||
GLOB sdist-make: ./tahoe-lafs/setup.py
|
||||
codechecks inst-nodeps: ...
|
||||
codechecks installed: ...
|
||||
codechecks run-test-pre: PYTHONHASHSEED='...'
|
||||
codechecks run-test: commands[0] | flake8 src static misc setup.py
|
||||
src/allmydata/tabbed.py:2:1: W191 indentation contains tabs
|
||||
ERROR: InvocationError for command ./tahoe-lafs/.tox/codechecks/bin/flake8 src static misc setup.py (exited with code 1)
|
||||
___________________________________ summary ____________________________________
|
||||
ERROR: codechecks: commands failed
|
||||
...
|
||||
|
||||
error: failed to push some refs to 'github.com:jaraco/tahoe-lafs.git'
|
||||
|
||||
|
||||
.. _`pre-commit`: https://pre-commit.com
|
||||
|
@@ -85,7 +85,7 @@ Node Management

"``tahoe create-node [NODEDIR]``" is the basic make-a-new-node
command. It creates a new directory and populates it with files that
will allow the "``tahoe start``" and related commands to use it later
will allow the "``tahoe run``" and related commands to use it later
on. ``tahoe create-node`` creates nodes that have client functionality
(upload/download files), web API services (controlled by the
'[node]web.port' configuration), and storage services (unless

@@ -94,8 +94,7 @@ on. ``tahoe create-node`` creates nodes that have client functionality
NODEDIR defaults to ``~/.tahoe/`` , and newly-created nodes default to
publishing a web server on port 3456 (limited to the loopback interface, at
127.0.0.1, to restrict access to other programs on the same host). All of the
other "``tahoe``" subcommands use corresponding defaults (with the exception
that "``tahoe run``" defaults to running a node in the current directory).
other "``tahoe``" subcommands use corresponding defaults.

"``tahoe create-client [NODEDIR]``" creates a node with no storage service.
That is, it behaves like "``tahoe create-node --no-storage [NODEDIR]``".

@@ -117,25 +116,6 @@ the same way on all platforms and logs to stdout. If you want to run
the process as a daemon, it is recommended that you use your favourite
daemonization tool.

The now-deprecated "``tahoe start [NODEDIR]``" command will launch a
previously-created node. It will launch the node into the background
using ``tahoe daemonize`` (an internal-only command, not for user
use). On some platforms (including Windows) this command is unable to
run a daemon in the background; in that case it behaves in the same
way as "``tahoe run``". ``tahoe start`` also monitors the logs for up
to 5 seconds looking for either a successful startup message or for
early failure messages and produces an appropriate exit code. You are
encouraged to use ``tahoe run`` along with your favourite
daemonization tool instead of this. ``tahoe start`` is maintained for
backwards compatibility of users already using it; new scripts should
depend on ``tahoe run``.

"``tahoe stop [NODEDIR]``" will shut down a running node. "``tahoe
restart [NODEDIR]``" will stop and then restart a running
node. Similar to above, you should use ``tahoe run`` instead alongside
your favourite daemonization tool.


File Store Manipulation
=======================
@@ -2145,7 +2145,7 @@ you could do the following::

  tahoe debug dump-cap URI:CHK:n7r3m6wmomelk4sep3kw5cvduq:os7ijw5c3maek7pg65e5254k2fzjflavtpejjyhshpsxuqzhcwwq:3:20:14861
   -> storage index: whpepioyrnff7orecjolvbudeu
  echo "whpepioyrnff7orecjolvbudeu my puppy told me to" >>$NODEDIR/access.blacklist
  tahoe restart $NODEDIR
  # ... restart the node to re-read configuration ...
  tahoe get URI:CHK:n7r3m6wmomelk4sep3kw5cvduq:os7ijw5c3maek7pg65e5254k2fzjflavtpejjyhshpsxuqzhcwwq:3:20:14861
   -> error, 403 Access Prohibited: my puppy told me to
@@ -20,10 +20,10 @@ Config setting                   File                                  Comment
``[node]log_gatherer.furl``      ``BASEDIR/log_gatherer.furl``         (one per line)
``[node]timeout.keepalive``      ``BASEDIR/keepalive_timeout``
``[node]timeout.disconnect``     ``BASEDIR/disconnect_timeout``
``[client]introducer.furl``      ``BASEDIR/introducer.furl``
``BASEDIR/introducer.furl``      ``BASEDIR/private/introducers.yaml``
``[client]helper.furl``          ``BASEDIR/helper.furl``
``[client]key_generator.furl``   ``BASEDIR/key_generator.furl``
``[client]stats_gatherer.furl``  ``BASEDIR/stats_gatherer.furl``
``BASEDIR/stats_gatherer.furl``  Stats gatherer has been removed.
``[storage]enabled``             ``BASEDIR/no_storage``                (``False`` if ``no_storage`` exists)
``[storage]readonly``            ``BASEDIR/readonly_storage``          (``True`` if ``readonly_storage`` exists)
``[storage]sizelimit``           ``BASEDIR/sizelimit``

@@ -47,3 +47,10 @@ the now (since Tahoe-LAFS v1.3.0) unsupported
addresses specified in ``advertised_ip_addresses`` were used in
addition to any that were automatically discovered), whereas the new
``tahoe.cfg`` directive is not (``tub.location`` is used verbatim).

The stats gatherer has been broken at least since Tahoe-LAFS v1.13.0.
The (broken) functionality of ``[client]stats_gatherer.furl`` (which
was previously in ``BASEDIR/stats_gatherer.furl``) is scheduled to be
completely removed after Tahoe-LAFS v1.15.0. After that point, if
your configuration contains a ``[client]stats_gatherer.furl``, your
node will refuse to start.
@@ -128,10 +128,9 @@ provided in ``misc/incident-gatherer/support_classifiers.py`` . There is
roughly one category for each ``log.WEIRD``-or-higher level event in the
Tahoe source code.

The incident gatherer is created with the "``flogtool
create-incident-gatherer WORKDIR``" command, and started with "``tahoe
start``". The generated "``gatherer.tac``" file should be modified to add
classifier functions.
The incident gatherer is created with the "``flogtool create-incident-gatherer
WORKDIR``" command, and started with "``tahoe run``". The generated
"``gatherer.tac``" file should be modified to add classifier functions.

The incident gatherer writes incident names (which are simply the relative
pathname of the ``incident-\*.flog.bz2`` file) into ``classified/CATEGORY``.

@@ -175,7 +174,7 @@ things that happened on multiple machines (such as comparing a client node
making a request with the storage servers that respond to that request).

Create the Log Gatherer with the "``flogtool create-gatherer WORKDIR``"
command, and start it with "``tahoe start``". Then copy the contents of the
command, and start it with "``twistd -ny gatherer.tac``". Then copy the contents of the
``log_gatherer.furl`` file it creates into the ``BASEDIR/tahoe.cfg`` file
(under the key ``log_gatherer.furl`` of the section ``[node]``) of all nodes
that should be sending it log events. (See :doc:`configuration`)
@@ -45,9 +45,6 @@ Create a client node (with storage initially disabled).
.TP
.B \f[B]create-introducer\f[]
Create an introducer node.
.TP
.B \f[B]create-stats-gatherer\f[]
Create a stats-gatherer service.
.SS OPTIONS
.TP
.B \f[B]-C,\ --basedir=\f[]
@@ -65,9 +65,9 @@ Running a Client

To construct a client node, run “``tahoe create-client``”, which will create
``~/.tahoe`` to be the node's base directory. Acquire the ``introducer.furl``
(see below if you are running your own introducer, or use the one from the
`TestGrid page`_), and paste it after ``introducer.furl =`` in the
``[client]`` section of ``~/.tahoe/tahoe.cfg``. Then use “``tahoe run
~/.tahoe``”. After that, the node should be off and running. The first thing
`TestGrid page`_), and write it to ``~/.tahoe/private/introducers.yaml``
(see :ref:`introducer-definitions`). Then use “``tahoe run ~/.tahoe``”.
After that, the node should be off and running. The first thing
it will do is connect to the introducer and get itself connected to all other
nodes on the grid.

@@ -81,9 +81,7 @@ does not offer its disk space to other nodes. To configure other behavior,
use “``tahoe create-node``” or see :doc:`configuration`.

The “``tahoe run``” command above will run the node in the foreground.
On Unix, you can run it in the background instead by using the
“``tahoe start``” command. To stop a node started in this way, use
“``tahoe stop``”. ``tahoe --help`` gives a summary of all commands.
``tahoe --help`` gives a summary of all commands.


Running a Server or Introducer

@@ -99,12 +97,10 @@ and ``--location`` arguments.

To construct an introducer, create a new base directory for it (the name
of the directory is up to you), ``cd`` into it, and run “``tahoe
create-introducer --hostname=example.net .``” (but using the hostname of
your VPS). Now run the introducer using “``tahoe start .``”. After it
your VPS). Now run the introducer using “``tahoe run .``”. After it
starts, it will write a file named ``introducer.furl`` into the
``private/`` subdirectory of that base directory. This file contains the
URL the other nodes must use in order to connect to this introducer.
(Note that “``tahoe run .``” doesn't work for introducers; this is a
known issue: `#937`_.)

You can distribute your Introducer fURL securely to new clients by using
the ``tahoe invite`` command. This will prepare some JSON to send to the
@@ -8,6 +8,7 @@ the data formats used by Tahoe.
   :maxdepth: 2

   outline
   url
   uri
   file-encoding
   URI-extension
 docs/specifications/url.rst (new file) | 165

@@ -0,0 +1,165 @@
URLs
====

The goal of this document is to completely specify the construction and use of the URLs used by Tahoe-LAFS for service location.
This includes, but is not limited to, the original Foolscap-based URLs.
These are not to be confused with the URI-like capabilities Tahoe-LAFS uses to refer to stored data.
An attempt is also made to outline the rationale for certain choices about these URLs.
The intended audience for this document is Tahoe-LAFS maintainers and other developers interested in interoperating with Tahoe-LAFS or these URLs.

Background
----------

Tahoe-LAFS first used Foolscap_ for network communication.
Foolscap connection setup takes as an input a Foolscap URL or a *fURL*.
A fURL includes three components:

* the base32-encoded SHA1 hash of the DER form of an x509v3 certificate
* zero or more network addresses [1]_
* an object identifier

A Foolscap client tries to connect to each network address in turn.
If a connection is established then TLS is negotiated.
The server is authenticated by matching its certificate against the hash in the fURL.
A matching certificate serves as proof that the handshaking peer is the correct server.
This is the process by which the client authenticates the server.

The client can then exercise further Foolscap functionality using the fURL's object identifier.
If the object identifier is an unguessable, secret string then it serves as a capability.
This unguessable identifier is sometimes called a `swiss number`_ (or swissnum).
The client's use of the swissnum is what allows the server to authorize the client.

.. _`swiss number`: http://wiki.erights.org/wiki/Swiss_number

NURLs
-----

The authentication and authorization properties of fURLs are a good fit for Tahoe-LAFS' requirements.
These are not inherently tied to the Foolscap protocol itself.
In particular they are beneficial to :doc:`../proposed/http-storage-node-protocol` which uses HTTP instead of Foolscap.
It is conceivable they will also be used with WebSockets at some point.

Continuing to refer to these URLs as fURLs when they are being used for other protocols may cause confusion.
Therefore,
this document coins the name **NURL** for these URLs.
This can be considered to expand to "**N**\ ew URLs" or "Authe\ **N**\ ticating URLs" or "Authorizi\ **N**\ g URLs" as the reader prefers.

The anticipated use for a **NURL** will still be to establish a TLS connection to a peer.
The protocol run over that TLS connection could be Foolscap though it is more likely to be an HTTP-based protocol (such as GBS).

Syntax
------

The EBNF for a NURL is as follows::

  nurl = scheme, hash, "@", net-loc-list, "/", swiss-number, [ version1 ]

  scheme = "pb://"

  hash = unreserved

  net-loc-list = net-loc, [ { ",", net-loc } ]
  net-loc = tcp-loc | tor-loc | i2p-loc

  tcp-loc = [ "tcp:" ], hostname, [ ":" port ]
  tor-loc = "tor:", hostname, [ ":" port ]
  i2p-loc = "i2p:", i2p-addr, [ ":" port ]

  i2p-addr = { unreserved }, ".i2p"
  hostname = domain | IPv4address | IPv6address

  swiss-number = segment

  version1 = "#v=1"

See https://tools.ietf.org/html/rfc3986#section-3.3 for the definition of ``segment``.
See https://tools.ietf.org/html/rfc2396#appendix-A for the definition of ``unreserved``.
See https://tools.ietf.org/html/draft-main-ipaddr-text-rep-02#section-3.1 for the definition of ``IPv4address``.
See https://tools.ietf.org/html/draft-main-ipaddr-text-rep-02#section-3.2 for the definition of ``IPv6address``.
See https://tools.ietf.org/html/rfc1035#section-2.3.1 for the definition of ``domain``.
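For illustration, a rough Python sketch of this grammar follows. It is a simplification: it separates the components of a NURL but does not validate hostnames or addresses against the RFCs cited above::

    import re

    # Rough translation of the EBNF above into a regular expression.
    NURL_RE = re.compile(
        r"^pb://(?P<hash>[A-Za-z0-9\-._~]+)"  # hash: unreserved characters
        r"@(?P<netlocs>[^/]*)"                # zero or more location hints
        r"/(?P<swissnum>[^/#]+)"              # swiss-number: one path segment
        r"(?P<v1>#v=1)?$"                     # optional version 1 marker
    )

    def parse_nurl(nurl):
        m = NURL_RE.match(nurl)
        if m is None:
            raise ValueError("not a NURL: {!r}".format(nurl))
        netlocs = m.group("netlocs")
        return {
            "hash": m.group("hash"),
            "hints": netlocs.split(",") if netlocs else [],
            "swissnum": m.group("swissnum"),
            "version": 1 if m.group("v1") else 0,
        }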
Versions
--------

Though all NURLs are syntactically compatible some semantic differences are allowed.
These differences are separated into distinct versions.

Version 0
---------

A Foolscap fURL is considered the canonical definition of a version 0 NURL.
Notably,
the hash component is defined as the base32-encoded SHA1 hash of the DER form of an x509v3 certificate.
A version 0 NURL is identified by the absence of the ``v=1`` fragment.

Examples
~~~~~~~~

* ``pb://sisi4zenj7cxncgvdog7szg3yxbrnamy@tcp:127.1:34399/xphmwz6lx24rh2nxlinni``
* ``pb://2uxmzoqqimpdwowxr24q6w5ekmxcymby@localhost:47877/riqhpojvzwxujhna5szkn``

Version 1
---------

The hash component of a version 1 NURL differs in three ways from the prior version.

1. The hash function used is SHA3-224 instead of SHA1.
   The security of SHA1 `continues to be eroded`_.
   Contrariwise SHA3 is currently the most recent addition to the SHA family by NIST.
   The 224 bit instance is chosen to keep the output short and because it offers greater collision resistance than SHA1 was thought to offer even at its inception
   (prior to security research showing actual collision resistance is lower).
2. The hash is computed over the certificate's SPKI instead of the whole certificate.
   This allows certificate re-generation so long as the public key remains the same.
   This is useful to allow contact information to be updated or extension of validity period.
   Use of an SPKI hash has also been `explored by the web community`_ during its flirtation with using it for HTTPS certificate pinning
   (though this is now largely abandoned).

   .. note::
      *Only* the certificate's keypair is pinned by the SPKI hash.
      The freedom to change every other part of the certificate is coupled with the fact that all other parts of the certificate contain arbitrary information set by the private key holder.
      It is neither guaranteed nor expected that a certificate-issuing authority has validated this information.
      Therefore,
      *all* certificate fields should be considered within the context of the relationship identified by the SPKI hash.

3. The hash is encoded using urlsafe-base64 (without padding) instead of base32.
   This provides a more compact representation and minimizes the usability impacts of switching from a 160 bit hash to a 224 bit hash.

A version 1 NURL is identified by the presence of the ``v=1`` fragment.
Though the length of the hash string (38 bytes) could also be used to differentiate it from a version 0 NURL,
there is no guarantee that this will be effective in differentiating it from future versions so this approach should not be used.

It is possible for a client to unilaterally upgrade a version 0 NURL to a version 1 NURL.
After establishing and authenticating a connection the client will have received a copy of the server's certificate.
This is sufficient to compute the new hash and rewrite the NURL to upgrade it to version 1.
This provides stronger authentication assurances for future uses but it is not required.
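A sketch of that computation, using the ``cryptography`` library (an assumption for illustration; this mirrors the three differences listed above but is not necessarily how Tahoe-LAFS itself implements it)::

    import hashlib
    from base64 import urlsafe_b64encode
    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    def v1_hash_from_certificate(cert_pem):
        cert = x509.load_pem_x509_certificate(cert_pem)
        # Difference 2: hash only the SubjectPublicKeyInfo, not the whole
        # certificate, so the certificate can be re-generated later.
        spki = cert.public_key().public_bytes(
            serialization.Encoding.DER,
            serialization.PublicFormat.SubjectPublicKeyInfo,
        )
        # Differences 1 and 3: SHA3-224, urlsafe-base64 without padding.
        digest = hashlib.sha3_224(spki).digest()
        return urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

The resulting string is 38 characters long, matching the hash length noted above.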
Examples
~~~~~~~~

* ``pb://1WUX44xKjKdpGLohmFcBNuIRN-8rlv1Iij_7rQ@tcp:127.1:34399/jhjbc3bjbhk#v=1``
* ``pb://azEu8vlRpnEeYm0DySQDeNY3Z2iJXHC_bsbaAw@localhost:47877/64i4aokv4ej#v=1``

.. _`continues to be eroded`: https://en.wikipedia.org/wiki/SHA-1#Cryptanalysis_and_validation
.. _`explored by the web community`: https://www.imperialviolet.org/2011/05/04/pinning.html
.. _Foolscap: https://github.com/warner/foolscap

.. [1] ``foolscap.furl.decode_furl`` is taken as the canonical definition of the syntax of a fURL.
   The **location hints** part of the fURL,
   as it is referred to in Foolscap,
   is matched by the regular expression fragment ``([^/]*)``.
   Since this matches the empty string,
   no network addresses are required to form a fURL.
   The supporting code around the regular expression also takes extra steps to allow an empty string to match here.

Open Questions
--------------

1. Should we make a hard recommendation that all certificate fields are ignored?
   The system makes no guarantees about validation of these fields.
   Is it just an unnecessary risk to let a user see them?

2. Should the version specifier be a query-arg-alike or a fragment-alike?
   The value is only necessary on the client side which makes it similar to an HTTP URL fragment.
   The current Tahoe-LAFS configuration parsing code has special handling of the fragment character (``#``) which makes it unusable.
   However,
   the configuration parsing code is easily changed.
@@ -6,8 +6,7 @@ Tahoe Statistics

1. `Overview`_
2. `Statistics Categories`_
3. `Running a Tahoe Stats-Gatherer Service`_
4. `Using Munin To Graph Stats Values`_
3. `Using Munin To Graph Stats Values`_

Overview
========

@@ -243,92 +242,6 @@ The currently available stats (as of release 1.6.0 or so) are described here:
    the process was started. Ticket #472 indicates that .total may
    sometimes be negative due to wraparound of the kernel's counter.

**stats.load_monitor.\***

    When enabled, the "load monitor" continually schedules a one-second
    callback, and measures how late the response is. This estimates system load
    (if the system is idle, the response should be on time). This is only
    enabled if a stats-gatherer is configured.

    avg_load
        average "load" value (seconds late) over the last minute

    max_load
        maximum "load" value over the last minute


Running a Tahoe Stats-Gatherer Service
======================================

The "stats-gatherer" is a simple daemon that periodically collects stats from
several tahoe nodes. It could be useful, e.g., in a production environment,
where you want to monitor dozens of storage servers from a central management
host. It merely gathers statistics from many nodes into a single place: it
does not do any actual analysis.

The stats gatherer listens on a network port using the same Foolscap_
connection library that Tahoe clients use to connect to storage servers.
Tahoe nodes can be configured to connect to the stats gatherer and publish
their stats on a periodic basis. (In fact, what happens is that nodes connect
to the gatherer and offer it a second FURL which points back to the node's
"stats port", which the gatherer then uses to pull stats on a periodic basis.
The initial connection is flipped to allow the nodes to live behind NAT
boxes, as long as the stats-gatherer has a reachable IP address.)

.. _Foolscap: https://foolscap.lothar.com/trac

The stats-gatherer is created in the same fashion as regular tahoe client
nodes and introducer nodes. Choose a base directory for the gatherer to live
in (but do not create the directory). Choose the hostname that should be
advertised in the gatherer's FURL. Then run:

::

  tahoe create-stats-gatherer --hostname=HOSTNAME $BASEDIR

and start it with "tahoe start $BASEDIR". Once running, the gatherer will
write a FURL into $BASEDIR/stats_gatherer.furl .

To configure a Tahoe client/server node to contact the stats gatherer, copy
this FURL into the node's tahoe.cfg file, in a section named "[client]",
under a key named "stats_gatherer.furl", like so:

::

  [client]
  stats_gatherer.furl = pb://qbo4ktl667zmtiuou6lwbjryli2brv6t@HOSTNAME:PORTNUM/wxycb4kaexzskubjnauxeoptympyf45y

or simply copy the stats_gatherer.furl file into the node's base directory
(next to the tahoe.cfg file): it will be interpreted in the same way.

When the gatherer is created, it will allocate a random unused TCP port, so
it should not conflict with anything else that you have running on that host
at that time. To explicitly control which port it uses, run the creation
command with ``--location=`` and ``--port=`` instead of ``--hostname=``. If
you use a hostname of ``example.org`` and a port number of ``1234``, then
run::

  tahoe create-stats-gatherer --location=tcp:example.org:1234 --port=tcp:1234

``--location=`` is a Foolscap FURL hints string (so it can be a
comma-separated list of connection hints), and ``--port=`` is a Twisted
"server endpoint specification string", as described in :doc:`configuration`.

Once running, the stats gatherer will create a standard JSON file in
``$BASEDIR/stats.json``. Once a minute, the gatherer will pull stats
information from every connected node and write them into the file. The file
will contain a dictionary, in which node identifiers (known as "tubid"
strings) are the keys, and the values are a dict with 'timestamp',
'nickname', and 'stats' keys. d[tubid][stats] will contain the stats
dictionary as made available at http://localhost:3456/statistics?t=json . The
file will only contain the most recent update from each node.

Other tools can be built to examine these stats and render them into
something useful. For example, a tool could sum the
``storage_server.disk_avail`` values from all servers to compute a
total-disk-available number for the entire grid (however, the "disk watcher"
daemon, in misc/operations_helpers/spacetime/, is better suited for this
specific task).
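A minimal sketch of such a tool (hypothetical; it assumes each per-node entry in ``stats.json`` nests the web-style stats dictionary under its ``stats`` key, as described above)::

    import json

    def total_disk_avail(stats_path):
        """Sum storage_server.disk_avail across every node in stats.json."""
        with open(stats_path) as f:
            gathered = json.load(f)  # keyed by tubid
        return sum(
            node.get("stats", {}).get("stats", {}).get("storage_server.disk_avail", 0)
            for node in gathered.values()
        )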
Using Munin To Graph Stats Values
=================================
@@ -201,9 +201,8 @@ log_gatherer.furl = {log_furl}
    with open(join(intro_dir, 'tahoe.cfg'), 'w') as f:
        f.write(config)

    # on windows, "tahoe start" means: run forever in the foreground,
    # but on linux it means daemonize. "tahoe run" is consistent
    # between platforms.
    # "tahoe run" is consistent across Linux/macOS/Windows, unlike the old
    # "start" command.
    protocol = _MagicTextProtocol('introducer running')
    transport = _tahoe_runner_optional_coverage(
        protocol,

@@ -278,9 +277,8 @@ log_gatherer.furl = {log_furl}
    with open(join(intro_dir, 'tahoe.cfg'), 'w') as f:
        f.write(config)

    # on windows, "tahoe start" means: run forever in the foreground,
    # but on linux it means daemonize. "tahoe run" is consistent
    # between platforms.
    # "tahoe run" is consistent across Linux/macOS/Windows, unlike the old
    # "start" command.
    protocol = _MagicTextProtocol('introducer running')
    transport = _tahoe_runner_optional_coverage(
        protocol,
(File diff suppressed because it is too large.)
@@ -1,7 +1,6 @@
from __future__ import print_function

import sys
from os import mkdir
from os.path import join

import pytest

@@ -9,6 +8,14 @@ import pytest_twisted

import util

from twisted.python.filepath import (
    FilePath,
)

from allmydata.test.common import (
    write_introducer,
)

# see "conftest.py" for the fixtures (e.g. "tor_network")

# XXX: Integration tests that involve Tor do not run reliably on

@@ -66,12 +73,12 @@ def test_onion_service_storage(reactor, request, temp_dir, flog_gatherer, tor_ne

@pytest_twisted.inlineCallbacks
def _create_anonymous_node(reactor, name, control_port, request, temp_dir, flog_gatherer, tor_network, introducer_furl):
    node_dir = join(temp_dir, name)
    node_dir = FilePath(temp_dir).child(name)
    web_port = "tcp:{}:interface=localhost".format(control_port + 2000)

    if True:
        print("creating", node_dir)
        mkdir(node_dir)
        print("creating", node_dir.path)
        node_dir.makedirs()
        proto = util._DumpOutputProtocol(None)
        reactor.spawnProcess(
            proto,

@@ -84,12 +91,15 @@ def _create_anonymous_node(reactor, name, control_port, request, temp_dir, flog_
            '--hide-ip',
            '--tor-control-port', 'tcp:localhost:{}'.format(control_port),
            '--listen', 'tor',
            node_dir,
            node_dir.path,
        )
    )
    yield proto.done

    with open(join(node_dir, 'tahoe.cfg'), 'w') as f:

    # Which services should this client connect to?
    write_introducer(node_dir, "default", introducer_furl)
    with node_dir.child('tahoe.cfg').open('w') as f:
        f.write('''
[node]
nickname = %(name)s

@@ -105,15 +115,12 @@ onion = true
onion.private_key_file = private/tor_onion.privkey

[client]
# Which services should this client connect to?
introducer.furl = %(furl)s
shares.needed = 1
shares.happy = 1
shares.total = 2

''' % {
    'name': name,
    'furl': introducer_furl,
    'web_port': web_port,
    'log_furl': flog_gatherer,
    'control_port': control_port,

@@ -121,5 +128,5 @@ shares.total = 2
})

    print("running")
    yield util._run_node(reactor, node_dir, request, None)
    yield util._run_node(reactor, node_dir.path, request, None)
    print("okay, launched")
@@ -6,6 +6,9 @@ from os.path import exists, join
from six.moves import StringIO
from functools import partial

from twisted.python.filepath import (
    FilePath,
)
from twisted.internet.defer import Deferred, succeed
from twisted.internet.protocol import ProcessProtocol
from twisted.internet.error import ProcessExitedAlready, ProcessDone

@@ -186,10 +189,8 @@ def _run_node(reactor, node_dir, request, magic_text):
        magic_text = "client running"
    protocol = _MagicTextProtocol(magic_text)

    # on windows, "tahoe start" means: run forever in the foreground,
    # but on linux it means daemonize. "tahoe run" is consistent
    # between platforms.

    # "tahoe run" is consistent across Linux/macOS/Windows, unlike the old
    # "start" command.
    transport = _tahoe_runner_optional_coverage(
        protocol,
        reactor,

@@ -263,7 +264,7 @@ def _create_node(reactor, request, temp_dir, introducer_furl, flog_gatherer, nam
            u'log_gatherer.furl',
            flog_gatherer.decode("utf-8"),
        )
        write_config(config_path, config)
        write_config(FilePath(config_path), config)
        created_d.addCallback(created)

    d = Deferred()
@@ -11,8 +11,12 @@ umids = {}

for starting_point in sys.argv[1:]:
    for root, dirs, files in os.walk(starting_point):
        for fn in [f for f in files if f.endswith(".py")]:
            fn = os.path.join(root, fn)
        for f in files:
            if not f.endswith(".py"):
                continue
            if f == "check-debugging.py":
                continue
            fn = os.path.join(root, f)
            for lineno,line in enumerate(open(fn, "r").readlines()):
                lineno = lineno+1
                mo = re.search(r"\.setDebugging\(True\)", line)
 newsfragments/3503.other (new file) | 1

    The specification section of the Tahoe-LAFS documentation now includes explicit discussion of the security properties of Foolscap "fURLs" on which it depends.

 newsfragments/3504.configuration (new file) | 1

    The ``[client]introducer.furl`` configuration item is now deprecated in favor of the ``private/introducers.yaml`` file.

 newsfragments/3511.minor, 3514.minor, 3515.minor, 3520.minor, 3521.minor, 3522.minor, 3523.minor, 3524.minor, 3532.minor, 3533.minor (new empty files) | 0

 newsfragments/3539.bugfix (new file) | 1

    Certain implementation-internal weakref KeyErrors are now handled and should no longer cause user-initiated operations to fail.

 newsfragments/3542.minor, 3544.minor (new empty files) | 0

 newsfragments/3545.other (new file) | 1

    The README, revised by Viktoriia with feedback from the team, is now more focused on the developer community and provides more information about Tahoe-LAFS, why it's important, and how someone can use it or start contributing to it.

 newsfragments/3546.minor, 3547.minor (new empty files) | 0

 newsfragments/3549.removed (new file) | 1

    The stats gatherer, broken since at least Tahoe-LAFS 1.13.0, has been removed. The ``[client]stats_gatherer.furl`` configuration item in ``tahoe.cfg`` is no longer allowed. The Tahoe-LAFS project recommends using a third-party metrics aggregation tool instead.

 newsfragments/3550.removed (new file) | 1

    The deprecated ``tahoe`` start, restart, stop, and daemonize sub-commands have been removed.

 newsfragments/3551.minor, 3552.minor, 3553.minor, 3555.minor, 3557.minor, 3558.minor, 3560.minor, 3564.minor, 3565.minor, 3567.minor, 3568.minor, 3572.minor (new empty files) | 0
@ -23,21 +23,12 @@ python.pkgs.buildPythonPackage rec {
|
||||
# This list is over-zealous because it's more work to disable individual
|
||||
# tests with in a module.
|
||||
|
||||
# test_system is a lot of integration-style tests that do a lot of real
|
||||
# networking between many processes. They sometimes fail spuriously.
|
||||
rm src/allmydata/test/test_system.py
|
||||
|
||||
# Many of these tests don't properly skip when i2p or tor dependencies are
|
||||
# not supplied (and we are not supplying them).
|
||||
rm src/allmydata/test/test_i2p_provider.py
|
||||
rm src/allmydata/test/test_connections.py
|
||||
rm src/allmydata/test/cli/test_create.py
|
||||
rm src/allmydata/test/test_client.py
|
||||
rm src/allmydata/test/test_runner.py
|
||||
|
||||
# Some eliot code changes behavior based on whether stdout is a tty or not
|
||||
# and fails when it is not.
|
||||
rm src/allmydata/test/test_eliotutil.py
|
||||
'';
|
||||
|
||||
|
||||
|
4
setup.py
4
setup.py
@ -111,7 +111,9 @@ install_requires = [
|
||||
|
||||
# Eliot is contemplating dropping Python 2 support. Stick to a version we
|
||||
# know works on Python 2.7.
|
||||
"eliot ~= 1.7",
|
||||
"eliot ~= 1.7 ; python_version < '3.0'",
|
||||
# On Python 3, we want a new enough version to support custom JSON encoders.
|
||||
"eliot >= 1.13.0 ; python_version > '3.0'",
|
||||
|
||||
# Pyrsistent 0.17.0 (which we use by way of Eliot) has dropped
|
||||
# Python 2 entirely; stick to the version known to work for us.
|
||||
|
@ -1,8 +1,8 @@
|
||||
from past.builtins import unicode
|
||||
|
||||
import os, stat, time, weakref
|
||||
from base64 import urlsafe_b64encode
|
||||
from functools import partial
|
||||
from errno import ENOENT, EPERM
|
||||
|
||||
# On Python 2 this will be the backported package:
|
||||
from configparser import NoSectionError
|
||||
|
||||
@ -84,7 +84,6 @@ _client_config = configutil.ValidConfiguration(
|
||||
"shares.happy",
|
||||
"shares.needed",
|
||||
"shares.total",
|
||||
"stats_gatherer.furl",
|
||||
"storage.plugins",
|
||||
),
|
||||
"ftpd": (
|
||||
@ -271,7 +270,7 @@ def create_client_from_config(config, _client_factory=None, _introducer_factory=
|
||||
|
||||
i2p_provider = create_i2p_provider(reactor, config)
|
||||
tor_provider = create_tor_provider(reactor, config)
|
||||
handlers = node.create_connection_handlers(reactor, config, i2p_provider, tor_provider)
|
||||
handlers = node.create_connection_handlers(config, i2p_provider, tor_provider)
|
||||
default_connection_handlers, foolscap_connection_handlers = handlers
|
||||
tub_options = node.create_tub_options(config)
|
||||
|
||||
@ -465,56 +464,17 @@ def create_introducer_clients(config, main_tub, _introducer_factory=None):
|
||||
# we return this list
|
||||
introducer_clients = []
|
||||
|
||||
introducers_yaml_filename = config.get_private_path("introducers.yaml")
|
||||
introducers_filepath = FilePath(introducers_yaml_filename)
|
||||
introducers = config.get_introducer_configuration()
|
||||
|
||||
try:
|
||||
with introducers_filepath.open() as f:
|
||||
introducers_yaml = yamlutil.safe_load(f)
|
||||
if introducers_yaml is None:
|
||||
raise EnvironmentError(
|
||||
EPERM,
|
||||
"Can't read '{}'".format(introducers_yaml_filename),
|
||||
introducers_yaml_filename,
|
||||
)
|
||||
introducers = introducers_yaml.get("introducers", {})
|
||||
log.msg(
|
||||
"found {} introducers in private/introducers.yaml".format(
|
||||
len(introducers),
|
||||
)
|
||||
)
|
||||
except EnvironmentError as e:
|
||||
if e.errno != ENOENT:
|
||||
raise
|
||||
introducers = {}
|
||||
|
||||
if "default" in introducers.keys():
|
||||
raise ValueError(
|
||||
"'default' introducer furl cannot be specified in introducers.yaml;"
|
||||
" please fix impossible configuration."
|
||||
)
|
||||
|
||||
# read furl from tahoe.cfg
|
||||
tahoe_cfg_introducer_furl = config.get_config("client", "introducer.furl", None)
|
||||
if tahoe_cfg_introducer_furl == "None":
|
||||
raise ValueError(
|
||||
"tahoe.cfg has invalid 'introducer.furl = None':"
|
||||
" to disable it, use 'introducer.furl ='"
|
||||
" or omit the key entirely"
|
||||
)
|
||||
if tahoe_cfg_introducer_furl:
|
||||
introducers[u'default'] = {'furl':tahoe_cfg_introducer_furl}
|
||||
|
||||
for petname, introducer in introducers.items():
|
||||
introducer_cache_filepath = FilePath(config.get_private_path("introducer_{}_cache.yaml".format(petname)))
|
||||
for petname, (furl, cache_path) in introducers.items():
|
||||
ic = _introducer_factory(
|
||||
main_tub,
|
||||
introducer['furl'].encode("ascii"),
|
||||
furl.encode("ascii"),
|
||||
config.nickname,
|
||||
str(allmydata.__full_version__),
|
||||
str(_Client.OLDEST_SUPPORTED_VERSION),
|
||||
partial(_sequencer, config),
|
||||
introducer_cache_filepath,
|
||||
cache_path,
|
||||
)
|
||||
introducer_clients.append(ic)
|
||||
return introducer_clients
|
||||
@ -716,11 +676,7 @@ class _Client(node.Node, pollmixin.PollMixin):
|
||||
self.init_web(webport) # strports string
|
||||
|
||||
def init_stats_provider(self):
|
||||
gatherer_furl = self.config.get_config("client", "stats_gatherer.furl", None)
|
||||
if gatherer_furl:
|
||||
# FURLs should be bytes:
|
||||
gatherer_furl = gatherer_furl.encode("utf-8")
|
||||
self.stats_provider = StatsProvider(self, gatherer_furl)
|
||||
self.stats_provider = StatsProvider(self)
|
||||
self.stats_provider.setServiceParent(self)
|
||||
self.stats_provider.register_producer(self)
|
||||
|
||||
@ -728,10 +684,14 @@ class _Client(node.Node, pollmixin.PollMixin):
|
||||
return { 'node.uptime': time.time() - self.started_timestamp }
|
||||
|
||||
def init_secrets(self):
|
||||
lease_s = self.config.get_or_create_private_config("secret", _make_secret)
|
||||
# configs are always unicode
|
||||
def _unicode_make_secret():
|
||||
return unicode(_make_secret(), "ascii")
|
||||
lease_s = self.config.get_or_create_private_config(
|
||||
"secret", _unicode_make_secret).encode("utf-8")
|
||||
lease_secret = base32.a2b(lease_s)
|
||||
convergence_s = self.config.get_or_create_private_config('convergence',
|
||||
_make_secret)
|
||||
convergence_s = self.config.get_or_create_private_config(
|
||||
'convergence', _unicode_make_secret).encode("utf-8")
|
||||
self.convergence = base32.a2b(convergence_s)
|
||||
self._secret_holder = SecretHolder(lease_secret, self.convergence)
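The shape of the change above is "store text, use bytes": private config values are unicode on disk, and callers re-encode before base32-decoding. A standalone sketch of that round trip, with base64's b32 functions standing in for the project's base32 helper:

import os
from base64 import b32encode, b32decode

def _unicode_make_secret():
    # analogue of the real helper: generate bytes, return text
    return b32encode(os.urandom(16)).decode("ascii")

secret_text = _unicode_make_secret()        # what lands in private/secret
secret_bytes = secret_text.encode("utf-8")  # encode before decoding base32
assert len(b32decode(secret_bytes)) == 16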
@ -740,9 +700,11 @@ class _Client(node.Node, pollmixin.PollMixin):
# existing key
def _make_key():
private_key, _ = ed25519.create_signing_keypair()
return ed25519.string_from_signing_key(private_key) + b"\n"
# Config values are always unicode:
return unicode(ed25519.string_from_signing_key(private_key) + b"\n", "utf-8")

private_key_str = self.config.get_or_create_private_config("node.privkey", _make_key)
private_key_str = self.config.get_or_create_private_config(
"node.privkey", _make_key).encode("utf-8")
private_key, public_key = ed25519.signing_keypair_from_string(private_key_str)
public_key_str = ed25519.string_from_verifying_key(public_key)
self.config.write_config_file("node.pubkey", public_key_str + b"\n", "wb")

@ -752,7 +714,7 @@ class _Client(node.Node, pollmixin.PollMixin):
def get_long_nodeid(self):
# this matches what IServer.get_longname() says about us elsewhere
vk_string = ed25519.string_from_verifying_key(self._node_public_key)
return remove_prefix(vk_string, "pub-")
return remove_prefix(vk_string, b"pub-")

def get_long_tubid(self):
return idlib.nodeid_b2a(self.nodeid)

@ -936,10 +898,6 @@ class _Client(node.Node, pollmixin.PollMixin):
if helper_furl in ("None", ""):
helper_furl = None

# FURLs need to be bytes:
if helper_furl is not None:
helper_furl = helper_furl.encode("utf-8")

DEP = self.encoding_params
DEP["k"] = int(self.config.get_config("client", "shares.needed", DEP["k"]))
DEP["n"] = int(self.config.get_config("client", "shares.total", DEP["n"]))

@ -1092,7 +1050,7 @@ class _Client(node.Node, pollmixin.PollMixin):
if accountfile:
accountfile = self.config.get_config_path(accountfile)
accounturl = self.config.get_config("sftpd", "accounts.url", None)
sftp_portstr = self.config.get_config("sftpd", "port", "8022")
sftp_portstr = self.config.get_config("sftpd", "port", "tcp:8022")
pubkey_file = self.config.get_config("sftpd", "host_pubkey_file")
privkey_file = self.config.get_config("sftpd", "host_privkey_file")

@ -1,4 +1,16 @@
"""Directory Node implementation."""
"""Directory Node implementation.

Ported to Python 3.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

from future.utils import PY2
if PY2:
# Skip dict so it doesn't break things.
from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, list, object, range, str, max, min  # noqa: F401
from past.builtins import unicode

import time

@ -37,6 +49,8 @@ from eliot.twisted import (

NAME = Field.for_types(
u"name",
# Make sure this works on Python 2; with str, it gets Future str which
# breaks Eliot.
[unicode],
u"The name linking the parent to this node.",
)

@ -179,7 +193,7 @@ class Adder(object):
def modify(self, old_contents, servermap, first_time):
children = self.node._unpack_contents(old_contents)
now = time.time()
for (namex, (child, new_metadata)) in self.entries.iteritems():
for (namex, (child, new_metadata)) in list(self.entries.items()):
name = normalize(namex)
precondition(IFilesystemNode.providedBy(child), child)

@ -205,8 +219,8 @@ class Adder(object):
return new_contents

def _encrypt_rw_uri(writekey, rw_uri):
precondition(isinstance(rw_uri, str), rw_uri)
precondition(isinstance(writekey, str), writekey)
precondition(isinstance(rw_uri, bytes), rw_uri)
precondition(isinstance(writekey, bytes), writekey)

salt = hashutil.mutable_rwcap_salt_hash(rw_uri)
key = hashutil.mutable_rwcap_key_hash(salt, writekey)

@ -221,7 +235,7 @@ def _encrypt_rw_uri(writekey, rw_uri):
def pack_children(childrenx, writekey, deep_immutable=False):
# initial_children must have metadata (i.e. {} instead of None)
children = {}
for (namex, (node, metadata)) in childrenx.iteritems():
for (namex, (node, metadata)) in list(childrenx.items()):
precondition(isinstance(metadata, dict),
"directory creation requires metadata to be a dict, not None", metadata)
children[normalize(namex)] = (node, metadata)

@ -245,18 +259,19 @@ def _pack_normalized_children(children, writekey, deep_immutable=False):
If deep_immutable is True, I will require that all my children are deeply
immutable, and will raise a MustBeDeepImmutableError if not.
"""
precondition((writekey is None) or isinstance(writekey, str), writekey)
precondition((writekey is None) or isinstance(writekey, bytes), writekey)

has_aux = isinstance(children, AuxValueDict)
entries = []
for name in sorted(children.keys()):
assert isinstance(name, unicode)
assert isinstance(name, str)
entry = None
(child, metadata) = children[name]
child.raise_error()
if deep_immutable and not child.is_allowed_in_immutable_directory():
raise MustBeDeepImmutableError("child %s is not allowed in an immutable directory" %
quote_output(name, encoding='utf-8'), name)
raise MustBeDeepImmutableError(
"child %r is not allowed in an immutable directory" % (name,),
name)
if has_aux:
entry = children.get_aux(name)
if not entry:

@ -264,26 +279,26 @@ def _pack_normalized_children(children, writekey, deep_immutable=False):
assert isinstance(metadata, dict)
rw_uri = child.get_write_uri()
if rw_uri is None:
rw_uri = ""
assert isinstance(rw_uri, str), rw_uri
rw_uri = b""
assert isinstance(rw_uri, bytes), rw_uri

# should be prevented by MustBeDeepImmutableError check above
assert not (rw_uri and deep_immutable)

ro_uri = child.get_readonly_uri()
if ro_uri is None:
ro_uri = ""
assert isinstance(ro_uri, str), ro_uri
ro_uri = b""
assert isinstance(ro_uri, bytes), ro_uri
if writekey is not None:
writecap = netstring(_encrypt_rw_uri(writekey, rw_uri))
else:
writecap = ZERO_LEN_NETSTR
entry = "".join([netstring(name.encode("utf-8")),
entry = b"".join([netstring(name.encode("utf-8")),
netstring(strip_prefix_for_ro(ro_uri, deep_immutable)),
writecap,
netstring(json.dumps(metadata))])
netstring(json.dumps(metadata).encode("utf-8"))])
entries.append(netstring(entry))
return "".join(entries)
return b"".join(entries)

@implementer(IDirectoryNode, ICheckable, IDeepCheckable)
class DirectoryNode(object):
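For reference, the netstring() framing used throughout the packer is length ':' payload ','; a minimal bytes-only version behaves like this:

def netstring(payload):
    # bytes in, bytes out: b"3:abc," for b"abc"
    assert isinstance(payload, bytes), payload
    return b"%d:%s," % (len(payload), payload)

assert netstring(b"abc") == b"3:abc,"
assert netstring(b"") == b"0:,"  # the ZERO_LEN_NETSTR case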
@ -352,9 +367,9 @@ class DirectoryNode(object):
# cleartext. The 'name' is UTF-8 encoded, and should be normalized to NFC.
# The rwcapdata is formatted as:
# pack("16ss32s", iv, AES(H(writekey+iv), plaintext_rw_uri), mac)
assert isinstance(data, str), (repr(data), type(data))
assert isinstance(data, bytes), (repr(data), type(data))
# an empty directory is serialized as an empty string
if data == "":
if data == b"":
return AuxValueDict()
writeable = not self.is_readonly()
mutable = self.is_mutable()

@ -373,7 +388,7 @@ class DirectoryNode(object):
# Therefore we normalize names going both in and out of directories.
name = normalize(namex_utf8.decode("utf-8"))

rw_uri = ""
rw_uri = b""
if writeable:
rw_uri = self._decrypt_rwcapdata(rwcapdata)

@ -384,8 +399,8 @@ class DirectoryNode(object):
# ro_uri is treated in the same way for consistency.
# rw_uri and ro_uri will be either None or a non-empty string.

rw_uri = rw_uri.rstrip(' ') or None
ro_uri = ro_uri.rstrip(' ') or None
rw_uri = rw_uri.rstrip(b' ') or None
ro_uri = ro_uri.rstrip(b' ') or None

try:
child = self._create_and_validate_node(rw_uri, ro_uri, name)

@ -468,7 +483,7 @@ class DirectoryNode(object):
exists a child of the given name, False if not."""
name = normalize(namex)
d = self._read()
d.addCallback(lambda children: children.has_key(name))
d.addCallback(lambda children: name in children)
return d

def _get(self, children, name):

@ -543,7 +558,7 @@ class DirectoryNode(object):
else:
pathx = pathx.split("/")
for p in pathx:
assert isinstance(p, unicode), p
assert isinstance(p, str), p
childnamex = pathx[0]
remaining_pathx = pathx[1:]
if remaining_pathx:

@ -555,8 +570,8 @@ class DirectoryNode(object):
return d

def set_uri(self, namex, writecap, readcap=None, metadata=None, overwrite=True):
precondition(isinstance(writecap, (str,type(None))), writecap)
precondition(isinstance(readcap, (str,type(None))), readcap)
precondition(isinstance(writecap, (bytes, type(None))), writecap)
precondition(isinstance(readcap, (bytes, type(None))), readcap)

# We now allow packing unknown nodes, provided they are valid
# for this type of directory.

@ -569,16 +584,16 @@ class DirectoryNode(object):
# this takes URIs
a = Adder(self, overwrite=overwrite,
create_readonly_node=self._create_readonly_node)
for (namex, e) in entries.iteritems():
assert isinstance(namex, unicode), namex
for (namex, e) in entries.items():
assert isinstance(namex, str), namex
if len(e) == 2:
writecap, readcap = e
metadata = None
else:
assert len(e) == 3
writecap, readcap, metadata = e
precondition(isinstance(writecap, (str,type(None))), writecap)
precondition(isinstance(readcap, (str,type(None))), readcap)
precondition(isinstance(writecap, (bytes,type(None))), writecap)
precondition(isinstance(readcap, (bytes,type(None))), readcap)

# We now allow packing unknown nodes, provided they are valid
# for this type of directory.

@ -779,7 +794,7 @@ class DirectoryNode(object):
# in the nodecache) seem to consume about 2000 bytes.
dirkids = []
filekids = []
for name, (child, metadata) in sorted(children.iteritems()):
for name, (child, metadata) in sorted(children.items()):
childpath = path + [name]
if isinstance(child, UnknownNode):
walker.add_node(child, childpath)

@ -1974,6 +1974,8 @@ class Dispatcher(object):

class SFTPServer(service.MultiService):
name = "frontend:sftp"

def __init__(self, client, accountfile, accounturl,
sftp_portstr, pubkey_file, privkey_file):
precondition(isinstance(accountfile, (unicode, type(None))), accountfile)

@ -1,3 +1,15 @@
"""
Ported to Python 3.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

from future.utils import PY2
if PY2:
from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401

from zope.interface import implementer
from twisted.internet import defer
from foolscap.api import DeadReferenceError, RemoteException

@ -9,6 +9,7 @@ from __future__ import unicode_literals
from future.utils import PY2
if PY2:
from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401
from six import ensure_str

import time
now = time.time

@ -98,7 +99,7 @@ class ShareFinder(object):

# internal methods
def loop(self):
pending_s = ",".join([rt.server.get_name()
pending_s = ",".join([ensure_str(rt.server.get_name())
for rt in self.pending_requests]) # sort?
self.log(format="ShareFinder loop: running=%(running)s"
" hungry=%(hungry)s, pending=%(pending)s",

@ -1,3 +1,15 @@
"""
Ported to Python 3.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

from future.utils import PY2
if PY2:
from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401

from zope.interface import implementer
from twisted.internet import defer
from allmydata.storage.server import si_b2a

@ -11,6 +11,7 @@ from future.utils import PY2, native_str
if PY2:
from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401
from past.builtins import long, unicode
from six import ensure_str

import os, time, weakref, itertools
from zope.interface import implementer

@ -1828,7 +1829,7 @@ class Uploader(service.MultiService, log.PrefixingLogMixin):
def startService(self):
service.MultiService.startService(self)
if self._helper_furl:
self.parent.tub.connectTo(self._helper_furl,
self.parent.tub.connectTo(ensure_str(self._helper_furl),
self._got_helper)

def _got_helper(self, helper):

@ -521,7 +521,6 @@ class IStorageBroker(Interface):
oldest_supported: the peer's oldest supported version, same

rref: the RemoteReference, if connected, otherwise None
remote_host: the IAddress, if connected, otherwise None

This method is intended for monitoring interfaces, such as a web page
that describes connecting and connected peers.

@ -2921,38 +2920,6 @@ class RIHelper(RemoteInterface):
return (UploadResults, ChoiceOf(RICHKUploadHelper, None))


class RIStatsProvider(RemoteInterface):
__remote_name__ = native_str("RIStatsProvider.tahoe.allmydata.com")
"""
Provides access to statistics and monitoring information.
"""

def get_stats():
"""
returns a dictionary containing 'counters' and 'stats', each a
dictionary with string counter/stat name keys, and numeric or None values.
counters are monotonically increasing measures of work done, and
stats are instantaneous measures (potentially time averaged
internally)
"""
return DictOf(bytes, DictOf(bytes, ChoiceOf(float, int, long, None)))


class RIStatsGatherer(RemoteInterface):
__remote_name__ = native_str("RIStatsGatherer.tahoe.allmydata.com")
"""
Provides a monitoring service for centralised collection of stats
"""

def provide(provider=RIStatsProvider, nickname=bytes):
"""
@param provider: a stats collector instance that should be polled
periodically by the gatherer to collect stats.
@param nickname: a name useful to identify the provided client
"""
return None


class IStatsProducer(Interface):
def get_stats():
"""

@ -3163,3 +3130,24 @@ class IAnnounceableStorageServer(Interface):
:type: ``IReferenceable`` provider
"""
)


class IAddressFamily(Interface):
"""
Support for one specific address family.

This stretches the definition of address family to include things like Tor
and I2P.
"""
def get_listener():
"""
Return a string endpoint description or an ``IStreamServerEndpoint``.

This would be named ``get_server_endpoint`` if not for historical
reasons.
"""

def get_client_endpoint():
"""
Return an ``IStreamClientEndpoint``.
"""
@ -1,4 +1,17 @@
from past.builtins import unicode
"""
Ported to Python 3.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

from future.utils import PY2
if PY2:
from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401
from past.builtins import long

from six import ensure_text, ensure_str

import time
from zope.interface import implementer

@ -17,7 +30,7 @@ from allmydata.util.assertutil import precondition
class InvalidCacheError(Exception):
pass

V2 = "http://allmydata.org/tahoe/protocols/introducer/v2"
V2 = b"http://allmydata.org/tahoe/protocols/introducer/v2"

@implementer(RIIntroducerSubscriberClient_v2, IIntroducerClient)
class IntroducerClient(service.Service, Referenceable):

@ -28,18 +41,18 @@ class IntroducerClient(service.Service, Referenceable):
self._tub = tub
self.introducer_furl = introducer_furl

assert type(nickname) is unicode
assert isinstance(nickname, str)
self._nickname = nickname
self._my_version = my_version
self._oldest_supported = oldest_supported
self._sequencer = sequencer
self._cache_filepath = cache_filepath

self._my_subscriber_info = { "version": 0,
"nickname": self._nickname,
"app-versions": [],
"my-version": self._my_version,
"oldest-supported": self._oldest_supported,
self._my_subscriber_info = { b"version": 0,
b"nickname": self._nickname,
b"app-versions": [],
b"my-version": self._my_version,
b"oldest-supported": self._oldest_supported,
}

self._outbound_announcements = {} # not signed

@ -81,7 +94,7 @@ class IntroducerClient(service.Service, Referenceable):
def startService(self):
service.Service.startService(self)
self._introducer_error = None
rc = self._tub.connectTo(self.introducer_furl, self._got_introducer)
rc = self._tub.connectTo(ensure_str(self.introducer_furl), self._got_introducer)
self._introducer_reconnector = rc
def connect_failed(failure):
self.log("Initial Introducer connection failed: perhaps it's down",

@ -111,21 +124,26 @@ class IntroducerClient(service.Service, Referenceable):

def _save_announcements(self):
announcements = []
for _, value in self._inbound_announcements.items():
for value in self._inbound_announcements.values():
ann, key_s, time_stamp = value
# On Python 2, bytes strings are encoded into YAML Unicode strings.
# On Python 3, bytes are encoded as YAML bytes. To minimize
# changes, Python 3 for now ensures the same is true.
server_params = {
"ann" : ann,
"key_s" : key_s,
"key_s" : ensure_text(key_s),
}
announcements.append(server_params)
announcement_cache_yaml = yamlutil.safe_dump(announcements)
if isinstance(announcement_cache_yaml, str):
announcement_cache_yaml = announcement_cache_yaml.encode("utf-8")
self._cache_filepath.setContent(announcement_cache_yaml)

def _got_introducer(self, publisher):
self.log("connected to introducer, getting versions")
default = { "http://allmydata.org/tahoe/protocols/introducer/v1":
default = { b"http://allmydata.org/tahoe/protocols/introducer/v1":
{ },
"application-version": "unknown: no get_version()",
b"application-version": b"unknown: no get_version()",
}
d = add_version_to_remote_reference(publisher, default)
d.addCallback(self._got_versioned_introducer)
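A sketch of the normalise-then-dump step in _save_announcements, outside the class (the announcement contents are invented):

import yaml
from six import ensure_text

# One cached announcement; values are made up for illustration.
announcements = [{
    "ann": {"service-name": "storage"},
    "key_s": ensure_text(b"v0-abc123"),  # text keys/values dump portably
}]
blob = yaml.safe_dump(announcements)
if isinstance(blob, str):
    blob = blob.encode("utf-8")  # FilePath.setContent wants bytes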
@ -138,6 +156,7 @@ class IntroducerClient(service.Service, Referenceable):
def _got_versioned_introducer(self, publisher):
self.log("got introducer version: %s" % (publisher.version,))
# we require an introducer that speaks at least V2
assert all(type(V2) == type(v) for v in publisher.version)
if V2 not in publisher.version:
raise InsufficientVersionError("V2", publisher.version)
self._publisher = publisher

@ -161,8 +180,8 @@ class IntroducerClient(service.Service, Referenceable):
self._local_subscribers.append( (service_name,callback,args,kwargs) )
self._subscribed_service_names.add(service_name)
self._maybe_subscribe()
for index,(ann,key_s,when) in self._inbound_announcements.items():
precondition(isinstance(key_s, str), key_s)
for index,(ann,key_s,when) in list(self._inbound_announcements.items()):
precondition(isinstance(key_s, bytes), key_s)
servicename = index[0]
if servicename == service_name:
eventually(callback, key_s, ann, *args, **kwargs)

@ -206,7 +225,7 @@ class IntroducerClient(service.Service, Referenceable):
self._outbound_announcements[service_name] = ann_d

# publish all announcements with the new seqnum and nonce
for service_name,ann_d in self._outbound_announcements.items():
for service_name,ann_d in list(self._outbound_announcements.items()):
ann_d["seqnum"] = current_seqnum
ann_d["nonce"] = current_nonce
ann_t = sign_to_foolscap(ann_d, signing_key)

@ -218,7 +237,7 @@ class IntroducerClient(service.Service, Referenceable):
self.log("want to publish, but no introducer yet", level=log.NOISY)
return
# this re-publishes everything. The Introducer ignores duplicates
for ann_t in self._published_announcements.values():
for ann_t in list(self._published_announcements.values()):
self._debug_counts["outbound_message"] += 1
self._debug_outstanding += 1
d = self._publisher.callRemote("publish_v2", ann_t, self._canary)

@ -238,7 +257,7 @@ class IntroducerClient(service.Service, Referenceable):
# this might raise UnknownKeyError or bad-sig error
ann, key_s = unsign_from_foolscap(ann_t)
# key is "v0-base32abc123"
precondition(isinstance(key_s, str), key_s)
precondition(isinstance(key_s, bytes), key_s)
except BadSignature:
self.log("bad signature on inbound announcement: %s" % (ann_t,),
parent=lp, level=log.WEIRD, umid="ZAU15Q")

@ -248,7 +267,7 @@ class IntroducerClient(service.Service, Referenceable):
self._process_announcement(ann, key_s)

def _process_announcement(self, ann, key_s):
precondition(isinstance(key_s, str), key_s)
precondition(isinstance(key_s, bytes), key_s)
self._debug_counts["inbound_announcement"] += 1
service_name = str(ann["service-name"])
if service_name not in self._subscribed_service_names:

@ -257,8 +276,8 @@ class IntroducerClient(service.Service, Referenceable):
self._debug_counts["wrong_service"] += 1
return
# for ASCII values, simplejson might give us unicode *or* bytes
if "nickname" in ann and isinstance(ann["nickname"], str):
ann["nickname"] = unicode(ann["nickname"])
if "nickname" in ann and isinstance(ann["nickname"], bytes):
ann["nickname"] = str(ann["nickname"])
nick_s = ann.get("nickname",u"").encode("utf-8")
lp2 = self.log(format="announcement for nickname '%(nick)s', service=%(svc)s: %(ann)s",
nick=nick_s, svc=service_name, ann=ann, umid="BoKEag")

@ -266,11 +285,11 @@ class IntroducerClient(service.Service, Referenceable):
# how do we describe this node in the logs?
desc_bits = []
assert key_s
desc_bits.append("serverid=" + key_s[:20])
desc_bits.append(b"serverid=" + key_s[:20])
if "anonymous-storage-FURL" in ann:
tubid_s = get_tubid_string_from_ann(ann)
desc_bits.append("tubid=" + tubid_s[:8])
description = "/".join(desc_bits)
desc_bits.append(b"tubid=" + tubid_s[:8])
description = b"/".join(desc_bits)

# the index is used to track duplicates
index = (service_name, key_s)

@ -320,7 +339,7 @@ class IntroducerClient(service.Service, Referenceable):
self._deliver_announcements(key_s, ann)

def _deliver_announcements(self, key_s, ann):
precondition(isinstance(key_s, str), key_s)
precondition(isinstance(key_s, bytes), key_s)
service_name = str(ann["service-name"])
for (service_name2,cb,args,kwargs) in self._local_subscribers:
if service_name2 == service_name:

@ -1,18 +1,29 @@
"""
Ported to Python 3.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

from future.utils import PY2
if PY2:
from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401

import re
import json
from allmydata.crypto.util import remove_prefix
from allmydata.crypto import ed25519
from allmydata.util import base32, rrefutil
from allmydata.util import base32, rrefutil, jsonbytes as json


def get_tubid_string_from_ann(ann):
return get_tubid_string(str(ann.get("anonymous-storage-FURL")
or ann.get("FURL")))
furl = ann.get("anonymous-storage-FURL") or ann.get("FURL")
return get_tubid_string(furl)

def get_tubid_string(furl):
m = re.match(r'pb://(\w+)@', furl)
assert m
return m.group(1).lower()
return m.group(1).lower().encode("ascii")


def sign_to_foolscap(announcement, signing_key):

@ -1,3 +1,18 @@
"""
Ported to Python 3.
"""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

from future.utils import PY2
if PY2:
from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401
from past.builtins import long
from six import ensure_text

import time, os.path, textwrap
from zope.interface import implementer

@ -7,7 +22,7 @@ from twisted.python.failure import Failure
from foolscap.api import Referenceable
import allmydata
from allmydata import node
from allmydata.util import log, rrefutil
from allmydata.util import log, rrefutil, dictutil
from allmydata.util.i2p_provider import create as create_i2p_provider
from allmydata.util.tor_provider import create as create_tor_provider
from allmydata.introducer.interfaces import \

@ -55,7 +70,7 @@ def create_introducer(basedir=u"."):
i2p_provider = create_i2p_provider(reactor, config)
tor_provider = create_tor_provider(reactor, config)

default_connection_handlers, foolscap_connection_handlers = create_connection_handlers(reactor, config, i2p_provider, tor_provider)
default_connection_handlers, foolscap_connection_handlers = create_connection_handlers(config, i2p_provider, tor_provider)
tub_options = create_tub_options(config)

# we don't remember these because the Introducer doesn't make

@ -122,7 +137,7 @@ class _IntroducerNode(node.Node):

from allmydata.webish import IntroducerWebishServer
nodeurl_path = self.config.get_config_path(u"node.url")
config_staticdir = self.get_config("node", "web.static", "public_html").decode('utf-8')
config_staticdir = self.get_config("node", "web.static", "public_html")
staticdir = self.config.get_config_path(config_staticdir)
ws = IntroducerWebishServer(self, webport, nodeurl_path, staticdir)
ws.setServiceParent(self)

@ -133,8 +148,8 @@ class IntroducerService(service.MultiService, Referenceable):
# v1 is the original protocol, added in 1.0 (but only advertised starting
# in 1.3), removed in 1.12. v2 is the new signed protocol, added in 1.10
VERSION = { #"http://allmydata.org/tahoe/protocols/introducer/v1": { },
"http://allmydata.org/tahoe/protocols/introducer/v2": { },
"application-version": str(allmydata.__full_version__),
b"http://allmydata.org/tahoe/protocols/introducer/v2": { },
b"application-version": allmydata.__full_version__.encode("utf-8"),
}

def __init__(self):

@ -155,7 +170,7 @@ class IntroducerService(service.MultiService, Referenceable):
# 'subscriber_info' is a dict, provided directly by v2 clients. The
# expected keys are: version, nickname, app-versions, my-version,
# oldest-supported
self._subscribers = {}
self._subscribers = dictutil.UnicodeKeyDict({})

self._debug_counts = {"inbound_message": 0,
"inbound_duplicate": 0,

@ -179,7 +194,7 @@ class IntroducerService(service.MultiService, Referenceable):
def get_announcements(self):
"""Return a list of AnnouncementDescriptor for all announcements"""
announcements = []
for (index, (_, canary, ann, when)) in self._announcements.items():
for (index, (_, canary, ann, when)) in list(self._announcements.items()):
ad = AnnouncementDescriptor(when, index, canary, ann)
announcements.append(ad)
return announcements

@ -187,8 +202,8 @@ class IntroducerService(service.MultiService, Referenceable):
def get_subscribers(self):
"""Return a list of SubscriberDescriptor objects for all subscribers"""
s = []
for service_name, subscriptions in self._subscribers.items():
for rref,(subscriber_info,when) in subscriptions.items():
for service_name, subscriptions in list(self._subscribers.items()):
for rref,(subscriber_info,when) in list(subscriptions.items()):
# note that if the subscriber didn't do Tub.setLocation,
# tubid will be None. Also, subscribers do not tell us which
# pubkey they use; only publishers do that.

@ -279,6 +294,10 @@ class IntroducerService(service.MultiService, Referenceable):
def remote_subscribe_v2(self, subscriber, service_name, subscriber_info):
self.log("introducer: subscription[%s] request at %s"
% (service_name, subscriber), umid="U3uzLg")
service_name = ensure_text(service_name)
subscriber_info = dictutil.UnicodeKeyDict({
ensure_text(k): v for (k, v) in subscriber_info.items()
})
return self.add_subscriber(subscriber, service_name, subscriber_info)

def add_subscriber(self, subscriber, service_name, subscriber_info):

@ -301,6 +320,10 @@ class IntroducerService(service.MultiService, Referenceable):
subscribers.pop(subscriber, None)
subscriber.notifyOnDisconnect(_remove)

# Make sure types are correct:
for k in self._announcements:
assert isinstance(k[0], type(service_name))

# now tell them about any announcements they're interested in
announcements = set( [ ann_t
for idx,(ann_t,canary,ann,when)

@ -914,7 +914,7 @@ class Publish(object):

def log_goal(self, goal, message=""):
logmsg = [message]
for (shnum, server) in sorted([(s,p) for (p,s) in goal]):
for (shnum, server) in sorted([(s,p) for (p,s) in goal], key=lambda t: (id(t[0]), id(t[1]))):
logmsg.append("sh%d to [%s]" % (shnum, server.get_name()))
self.log("current goal: %s" % (", ".join(logmsg)), level=log.NOISY)
self.log("we are planning to push new seqnum=#%d" % self._new_seqnum,
@ -20,10 +20,17 @@ import re
import types
import errno
from base64 import b32decode, b32encode
from errno import ENOENT, EPERM
from warnings import warn

import attr

# On Python 2 this will be the backported package.
import configparser

from twisted.python.filepath import (
FilePath,
)
from twisted.python import log as twlog
from twisted.application import service
from twisted.python.failure import Failure

@ -36,6 +43,9 @@ from allmydata.util import fileutil, iputil
from allmydata.util.fileutil import abspath_expanduser_unicode
from allmydata.util.encodingutil import get_filesystem_encoding, quote_output
from allmydata.util import configutil
from allmydata.util.yamlutil import (
safe_load,
)

from . import (
__full_version__,

@ -190,25 +200,27 @@ def read_config(basedir, portnumfile, generated_files=[], _valid_config=None):
# canonicalize the portnum file
portnumfile = os.path.join(basedir, portnumfile)

# (try to) read the main config file
config_fname = os.path.join(basedir, "tahoe.cfg")
config_path = FilePath(basedir).child("tahoe.cfg")
try:
parser = configutil.get_config(config_fname)
config_str = config_path.getContent()
except EnvironmentError as e:
if e.errno != errno.ENOENT:
raise
# The file is missing, just create empty ConfigParser.
parser = configutil.get_config_from_string(u"")
config_str = u""
else:
config_str = config_str.decode("utf-8-sig")

configutil.validate_config(config_fname, parser, _valid_config)

# make sure we have a private configuration area
fileutil.make_dirs(os.path.join(basedir, "private"), 0o700)

return _Config(parser, portnumfile, basedir, config_fname)
return config_from_string(
basedir,
portnumfile,
config_str,
_valid_config,
config_path,
)
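The utf-8-sig decode above exists to tolerate a byte-order mark that Windows editors often prepend to tahoe.cfg; for example:

# "utf-8-sig" silently strips a leading BOM; plain "utf-8" keeps it,
# which would corrupt the first section header.
raw = b"\xef\xbb\xbf[node]\nnickname = example\n"
assert raw.decode("utf-8-sig") == "[node]\nnickname = example\n"
assert raw.decode("utf-8").startswith("\ufeff")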
def config_from_string(basedir, portnumfile, config_str, _valid_config=None):
def config_from_string(basedir, portnumfile, config_str, _valid_config=None, fpath=None):
"""
load and validate configuration from in-memory string
"""

@ -221,9 +233,19 @@ def config_from_string(basedir, portnumfile, config_str, _valid_config=None):
# load configuration from in-memory string
parser = configutil.get_config_from_string(config_str)

fname = "<in-memory>"
configutil.validate_config(fname, parser, _valid_config)
return _Config(parser, portnumfile, basedir, fname)
configutil.validate_config(
"<string>" if fpath is None else fpath.path,
parser,
_valid_config,
)

return _Config(
parser,
portnumfile,
basedir,
fpath,
_valid_config,
)


def _error_about_old_config_files(basedir, generated_files):

@ -251,6 +273,7 @@ def _error_about_old_config_files(basedir, generated_files):
raise e


@attr.s
class _Config(object):
"""
Manages configuration of a Tahoe 'node directory'.

@ -259,30 +282,47 @@ class _Config(object):
class; names and functionality have been kept the same while moving
the code. It probably makes sense for several of these APIs to
have better names.

:ivar ConfigParser config: The actual configuration values.

:ivar str portnum_fname: filename to use for the port-number file (a
relative path inside basedir).

:ivar str _basedir: path to our "node directory", inside which all
configuration is managed.

:ivar (FilePath|NoneType) config_path: The path actually used to create
the configparser (might be ``None`` if using in-memory data).

:ivar ValidConfiguration valid_config_sections: The validator for the
values in this configuration.
"""
config = attr.ib(validator=attr.validators.instance_of(configparser.ConfigParser))
portnum_fname = attr.ib()
_basedir = attr.ib(
converter=lambda basedir: abspath_expanduser_unicode(ensure_text(basedir)),
)
config_path = attr.ib(
validator=attr.validators.optional(
attr.validators.instance_of(FilePath),
),
)
valid_config_sections = attr.ib(
default=configutil.ValidConfiguration.everything(),
validator=attr.validators.instance_of(configutil.ValidConfiguration),
)

def __init__(self, configparser, portnum_fname, basedir, config_fname):
"""
:param configparser: a ConfigParser instance
@property
def nickname(self):
nickname = self.get_config("node", "nickname", u"<unspecified>")
assert isinstance(nickname, str)
return nickname

:param portnum_fname: filename to use for the port-number file
(a relative path inside basedir)

:param basedir: path to our "node directory", inside which all
configuration is managed

:param config_fname: the pathname actually used to create the
configparser (might be 'fake' if using in-memory data)
"""
self.portnum_fname = portnum_fname
self._basedir = abspath_expanduser_unicode(ensure_text(basedir))
self._config_fname = config_fname
self.config = configparser
self.nickname = self.get_config("node", "nickname", u"<unspecified>")
assert isinstance(self.nickname, str)

def validate(self, valid_config_sections):
configutil.validate_config(self._config_fname, self.config, valid_config_sections)
@property
def _config_fname(self):
if self.config_path is None:
return "<string>"
return self.config_path.path

def write_config_file(self, name, value, mode="w"):
"""

@ -327,6 +367,34 @@ class _Config(object):
)
return default

def set_config(self, section, option, value):
"""
Set a config option in a section and re-write the tahoe.cfg file

:param str section: The name of the section in which to set the
option.

:param str option: The name of the option to set.

:param str value: The value of the option.

:raise UnescapedHashError: If the option holds a fURL and there is a
``#`` in the value.
"""
if option.endswith(".furl") and "#" in value:
raise UnescapedHashError(section, option, value)

copied_config = configutil.copy_config(self.config)
configutil.set_config(copied_config, section, option, value)
configutil.validate_config(
self._config_fname,
copied_config,
self.valid_config_sections,
)
if self.config_path is not None:
configutil.write_config(self.config_path, copied_config)
self.config = copied_config
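A hedged usage sketch (section, option, and values are invented; config is assumed to be an existing _Config): because validation runs against a copy, a rejected value leaves the live configuration and the on-disk file untouched:

config.set_config("node", "nickname", "new-nick")
try:
    # any option ending in ".furl" rejects '#' before anything is written
    config.set_config("client", "helper.furl", "pb://tub@host:1/swiss#frag")
except UnescapedHashError:
    pass  # self.config still holds the previous value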
def get_config_from_file(self, name, required=False):
"""Get the (string) contents of a config file, or None if the file
did not exist. If required=True, raise an exception rather than

@ -419,6 +487,97 @@ class _Config(object):
os.path.join(self._basedir, *args)
)

def get_introducer_configuration(self):
"""
Get configuration for introducers.

:return {unicode: (unicode, FilePath)}: A mapping from introducer
petname to a tuple of the introducer's fURL and local cache path.
"""
introducers_yaml_filename = self.get_private_path("introducers.yaml")
introducers_filepath = FilePath(introducers_yaml_filename)

def get_cache_filepath(petname):
return FilePath(
self.get_private_path("introducer_{}_cache.yaml".format(petname)),
)

try:
with introducers_filepath.open() as f:
introducers_yaml = safe_load(f)
if introducers_yaml is None:
raise EnvironmentError(
EPERM,
"Can't read '{}'".format(introducers_yaml_filename),
introducers_yaml_filename,
)
introducers = {
petname: config["furl"]
for petname, config
in introducers_yaml.get("introducers", {}).items()
}
non_strs = list(
k
for k
in introducers.keys()
if not isinstance(k, str)
)
if non_strs:
raise TypeError(
"Introducer petnames {!r} should have been str".format(
non_strs,
),
)
non_strs = list(
v
for v
in introducers.values()
if not isinstance(v, str)
)
if non_strs:
raise TypeError(
"Introducer fURLs {!r} should have been str".format(
non_strs,
),
)
log.msg(
"found {} introducers in {!r}".format(
len(introducers),
introducers_yaml_filename,
)
)
except EnvironmentError as e:
if e.errno != ENOENT:
raise
introducers = {}

# support the deprecated [client]introducer.furl item in tahoe.cfg
tahoe_cfg_introducer_furl = self.get_config("client", "introducer.furl", None)
if tahoe_cfg_introducer_furl == "None":
raise ValueError(
"tahoe.cfg has invalid 'introducer.furl = None':"
" to disable it omit the key entirely"
)
if tahoe_cfg_introducer_furl:
warn(
"tahoe.cfg [client]introducer.furl is deprecated; "
"use private/introducers.yaml instead.",
category=DeprecationWarning,
stacklevel=-1,
)
if "default" in introducers:
raise ValueError(
"'default' introducer furl cannot be specified in tahoe.cfg and introducers.yaml;"
" please fix impossible configuration."
)
introducers['default'] = tahoe_cfg_introducer_furl

return {
petname: (furl, get_cache_filepath(petname))
for (petname, furl)
in introducers.items()
}
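A usage sketch of the returned mapping (petname invented; config assumed to be a _Config for an existing node directory); this is exactly the shape create_introducer_clients unpacks:

for petname, (furl, cache_path) in config.get_introducer_configuration().items():
    # e.g. petname "example-grid", furl "pb://...", cache_path a FilePath
    print(petname, furl, cache_path.path)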
def create_tub_options(config):
"""

@ -457,28 +616,20 @@ def _make_tcp_handler():
return default()


def create_connection_handlers(reactor, config, i2p_provider, tor_provider):
def create_default_connection_handlers(config, handlers):
"""
:returns: 2-tuple of default_connection_handlers, foolscap_connection_handlers
:return: A dictionary giving the default connection handlers. The keys
are strings like "tcp" and the values are strings like "tor" or
``None``.
"""
reveal_ip = config.get_config("node", "reveal-IP-address", True, boolean=True)

# We store handlers for everything. None means we were unable to
# create that handler, so hints which want it will be ignored.
handlers = foolscap_connection_handlers = {
"tcp": _make_tcp_handler(),
"tor": tor_provider.get_tor_handler(),
"i2p": i2p_provider.get_i2p_handler(),
}
log.msg(
format="built Foolscap connection handlers for: %(known_handlers)s",
known_handlers=sorted([k for k,v in handlers.items() if v]),
facility="tahoe.node",
umid="PuLh8g",
)

# then we remember the default mappings from tahoe.cfg
default_connection_handlers = {"tor": "tor", "i2p": "i2p"}
# Remember the default mappings from tahoe.cfg
default_connection_handlers = {
name: name
for name
in handlers
}
tcp_handler_name = config.get_config("connections", "tcp", "tcp").lower()
if tcp_handler_name == "disabled":
default_connection_handlers["tcp"] = None

@ -503,10 +654,35 @@ def create_connection_handlers(reactor, config, i2p_provider, tor_provider):

if not reveal_ip:
if default_connection_handlers.get("tcp") == "tcp":
raise PrivacyError("tcp = tcp, must be set to 'tor' or 'disabled'")
return default_connection_handlers, foolscap_connection_handlers
raise PrivacyError(
"Privacy requested with `reveal-IP-address = false` "
"but `tcp = tcp` conflicts with this.",
)
return default_connection_handlers


def create_connection_handlers(config, i2p_provider, tor_provider):
"""
:returns: 2-tuple of default_connection_handlers, foolscap_connection_handlers
"""
# We store handlers for everything. None means we were unable to
# create that handler, so hints which want it will be ignored.
handlers = {
"tcp": _make_tcp_handler(),
"tor": tor_provider.get_tor_handler(),
"i2p": i2p_provider.get_i2p_handler(),
}
log.msg(
format="built Foolscap connection handlers for: %(known_handlers)s",
known_handlers=sorted([k for k,v in handlers.items() if v]),
facility="tahoe.node",
umid="PuLh8g",
)
return create_default_connection_handlers(
config,
handlers,
), handlers
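Roughly, the split leaves two dictionaries: the Foolscap handlers (hint type to handler object, None when a backend is unavailable) and the default mapping (hint type to handler name). A toy illustration with stand-in values:

# Stand-in objects; real handlers come from foolscap/txtorcon/txi2p.
foolscap_connection_handlers = {
    "tcp": object(),  # _make_tcp_handler() in the real code
    "tor": None,      # txtorcon missing: tor hints will be ignored
    "i2p": None,
}
default_connection_handlers = {
    name: name for name in foolscap_connection_handlers
}
# and tahoe.cfg [connections] tcp = disabled would then do:
default_connection_handlers["tcp"] = None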
def create_tub(tub_options, default_connection_handlers, foolscap_connection_handlers,
handler_overrides={}, **kwargs):

@ -546,8 +722,21 @@ def _convert_tub_port(s):
return us


def _tub_portlocation(config):
class PortAssignmentRequired(Exception):
"""
A Tub port number was configured to be 0 where this is not allowed.
"""


def _tub_portlocation(config, get_local_addresses_sync, allocate_tcp_port):
"""
Figure out the network location of the main tub for some configuration.

:param get_local_addresses_sync: A function like
``iputil.get_local_addresses_sync``.

:param allocate_tcp_port: A function like ``iputil.allocate_tcp_port``.

:returns: None or tuple of (port, location) for the main tub based
on the given configuration. May raise ValueError or PrivacyError
if there are problems with the config

@ -587,7 +776,7 @@ def _tub_portlocation(config):
file_tubport = fileutil.read(config.portnum_fname).strip()
tubport = _convert_tub_port(file_tubport)
else:
tubport = "tcp:%d" % iputil.allocate_tcp_port()
tubport = "tcp:%d" % (allocate_tcp_port(),)
fileutil.write_atomically(config.portnum_fname, tubport + "\n",
mode="")
else:

@ -595,7 +784,7 @@ def _tub_portlocation(config):

for port in tubport.split(","):
if port in ("0", "tcp:0"):
raise ValueError("tub.port cannot be 0: you must choose")
raise PortAssignmentRequired()

if cfg_location is None:
cfg_location = "AUTO"

@ -607,7 +796,7 @@ def _tub_portlocation(config):
if "AUTO" in split_location:
if not reveal_ip:
raise PrivacyError("tub.location uses AUTO")
local_addresses = iputil.get_local_addresses_sync()
local_addresses = get_local_addresses_sync()
# tubport must be like "tcp:12345" or "tcp:12345:morestuff"
local_portnum = int(tubport.split(":")[1])
new_locations = []

@ -638,6 +827,33 @@ def _tub_portlocation(config):
return tubport, location
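Because the two iputil helpers are now parameters, a test can drive the function with fakes (the address and port below are invented):

portlocation = _tub_portlocation(
    config,  # assumed: a _Config whose tahoe.cfg sets tub.port/tub.location
    get_local_addresses_sync=lambda: ["192.0.2.7"],
    allocate_tcp_port=lambda: 45678,
)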
|
||||
|
||||
|
||||
def tub_listen_on(i2p_provider, tor_provider, tub, tubport, location):
    """
    Assign a Tub its listener locations.

    :param i2p_provider: See ``allmydata.util.i2p_provider.create``.
    :param tor_provider: See ``allmydata.util.tor_provider.create``.
    """
    for port in tubport.split(","):
        if port == "listen:i2p":
            # the I2P provider will read its section of tahoe.cfg and
            # return either a fully-formed Endpoint, or a descriptor
            # that will create one, so we don't have to stuff all the
            # options into the tub.port string (which would need a lot
            # of escaping)
            port_or_endpoint = i2p_provider.get_listener()
        elif port == "listen:tor":
            port_or_endpoint = tor_provider.get_listener()
        else:
            port_or_endpoint = port
        # Foolscap requires native strings:
        if isinstance(port_or_endpoint, (bytes, str)):
            port_or_endpoint = ensure_str(port_or_endpoint)
        tub.listenOn(port_or_endpoint)
    # This last step makes the Tub ready for tub.registerReference()
    tub.setLocation(location)

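Because ``tub.port`` may hold a comma-separated list, one Tub can listen on several transports at once. A small illustrative sketch of the split performed above (the values are made up):

    tubport = "tcp:3456,listen:tor"
    for port in tubport.split(","):
        if port in ("listen:i2p", "listen:tor"):
            print(port, "-> endpoint supplied by the matching provider")
        else:
            print(port, "-> endpoint string handed to tub.listenOn()")
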
def create_main_tub(config, tub_options,
                    default_connection_handlers, foolscap_connection_handlers,
                    i2p_provider, tor_provider,
@ -662,36 +878,34 @@ def create_main_tub(config, tub_options,
    :param tor_provider: None, or a _Provider instance if txtorcon +
        Tor are installed.
    """
    portlocation = _tub_portlocation(config)
    portlocation = _tub_portlocation(
        config,
        iputil.get_local_addresses_sync,
        iputil.allocate_tcp_port,
    )

    certfile = config.get_private_path("node.pem") # FIXME? "node.pem" was the CERTFILE option/thing
    tub = create_tub(tub_options, default_connection_handlers, foolscap_connection_handlers,
                     handler_overrides=handler_overrides, certFile=certfile)
    # FIXME? "node.pem" was the CERTFILE option/thing
    certfile = config.get_private_path("node.pem")

    if portlocation:
        tubport, location = portlocation
        for port in tubport.split(","):
            if port == "listen:i2p":
                # the I2P provider will read its section of tahoe.cfg and
                # return either a fully-formed Endpoint, or a descriptor
                # that will create one, so we don't have to stuff all the
                # options into the tub.port string (which would need a lot
                # of escaping)
                port_or_endpoint = i2p_provider.get_listener()
            elif port == "listen:tor":
                port_or_endpoint = tor_provider.get_listener()
            else:
                port_or_endpoint = port
            # Foolscap requires native strings:
            if isinstance(port_or_endpoint, (bytes, str)):
                port_or_endpoint = ensure_str(port_or_endpoint)
            tub.listenOn(port_or_endpoint)
        tub.setLocation(location)
        log.msg("Tub location set to %s" % (location,))
        # the Tub is now ready for tub.registerReference()
    else:
    tub = create_tub(
        tub_options,
        default_connection_handlers,
        foolscap_connection_handlers,
        handler_overrides=handler_overrides,
        certFile=certfile,
    )
    if portlocation is None:
        log.msg("Tub is not listening")

    else:
        tubport, location = portlocation
        tub_listen_on(
            i2p_provider,
            tor_provider,
            tub,
            tubport,
            location,
        )
        log.msg("Tub location set to %s" % (location,))
    return tub


@ -1,3 +1,15 @@
"""
Ported to Python 3.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

from future.utils import PY2
if PY2:
    from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401

import weakref
from zope.interface import implementer
from allmydata.util.assertutil import precondition
@ -66,9 +78,9 @@ class NodeMaker(object):
            memokey = b"I" + bigcap
        else:
            memokey = b"M" + bigcap
        if memokey in self._node_cache:
        try:
            node = self._node_cache[memokey]
        else:
        except KeyError:
            cap = uri.from_string(bigcap, deep_immutable=deep_immutable,
                                  name=name)
            node = self._create_from_single_cap(cap)
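The switch from ``if memokey in self._node_cache`` to ``try``/``except KeyError`` is significant if the cache holds weak references (the ``import weakref`` above suggests it does): an entry can be garbage-collected between the membership test and the lookup, so catching ``KeyError`` is the race-free form. A minimal standalone sketch of the hazard, with stand-in names:

    import weakref

    class Node(object):
        pass

    cache = weakref.WeakValueDictionary()
    key = b"M" + b"fake-cap"
    cache[key] = node = Node()
    del node                 # the only strong reference is gone
    try:
        hit = cache[key]     # EAFP: tolerates the entry vanishing between
    except KeyError:         # an earlier "in" check and this lookup
        hit = "create the node afresh"
    print(hit)
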
@ -126,7 +138,7 @@ class NodeMaker(object):

    def create_new_mutable_directory(self, initial_children={}, version=None):
        # initial_children must have metadata (i.e. {} instead of None)
        for (name, (node, metadata)) in initial_children.iteritems():
        for (name, (node, metadata)) in initial_children.items():
            precondition(isinstance(metadata, dict),
                         "create_new_mutable_directory requires metadata to be a dict, not None", metadata)
            node.raise_error()

@ -10,14 +10,15 @@ try:
except ImportError:
    pass

from yaml import (
    safe_dump,
)

# Python 2 compatibility
from future.utils import PY2
if PY2:
    from future.builtins import str  # noqa: F401

# On Python 2 this will be the backported package:
from configparser import NoSectionError

from twisted.python import usage

from allmydata.util.assertutil import precondition
@ -42,7 +43,7 @@ class BaseOptions(usage.Options):
        super(BaseOptions, self).__init__()
        self.command_name = os.path.basename(sys.argv[0])

    # Only allow "tahoe --version", not e.g. "tahoe start --version"
    # Only allow "tahoe --version", not e.g. "tahoe <cmd> --version"
    def opt_version(self):
        raise usage.UsageError("--version not allowed on subcommands")

@ -121,24 +122,42 @@ class NoDefaultBasedirOptions(BasedirOptions):
DEFAULT_ALIAS = u"tahoe"


def write_introducer(basedir, petname, furl):
    """
    Overwrite the node's ``introducers.yaml`` with a file containing the given
    introducer information.
    """
    if isinstance(furl, bytes):
        furl = furl.decode("utf-8")
    basedir.child(b"private").child(b"introducers.yaml").setContent(
        safe_dump({
            "introducers": {
                petname: {
                    "furl": furl,
                },
            },
        }).encode("ascii"),
    )

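For a petname of ``default``, the file written by ``write_introducer`` comes out shaped like this (the FURL is a made-up placeholder):

    introducers:
      default:
        furl: pb://tubid@tcp:introducer.example:12345/swissnum
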
def get_introducer_furl(nodedir, config):
    """
    :return: the introducer FURL for the given node (no matter if it's
        a client-type node or an introducer itself)
    """
    for petname, (furl, cache) in config.get_introducer_configuration().items():
        return furl

    # We have no configured introducers. Maybe this is running *on* the
    # introducer? Let's guess, sure why not.
    try:
        introducer_furl = config.get('client', 'introducer.furl')
    except NoSectionError:
        # we're not a client; maybe this is running *on* the introducer?
        try:
            with open(join(nodedir, "private", "introducer.furl"), "r") as f:
                introducer_furl = f.read().strip()
        except IOError:
            raise Exception(
                "Can't find introducer FURL in tahoe.cfg nor "
                "{}/private/introducer.furl".format(nodedir)
            )
        return introducer_furl
        with open(join(nodedir, "private", "introducer.furl"), "r") as f:
            return f.read().strip()
    except IOError:
        raise Exception(
            "Can't find introducer FURL in tahoe.cfg nor "
            "{}/private/introducer.furl".format(nodedir)
        )

def get_aliases(nodedir):

@ -10,11 +10,20 @@ except ImportError:

from twisted.internet import reactor, defer
from twisted.python.usage import UsageError
from allmydata.scripts.common import BasedirOptions, NoDefaultBasedirOptions
from twisted.python.filepath import (
    FilePath,
)

from allmydata.scripts.common import (
    BasedirOptions,
    NoDefaultBasedirOptions,
    write_introducer,
)
from allmydata.scripts.default_nodedir import _default_nodedir
from allmydata.util.assertutil import precondition
from allmydata.util.encodingutil import listdir_unicode, argv_to_unicode, quote_local_unicode_path, get_io_encoding
from allmydata.util import fileutil, i2p_provider, iputil, tor_provider

from wormhole import wormhole


@ -304,14 +313,16 @@ def write_node_config(c, config):


def write_client_config(c, config):
    # note, config can be a plain dict, it seems -- see
    # test_configutil.py in test_create_client_config
    introducer = config.get("introducer", None)
    if introducer is not None:
        write_introducer(
            FilePath(config["basedir"]),
            "default",
            introducer,
        )

    c.write("[client]\n")
    c.write("# Which services should this client connect to?\n")
    introducer = config.get("introducer", None) or ""
    c.write("introducer.furl = %s\n" % introducer)
    c.write("helper.furl =\n")
    c.write("#stats_gatherer.furl =\n")
    c.write("\n")
    c.write("# Encoding parameters this client will use for newly-uploaded files\n")
    c.write("# This can be changed at any time: the encoding is saved in\n")
@ -442,8 +453,11 @@ def create_node(config):

    print("Node created in %s" % quote_local_unicode_path(basedir), file=out)
    tahoe_cfg = quote_local_unicode_path(os.path.join(basedir, "tahoe.cfg"))
    introducers_yaml = quote_local_unicode_path(
        os.path.join(basedir, "private", "introducers.yaml"),
    )
    if not config.get("introducer", ""):
        print(" Please set [client]introducer.furl= in %s!" % tahoe_cfg, file=out)
        print(" Please add introducers to %s!" % (introducers_yaml,), file=out)
        print(" The node cannot connect to a grid without it.", file=out)
    if not config.get("nickname", ""):
        print(" Please set [node]nickname= in %s" % tahoe_cfg, file=out)

|
@ -1,263 +0,0 @@
|
||||
from __future__ import print_function
|
||||
|
||||
import os, sys
|
||||
from allmydata.scripts.common import BasedirOptions
|
||||
from twisted.scripts import twistd
|
||||
from twisted.python import usage
|
||||
from twisted.python.reflect import namedAny
|
||||
from twisted.internet.defer import maybeDeferred, fail
|
||||
from twisted.application.service import Service
|
||||
|
||||
from allmydata.scripts.default_nodedir import _default_nodedir
|
||||
from allmydata.util import fileutil
|
||||
from allmydata.node import read_config
|
||||
from allmydata.util.encodingutil import listdir_unicode, quote_local_unicode_path
|
||||
from allmydata.util.configutil import UnknownConfigError
|
||||
from allmydata.util.deferredutil import HookMixin
|
||||
|
||||
|
||||
def get_pidfile(basedir):
|
||||
"""
|
||||
Returns the path to the PID file.
|
||||
:param basedir: the node's base directory
|
||||
:returns: the path to the PID file
|
||||
"""
|
||||
return os.path.join(basedir, u"twistd.pid")
|
||||
|
||||
def get_pid_from_pidfile(pidfile):
|
||||
"""
|
||||
Tries to read and return the PID stored in the node's PID file
|
||||
(twistd.pid).
|
||||
:param pidfile: try to read this PID file
|
||||
:returns: A numeric PID on success, ``None`` if PID file absent or
|
||||
inaccessible, ``-1`` if PID file invalid.
|
||||
"""
|
||||
try:
|
||||
with open(pidfile, "r") as f:
|
||||
pid = f.read()
|
||||
except EnvironmentError:
|
||||
return None
|
||||
|
||||
try:
|
||||
pid = int(pid)
|
||||
except ValueError:
|
||||
return -1
|
||||
|
||||
return pid
|
||||
|
||||
def identify_node_type(basedir):
|
||||
"""
|
||||
:return unicode: None or one of: 'client', 'introducer',
|
||||
'key-generator' or 'stats-gatherer'
|
||||
"""
|
||||
tac = u''
|
||||
try:
|
||||
for fn in listdir_unicode(basedir):
|
||||
if fn.endswith(u".tac"):
|
||||
tac = fn
|
||||
break
|
||||
except OSError:
|
||||
return None
|
||||
|
||||
for t in (u"client", u"introducer", u"key-generator", u"stats-gatherer"):
|
||||
if t in tac:
|
||||
return t
|
||||
return None
|
||||
|
||||
|
||||
class RunOptions(BasedirOptions):
|
||||
optParameters = [
|
||||
("basedir", "C", None,
|
||||
"Specify which Tahoe base directory should be used."
|
||||
" This has the same effect as the global --node-directory option."
|
||||
" [default: %s]" % quote_local_unicode_path(_default_nodedir)),
|
||||
]
|
||||
|
||||
def parseArgs(self, basedir=None, *twistd_args):
|
||||
# This can't handle e.g. 'tahoe start --nodaemon', since '--nodaemon'
|
||||
# looks like an option to the tahoe subcommand, not to twistd. So you
|
||||
# can either use 'tahoe start' or 'tahoe start NODEDIR
|
||||
# --TWISTD-OPTIONS'. Note that 'tahoe --node-directory=NODEDIR start
|
||||
# --TWISTD-OPTIONS' also isn't allowed, unfortunately.
|
||||
|
||||
BasedirOptions.parseArgs(self, basedir)
|
||||
self.twistd_args = twistd_args
|
||||
|
||||
def getSynopsis(self):
|
||||
return ("Usage: %s [global-options] %s [options]"
|
||||
" [NODEDIR [twistd-options]]"
|
||||
% (self.command_name, self.subcommand_name))
|
||||
|
||||
def getUsage(self, width=None):
|
||||
t = BasedirOptions.getUsage(self, width) + "\n"
|
||||
twistd_options = str(MyTwistdConfig()).partition("\n")[2].partition("\n\n")[0]
|
||||
t += twistd_options.replace("Options:", "twistd-options:", 1)
|
||||
t += """
|
||||
|
||||
Note that if any twistd-options are used, NODEDIR must be specified explicitly
|
||||
(not by default or using -C/--basedir or -d/--node-directory), and followed by
|
||||
the twistd-options.
|
||||
"""
|
||||
return t
|
||||
|
||||
|
||||
class MyTwistdConfig(twistd.ServerOptions):
|
||||
subCommands = [("DaemonizeTahoeNode", None, usage.Options, "node")]
|
||||
|
||||
stderr = sys.stderr
|
||||
|
||||
|
||||
class DaemonizeTheRealService(Service, HookMixin):
|
||||
"""
|
||||
this HookMixin should really be a helper; our hooks:
|
||||
|
||||
- 'running': triggered when startup has completed; it triggers
|
||||
with None of successful or a Failure otherwise.
|
||||
"""
|
||||
stderr = sys.stderr
|
||||
|
||||
def __init__(self, nodetype, basedir, options):
|
||||
super(DaemonizeTheRealService, self).__init__()
|
||||
self.nodetype = nodetype
|
||||
self.basedir = basedir
|
||||
# setup for HookMixin
|
||||
self._hooks = {
|
||||
"running": None,
|
||||
}
|
||||
self.stderr = options.parent.stderr
|
||||
|
||||
def startService(self):
|
||||
|
||||
def key_generator_removed():
|
||||
return fail(ValueError("key-generator support removed, see #2783"))
|
||||
|
||||
def start():
|
||||
node_to_instance = {
|
||||
u"client": lambda: maybeDeferred(namedAny("allmydata.client.create_client"), self.basedir),
|
||||
u"introducer": lambda: maybeDeferred(namedAny("allmydata.introducer.server.create_introducer"), self.basedir),
|
||||
u"stats-gatherer": lambda: maybeDeferred(namedAny("allmydata.stats.StatsGathererService"), read_config(self.basedir, None), self.basedir, verbose=True),
|
||||
u"key-generator": key_generator_removed,
|
||||
}
|
||||
|
||||
try:
|
||||
service_factory = node_to_instance[self.nodetype]
|
||||
except KeyError:
|
||||
raise ValueError("unknown nodetype %s" % self.nodetype)
|
||||
|
||||
def handle_config_error(fail):
|
||||
if fail.check(UnknownConfigError):
|
||||
self.stderr.write("\nConfiguration error:\n{}\n\n".format(fail.value))
|
||||
else:
|
||||
self.stderr.write("\nUnknown error\n")
|
||||
fail.printTraceback(self.stderr)
|
||||
reactor.stop()
|
||||
|
||||
d = service_factory()
|
||||
|
||||
def created(srv):
|
||||
srv.setServiceParent(self.parent)
|
||||
d.addCallback(created)
|
||||
d.addErrback(handle_config_error)
|
||||
d.addBoth(self._call_hook, 'running')
|
||||
return d
|
||||
|
||||
from twisted.internet import reactor
|
||||
reactor.callWhenRunning(start)
|
||||
|
||||
|
||||
class DaemonizeTahoeNodePlugin(object):
|
||||
tapname = "tahoenode"
|
||||
def __init__(self, nodetype, basedir):
|
||||
self.nodetype = nodetype
|
||||
self.basedir = basedir
|
||||
|
||||
def makeService(self, so):
|
||||
return DaemonizeTheRealService(self.nodetype, self.basedir, so)
|
||||
|
||||
|
||||
def run(config):
|
||||
"""
|
||||
Runs a Tahoe-LAFS node in the foreground.
|
||||
|
||||
Sets up the IService instance corresponding to the type of node
|
||||
that's starting and uses Twisted's twistd runner to disconnect our
|
||||
process from the terminal.
|
||||
"""
|
||||
out = config.stdout
|
||||
err = config.stderr
|
||||
basedir = config['basedir']
|
||||
quoted_basedir = quote_local_unicode_path(basedir)
|
||||
print("'tahoe {}' in {}".format(config.subcommand_name, quoted_basedir), file=out)
|
||||
if not os.path.isdir(basedir):
|
||||
print("%s does not look like a directory at all" % quoted_basedir, file=err)
|
||||
return 1
|
||||
nodetype = identify_node_type(basedir)
|
||||
if not nodetype:
|
||||
print("%s is not a recognizable node directory" % quoted_basedir, file=err)
|
||||
return 1
|
||||
# Now prepare to turn into a twistd process. This os.chdir is the point
|
||||
# of no return.
|
||||
os.chdir(basedir)
|
||||
twistd_args = []
|
||||
if (nodetype in (u"client", u"introducer")
|
||||
and "--nodaemon" not in config.twistd_args
|
||||
and "--syslog" not in config.twistd_args
|
||||
and "--logfile" not in config.twistd_args):
|
||||
fileutil.make_dirs(os.path.join(basedir, u"logs"))
|
||||
twistd_args.extend(["--logfile", os.path.join("logs", "twistd.log")])
|
||||
twistd_args.extend(config.twistd_args)
|
||||
twistd_args.append("DaemonizeTahoeNode") # point at our DaemonizeTahoeNodePlugin
|
||||
|
||||
twistd_config = MyTwistdConfig()
|
||||
twistd_config.stdout = out
|
||||
twistd_config.stderr = err
|
||||
try:
|
||||
twistd_config.parseOptions(twistd_args)
|
||||
except usage.error as ue:
|
||||
# these arguments were unsuitable for 'twistd'
|
||||
print(config, file=err)
|
||||
print("tahoe %s: usage error from twistd: %s\n" % (config.subcommand_name, ue), file=err)
|
||||
return 1
|
||||
twistd_config.loadedPlugins = {"DaemonizeTahoeNode": DaemonizeTahoeNodePlugin(nodetype, basedir)}
|
||||
|
||||
# handle invalid PID file (twistd might not start otherwise)
|
||||
pidfile = get_pidfile(basedir)
|
||||
if get_pid_from_pidfile(pidfile) == -1:
|
||||
print("found invalid PID file in %s - deleting it" % basedir, file=err)
|
||||
os.remove(pidfile)
|
||||
|
||||
# On Unix-like platforms:
|
||||
# Unless --nodaemon was provided, the twistd.runApp() below spawns off a
|
||||
# child process, and the parent calls os._exit(0), so there's no way for
|
||||
# us to get control afterwards, even with 'except SystemExit'. If
|
||||
# application setup fails (e.g. ImportError), runApp() will raise an
|
||||
# exception.
|
||||
#
|
||||
# So if we wanted to do anything with the running child, we'd have two
|
||||
# options:
|
||||
#
|
||||
# * fork first, and have our child wait for the runApp() child to get
|
||||
# running. (note: just fork(). This is easier than fork+exec, since we
|
||||
# don't have to get PATH and PYTHONPATH set up, since we're not
|
||||
# starting a *different* process, just cloning a new instance of the
|
||||
# current process)
|
||||
# * or have the user run a separate command some time after this one
|
||||
# exits.
|
||||
#
|
||||
# For Tahoe, we don't need to do anything with the child, so we can just
|
||||
# let it exit.
|
||||
#
|
||||
# On Windows:
|
||||
# twistd does not fork; it just runs in the current process whether or not
|
||||
# --nodaemon is specified. (As on Unix, --nodaemon does have the side effect
|
||||
# of causing us to log to stdout/stderr.)
|
||||
|
||||
if "--nodaemon" in twistd_args or sys.platform == "win32":
|
||||
verb = "running"
|
||||
else:
|
||||
verb = "starting"
|
||||
|
||||
print("%s node in %s" % (verb, quoted_basedir), file=out)
|
||||
twistd.runApp(twistd_config)
|
||||
# we should only reach here if --nodaemon or equivalent was used
|
||||
return 0
|
@ -14,8 +14,7 @@ from twisted.internet import defer, task, threads

from allmydata.scripts.common import get_default_nodedir
from allmydata.scripts import debug, create_node, cli, \
    stats_gatherer, admin, tahoe_daemonize, tahoe_start, \
    tahoe_stop, tahoe_restart, tahoe_run, tahoe_invite
    admin, tahoe_run, tahoe_invite
from allmydata.util.encodingutil import quote_output, quote_local_unicode_path, get_io_encoding
from allmydata.util.eliotutil import (
    opt_eliot_destination,
@ -42,19 +41,11 @@ if _default_nodedir:

# XXX all this 'dispatch' stuff needs to be unified + fixed up
_control_node_dispatch = {
    "daemonize": tahoe_daemonize.daemonize,
    "start": tahoe_start.start,
    "run": tahoe_run.run,
    "stop": tahoe_stop.stop,
    "restart": tahoe_restart.restart,
}

process_control_commands = [
    ("run", None, tahoe_run.RunOptions, "run a node without daemonizing"),
    ("daemonize", None, tahoe_daemonize.DaemonizeOptions, "(deprecated) run a node in the background"),
    ("start", None, tahoe_start.StartOptions, "(deprecated) start a node in the background and confirm it started"),
    ("stop", None, tahoe_stop.StopOptions, "(deprecated) stop a node"),
    ("restart", None, tahoe_restart.RestartOptions, "(deprecated) restart a node"),
]  # type: SubCommands

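With the deprecated commands gone, process control reduces to a single entry, and dispatch stays a plain lookup from subcommand name to callable. A stand-in sketch of that pattern:

    # Stand-in for tahoe_run.run; the real table maps each subcommand
    # name to the function that implements it.
    def run(options):
        return 0

    _control_node_dispatch = {"run": run}
    rc = _control_node_dispatch["run"](object())  # dispatch by name
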
@ -65,7 +56,6 @@ class Options(usage.Options):
    stderr = sys.stderr

    subCommands = ( create_node.subCommands
                    + stats_gatherer.subCommands
                    + admin.subCommands
                    + process_control_commands
                    + debug.subCommands
@ -112,7 +102,7 @@ class Options(usage.Options):


create_dispatch = {}
for module in (create_node, stats_gatherer):
for module in (create_node,):
    create_dispatch.update(module.dispatch)  # type: ignore

def parse_options(argv, config=None):

@ -1,108 +0,0 @@
from __future__ import print_function

import os

try:
    from allmydata.scripts.types_ import SubCommands
except ImportError:
    pass

# Python 2 compatibility
from future.utils import PY2
if PY2:
    from future.builtins import str  # noqa: F401

from twisted.python import usage

from allmydata.scripts.common import NoDefaultBasedirOptions
from allmydata.scripts.create_node import write_tac
from allmydata.util.assertutil import precondition
from allmydata.util.encodingutil import listdir_unicode, quote_output
from allmydata.util import fileutil, iputil


class CreateStatsGathererOptions(NoDefaultBasedirOptions):
    subcommand_name = "create-stats-gatherer"
    optParameters = [
        ("hostname", None, None, "Hostname of this machine, used to build location"),
        ("location", None, None, "FURL connection hints, e.g. 'tcp:HOSTNAME:PORT'"),
        ("port", None, None, "listening endpoint, e.g. 'tcp:PORT'"),
    ]
    def postOptions(self):
        if self["hostname"] and (not self["location"]) and (not self["port"]):
            pass
        elif (not self["hostname"]) and self["location"] and self["port"]:
            pass
        else:
            raise usage.UsageError("You must provide --hostname, or --location and --port.")

    description = """
    Create a "stats-gatherer" service, which is a standalone process that
    collects and stores runtime statistics from many server nodes. This is a
    tool for operations personnel to keep track of free disk space, server
    load, and protocol activity, across a fleet of Tahoe storage servers.

    The "stats-gatherer" listens on a TCP port and publishes a Foolscap FURL
    by writing it into a file named "stats_gatherer.furl". You must copy this
    FURL into the servers' tahoe.cfg, as the [client] stats_gatherer.furl=
    entry. Those servers will then establish a connection to the
    stats-gatherer and publish their statistics on a periodic basis. The
    gatherer writes a summary JSON file out to disk after each update.

    The stats-gatherer listens on a configurable port, and writes a
    configurable hostname+port pair into the FURL that it publishes. There
    are two configuration modes you can use.

    * In the first, you provide --hostname=, and the service chooses its own
      TCP port number. If the host is named "example.org" and you provide
      --hostname=example.org, the node will pick a port number (e.g. 12345)
      and use location="tcp:example.org:12345" and port="tcp:12345".

    * In the second, you provide both --location= and --port=, and the
      service will refrain from doing any allocation of its own. --location=
      must be a Foolscap "FURL connection hint sequence", which is a
      comma-separated list of "tcp:HOSTNAME:PORTNUM" strings. --port= must be
      a Twisted server endpoint specification, which is generally
      "tcp:PORTNUM". So, if your host is named "example.org" and you want to
      use port 6789, you should provide --location=tcp:example.org:6789 and
      --port=tcp:6789. You are responsible for making sure --location= and
      --port= match each other.
    """


def create_stats_gatherer(config):
    err = config.stderr
    basedir = config['basedir']
    # This should always be called with an absolute Unicode basedir.
    precondition(isinstance(basedir, str), basedir)

    if os.path.exists(basedir):
        if listdir_unicode(basedir):
            print("The base directory %s is not empty." % quote_output(basedir), file=err)
            print("To avoid clobbering anything, I am going to quit now.", file=err)
            print("Please use a different directory, or empty this one.", file=err)
            return -1
        # we're willing to use an empty directory
    else:
        os.mkdir(basedir)
    write_tac(basedir, "stats-gatherer")
    if config["hostname"]:
        portnum = iputil.allocate_tcp_port()
        location = "tcp:%s:%d" % (config["hostname"], portnum)
        port = "tcp:%d" % portnum
    else:
        location = config["location"]
        port = config["port"]
    fileutil.write(os.path.join(basedir, "location"), location+"\n")
    fileutil.write(os.path.join(basedir, "port"), port+"\n")
    return 0


subCommands = [
    ("create-stats-gatherer", None, CreateStatsGathererOptions, "Create a stats-gatherer service."),
]  # type: SubCommands

dispatch = {
    "create-stats-gatherer": create_stats_gatherer,
}

@ -1,4 +1,5 @@
from __future__ import print_function
from __future__ import unicode_literals

import os.path
import codecs
@ -10,7 +11,7 @@ from allmydata import uri
from allmydata.scripts.common_http import do_http, check_http_error
from allmydata.scripts.common import get_aliases
from allmydata.util.fileutil import move_into_place
from allmydata.util.encodingutil import unicode_to_output, quote_output
from allmydata.util.encodingutil import quote_output, quote_output_u


def add_line_to_aliasfile(aliasfile, alias, cap):
@ -48,14 +49,13 @@ def add_alias(options):

    old_aliases = get_aliases(nodedir)
    if alias in old_aliases:
        print("Alias %s already exists!" % quote_output(alias), file=stderr)
        show_output(stderr, "Alias {alias} already exists!", alias=alias)
        return 1
    aliasfile = os.path.join(nodedir, "private", "aliases")
    cap = uri.from_string_dirnode(cap).to_string()

    add_line_to_aliasfile(aliasfile, alias, cap)

    print("Alias %s added" % quote_output(alias), file=stdout)
    show_output(stdout, "Alias {alias} added", alias=alias)
    return 0

def create_alias(options):
@ -75,7 +75,7 @@ def create_alias(options):

    old_aliases = get_aliases(nodedir)
    if alias in old_aliases:
        print("Alias %s already exists!" % quote_output(alias), file=stderr)
        show_output(stderr, "Alias {alias} already exists!", alias=alias)
        return 1

    aliasfile = os.path.join(nodedir, "private", "aliases")
@ -93,11 +93,51 @@ def create_alias(options):
    # probably check for others..

    add_line_to_aliasfile(aliasfile, alias, new_uri)

    print("Alias %s created" % (quote_output(alias),), file=stdout)
    show_output(stdout, "Alias {alias} created", alias=alias)
    return 0


def show_output(fp, template, **kwargs):
    """
    Print to just about anything.

    :param fp: A file-like object to which to print.  This handles the case
        where ``fp`` declares a supported encoding with the ``encoding``
        attribute (eg sys.stdout on Python 3).  It handles the case where
        ``fp`` declares no supported encoding via ``None`` for its
        ``encoding`` attribute (eg sys.stdout on Python 2 when stdout is not a
        tty).  It handles the case where ``fp`` declares an encoding that does
        not support all of the characters in the output by forcing the
        "namereplace" error handler.  It handles the case where there is no
        ``encoding`` attribute at all (eg StringIO.StringIO) by writing
        utf-8-encoded bytes.
    """
    assert isinstance(template, unicode)

    # On Python 3 fp has an encoding attribute under all real usage.  On
    # Python 2, the encoding attribute is None if stdio is not a tty.  The
    # test suite often passes StringIO which has no such attribute.  Make
    # allowances for this until the test suite is fixed and Python 2 is no
    # more.
    try:
        encoding = fp.encoding or "utf-8"
    except AttributeError:
        has_encoding = False
        encoding = "utf-8"
    else:
        has_encoding = True

    output = template.format(**{
        k: quote_output_u(v, encoding=encoding)
        for (k, v)
        in kwargs.items()
    })
    safe_output = output.encode(encoding, "namereplace")
    if has_encoding:
        safe_output = safe_output.decode(encoding)
    print(safe_output, file=fp)

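The "namereplace" error handler forced above replaces unencodable characters with ``\N{...}`` escapes instead of raising; a quick Python 3 demonstration:

    text = u"caf\N{LATIN SMALL LETTER E WITH ACUTE}"
    print(text.encode("ascii", "namereplace"))
    # b'caf\\N{LATIN SMALL LETTER E WITH ACUTE}'
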
def _get_alias_details(nodedir):
    aliases = get_aliases(nodedir)
    alias_names = sorted(aliases.keys())
@ -111,34 +151,45 @@ def _get_alias_details(nodedir):
    return data


def _escape_format(t):
    """
    _escape_format(t).format() == t

    :param unicode t: The text to escape.
    """
    return t.replace("{", "{{").replace("}", "}}")

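The property in ``_escape_format``'s docstring can be checked directly; doubling each brace makes ``str.format`` emit a literal brace:

    template = u"{alias}"
    escaped = template.replace("{", "{{").replace("}", "}}")
    assert escaped.format() == template  # doubled braces render literally
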
def list_aliases(options):
    nodedir = options['node-directory']
    stdout = options.stdout
    stderr = options.stderr

    data = _get_alias_details(nodedir)

    max_width = max([len(quote_output(name)) for name in data.keys()] + [0])
    fmt = "%" + str(max_width) + "s: %s"
    rc = 0
    """
    Show aliases that exist.
    """
    data = _get_alias_details(options['node-directory'])

    if options['json']:
        try:
            # XXX why are we presuming utf-8 output?
            print(json.dumps(data, indent=4).decode('utf-8'), file=stdout)
        except (UnicodeEncodeError, UnicodeDecodeError):
            print(json.dumps(data, indent=4), file=stderr)
            rc = 1
        output = _escape_format(json.dumps(data, indent=4).decode("ascii"))
    else:
        for name, details in data.items():
            dircap = details['readonly'] if options['readonly-uri'] else details['readwrite']
            try:
                print(fmt % (unicode_to_output(name), unicode_to_output(dircap.decode('utf-8'))), file=stdout)
            except (UnicodeEncodeError, UnicodeDecodeError):
                print(fmt % (quote_output(name), quote_output(dircap)), file=stderr)
                rc = 1
        def dircap(details):
            return (
                details['readonly']
                if options['readonly-uri']
                else details['readwrite']
            ).decode("utf-8")

    if rc == 1:
        print("\nThis listing included aliases or caps that could not be converted to the terminal" \
              "\noutput encoding. These are shown using backslash escapes and in quotes.", file=stderr)
    return rc
        def format_dircap(name, details):
            return fmt % (name, dircap(details))

        max_width = max([len(quote_output(name)) for name in data.keys()] + [0])
        fmt = "%" + str(max_width) + "s: %s"
        output = "\n".join(list(
            format_dircap(name, details)
            for name, details
            in data.items()
        ))

    if output:
        # Show whatever we computed.  Skip this if there is no output to avoid
        # a spurious blank line.
        show_output(options.stdout, output)

    return 0

@ -1,16 +0,0 @@
from .run_common import (
    RunOptions as _RunOptions,
    run,
)

__all__ = [
    "DaemonizeOptions",
    "daemonize",
]

class DaemonizeOptions(_RunOptions):
    subcommand_name = "daemonize"

def daemonize(config):
    print("'tahoe daemonize' is deprecated; see 'tahoe run'")
    return run(config)
@ -1,7 +1,6 @@
from __future__ import print_function

import json
from os.path import join

try:
    from allmydata.scripts.types_ import SubCommands
@ -13,9 +12,9 @@ from twisted.internet import defer, reactor

from wormhole import wormhole

from allmydata.util import configutil
from allmydata.util.encodingutil import argv_to_abspath
from allmydata.scripts.common import get_default_nodedir, get_introducer_furl
from allmydata.node import read_config


class InviteOptions(usage.Options):
@ -82,7 +81,7 @@ def invite(options):
        basedir = argv_to_abspath(options.parent['node-directory'])
    else:
        basedir = get_default_nodedir()
    config = configutil.get_config(join(basedir, 'tahoe.cfg'))
    config = read_config(basedir, u"")
    out = options.stdout
    err = options.stderr


@ -1,21 +0,0 @@
from __future__ import print_function

from .tahoe_start import StartOptions, start
from .tahoe_stop import stop, COULD_NOT_STOP


class RestartOptions(StartOptions):
    subcommand_name = "restart"


def restart(config):
    print("'tahoe restart' is deprecated; see 'tahoe run'")
    stderr = config.stderr
    rc = stop(config)
    if rc == COULD_NOT_STOP:
        print("ignoring couldn't-stop", file=stderr)
        rc = 0
    if rc:
        print("not restarting", file=stderr)
        return rc
    return start(config)
@ -1,15 +1,233 @@
from .run_common import (
    RunOptions as _RunOptions,
    run,
)
from __future__ import print_function

__all__ = [
    "RunOptions",
    "run",
]

class RunOptions(_RunOptions):
import os, sys
from allmydata.scripts.common import BasedirOptions
from twisted.scripts import twistd
from twisted.python import usage
from twisted.python.reflect import namedAny
from twisted.internet.defer import maybeDeferred
from twisted.application.service import Service

from allmydata.scripts.default_nodedir import _default_nodedir
from allmydata.util.encodingutil import listdir_unicode, quote_local_unicode_path
from allmydata.util.configutil import UnknownConfigError
from allmydata.util.deferredutil import HookMixin

from allmydata.node import (
    PortAssignmentRequired,
    PrivacyError,
)

def get_pidfile(basedir):
    """
    Returns the path to the PID file.
    :param basedir: the node's base directory
    :returns: the path to the PID file
    """
    return os.path.join(basedir, u"twistd.pid")

def get_pid_from_pidfile(pidfile):
    """
    Tries to read and return the PID stored in the node's PID file
    (twistd.pid).
    :param pidfile: try to read this PID file
    :returns: A numeric PID on success, ``None`` if PID file absent or
        inaccessible, ``-1`` if PID file invalid.
    """
    try:
        with open(pidfile, "r") as f:
            pid = f.read()
    except EnvironmentError:
        return None

    try:
        pid = int(pid)
    except ValueError:
        return -1

    return pid
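The three-way contract (``None`` for no file, ``-1`` for a corrupt file, otherwise the PID) lets callers react differently to each case; a small sketch, runnable alongside the definition above:

    import os, tempfile

    fd, pidfile = tempfile.mkstemp()
    os.write(fd, b"not-a-number")
    os.close(fd)
    print(get_pid_from_pidfile(pidfile))  # -1: file exists but is invalid
    os.remove(pidfile)
    print(get_pid_from_pidfile(pidfile))  # None: file is absent
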
def identify_node_type(basedir):
    """
    :return unicode: None or one of: 'client' or 'introducer'.
    """
    tac = u''
    try:
        for fn in listdir_unicode(basedir):
            if fn.endswith(u".tac"):
                tac = fn
                break
    except OSError:
        return None

    for t in (u"client", u"introducer"):
        if t in tac:
            return t
    return None


class RunOptions(BasedirOptions):
    subcommand_name = "run"

    def postOptions(self):
        self.twistd_args += ("--nodaemon",)
    optParameters = [
        ("basedir", "C", None,
         "Specify which Tahoe base directory should be used."
         " This has the same effect as the global --node-directory option."
         " [default: %s]" % quote_local_unicode_path(_default_nodedir)),
    ]

    def parseArgs(self, basedir=None, *twistd_args):
        # This can't handle e.g. 'tahoe run --reactor=foo', since
        # '--reactor=foo' looks like an option to the tahoe subcommand, not to
        # twistd.  So you can either use 'tahoe run' or 'tahoe run NODEDIR
        # --TWISTD-OPTIONS'.  Note that 'tahoe --node-directory=NODEDIR run
        # --TWISTD-OPTIONS' also isn't allowed, unfortunately.

        BasedirOptions.parseArgs(self, basedir)
        self.twistd_args = twistd_args

    def getSynopsis(self):
        return ("Usage: %s [global-options] %s [options]"
                " [NODEDIR [twistd-options]]"
                % (self.command_name, self.subcommand_name))

    def getUsage(self, width=None):
        t = BasedirOptions.getUsage(self, width) + "\n"
        twistd_options = str(MyTwistdConfig()).partition("\n")[2].partition("\n\n")[0]
        t += twistd_options.replace("Options:", "twistd-options:", 1)
        t += """

Note that if any twistd-options are used, NODEDIR must be specified explicitly
(not by default or using -C/--basedir or -d/--node-directory), and followed by
the twistd-options.
"""
        return t


class MyTwistdConfig(twistd.ServerOptions):
    subCommands = [("DaemonizeTahoeNode", None, usage.Options, "node")]

    stderr = sys.stderr


class DaemonizeTheRealService(Service, HookMixin):
    """
    this HookMixin should really be a helper; our hooks:

    - 'running': triggered when startup has completed; it triggers
        with None if successful, or a Failure otherwise.
    """
    stderr = sys.stderr

    def __init__(self, nodetype, basedir, options):
        super(DaemonizeTheRealService, self).__init__()
        self.nodetype = nodetype
        self.basedir = basedir
        # setup for HookMixin
        self._hooks = {
            "running": None,
        }
        self.stderr = options.parent.stderr

    def startService(self):

        def start():
            node_to_instance = {
                u"client": lambda: maybeDeferred(namedAny("allmydata.client.create_client"), self.basedir),
                u"introducer": lambda: maybeDeferred(namedAny("allmydata.introducer.server.create_introducer"), self.basedir),
            }

            try:
                service_factory = node_to_instance[self.nodetype]
            except KeyError:
                raise ValueError("unknown nodetype %s" % self.nodetype)

            def handle_config_error(reason):
                if reason.check(UnknownConfigError):
                    self.stderr.write("\nConfiguration error:\n{}\n\n".format(reason.value))
                elif reason.check(PortAssignmentRequired):
                    self.stderr.write("\ntub.port cannot be 0: you must choose.\n\n")
                elif reason.check(PrivacyError):
                    self.stderr.write("\n{}\n\n".format(reason.value))
                else:
                    self.stderr.write("\nUnknown error\n")
                    reason.printTraceback(self.stderr)
                reactor.stop()

            d = service_factory()

            def created(srv):
                srv.setServiceParent(self.parent)
            d.addCallback(created)
            d.addErrback(handle_config_error)
            d.addBoth(self._call_hook, 'running')
            return d

        from twisted.internet import reactor
        reactor.callWhenRunning(start)

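``handle_config_error`` is where the new ``PortAssignmentRequired`` and ``PrivacyError`` exceptions from ``allmydata.node`` surface as friendly messages rather than tracebacks. A standalone sketch of the same ``Failure.check()`` dispatch pattern, with a stand-in exception:

    from twisted.python.failure import Failure

    class PortAssignmentRequired(Exception):
        pass

    def describe(reason):
        # Failure.check() returns the matching class or None, like the
        # elif chain in handle_config_error above.
        if reason.check(PortAssignmentRequired):
            return "tub.port cannot be 0: you must choose."
        return "Unknown error"

    try:
        raise PortAssignmentRequired()
    except PortAssignmentRequired:
        print(describe(Failure()))
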
class DaemonizeTahoeNodePlugin(object):
    tapname = "tahoenode"
    def __init__(self, nodetype, basedir):
        self.nodetype = nodetype
        self.basedir = basedir

    def makeService(self, so):
        return DaemonizeTheRealService(self.nodetype, self.basedir, so)


def run(config):
    """
    Runs a Tahoe-LAFS node in the foreground.

    Sets up the IService instance corresponding to the type of node
    that's starting and uses Twisted's twistd runner to disconnect our
    process from the terminal.
    """
    out = config.stdout
    err = config.stderr
    basedir = config['basedir']
    quoted_basedir = quote_local_unicode_path(basedir)
    print("'tahoe {}' in {}".format(config.subcommand_name, quoted_basedir), file=out)
    if not os.path.isdir(basedir):
        print("%s does not look like a directory at all" % quoted_basedir, file=err)
        return 1
    nodetype = identify_node_type(basedir)
    if not nodetype:
        print("%s is not a recognizable node directory" % quoted_basedir, file=err)
        return 1
    # Now prepare to turn into a twistd process. This os.chdir is the point
    # of no return.
    os.chdir(basedir)
    twistd_args = ["--nodaemon"]
    twistd_args.extend(config.twistd_args)
    twistd_args.append("DaemonizeTahoeNode")  # point at our DaemonizeTahoeNodePlugin

    twistd_config = MyTwistdConfig()
    twistd_config.stdout = out
    twistd_config.stderr = err
    try:
        twistd_config.parseOptions(twistd_args)
    except usage.error as ue:
        # these arguments were unsuitable for 'twistd'
        print(config, file=err)
        print("tahoe %s: usage error from twistd: %s\n" % (config.subcommand_name, ue), file=err)
        return 1
    twistd_config.loadedPlugins = {"DaemonizeTahoeNode": DaemonizeTahoeNodePlugin(nodetype, basedir)}

    # handle invalid PID file (twistd might not start otherwise)
    pidfile = get_pidfile(basedir)
    if get_pid_from_pidfile(pidfile) == -1:
        print("found invalid PID file in %s - deleting it" % basedir, file=err)
        os.remove(pidfile)

    # We always pass --nodaemon so twistd.runApp does not daemonize.
    print("running node in %s" % (quoted_basedir,), file=out)
    twistd.runApp(twistd_config)
    return 0
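Given the parsing constraint described in ``parseArgs`` above, twistd options must follow an explicit NODEDIR; illustrative command lines:

    tahoe run                          # default node directory, no twistd options
    tahoe run ~/.tahoe --reactor=poll  # NODEDIR first, then twistd options
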
@ -1,152 +0,0 @@
from __future__ import print_function

import os
import io
import sys
import time
import subprocess
from os.path import join, exists

from allmydata.scripts.common import BasedirOptions
from allmydata.scripts.default_nodedir import _default_nodedir
from allmydata.util.encodingutil import quote_local_unicode_path

from .run_common import MyTwistdConfig, identify_node_type


class StartOptions(BasedirOptions):
    subcommand_name = "start"
    optParameters = [
        ("basedir", "C", None,
         "Specify which Tahoe base directory should be used."
         " This has the same effect as the global --node-directory option."
         " [default: %s]" % quote_local_unicode_path(_default_nodedir)),
    ]

    def parseArgs(self, basedir=None, *twistd_args):
        # This can't handle e.g. 'tahoe start --nodaemon', since '--nodaemon'
        # looks like an option to the tahoe subcommand, not to twistd. So you
        # can either use 'tahoe start' or 'tahoe start NODEDIR
        # --TWISTD-OPTIONS'. Note that 'tahoe --node-directory=NODEDIR start
        # --TWISTD-OPTIONS' also isn't allowed, unfortunately.

        BasedirOptions.parseArgs(self, basedir)
        self.twistd_args = twistd_args

    def getSynopsis(self):
        return ("Usage: %s [global-options] %s [options]"
                " [NODEDIR [twistd-options]]"
                % (self.command_name, self.subcommand_name))

    def getUsage(self, width=None):
        t = BasedirOptions.getUsage(self, width) + "\n"
        twistd_options = str(MyTwistdConfig()).partition("\n")[2].partition("\n\n")[0]
        t += twistd_options.replace("Options:", "twistd-options:", 1)
        t += """

Note that if any twistd-options are used, NODEDIR must be specified explicitly
(not by default or using -C/--basedir or -d/--node-directory), and followed by
the twistd-options.
"""
        return t


def start(config):
    """
    Start a tahoe node (daemonize it and confirm startup)

    We run 'tahoe daemonize' with all the options given to 'tahoe
    start' and then watch the log files for the correct text to appear
    (e.g. "introducer started"). If that doesn't happen within a few
    seconds, an error is printed along with all collected logs.
    """
    print("'tahoe start' is deprecated; see 'tahoe run'")
    out = config.stdout
    err = config.stderr
    basedir = config['basedir']
    quoted_basedir = quote_local_unicode_path(basedir)
    print("STARTING", quoted_basedir, file=out)
    if not os.path.isdir(basedir):
        print("%s does not look like a directory at all" % quoted_basedir, file=err)
        return 1
    nodetype = identify_node_type(basedir)
    if not nodetype:
        print("%s is not a recognizable node directory" % quoted_basedir, file=err)
        return 1

    # "tahoe start" attempts to monitor the logs for successful
    # startup -- but we can't always do that.

    can_monitor_logs = False
    if (nodetype in (u"client", u"introducer")
        and "--nodaemon" not in config.twistd_args
        and "--syslog" not in config.twistd_args
        and "--logfile" not in config.twistd_args):
        can_monitor_logs = True

    if "--help" in config.twistd_args:
        return 0

    if not can_monitor_logs:
        print("Custom logging options; can't monitor logs for proper startup messages", file=out)
        return 1

    # before we spawn tahoe, we check if "the log file" exists or not,
    # and if so remember how big it is -- essentially, we're doing
    # "tail -f" to see what "this" incarnation of "tahoe daemonize"
    # spews forth.
    starting_offset = 0
    log_fname = join(basedir, 'logs', 'twistd.log')
    if exists(log_fname):
        with open(log_fname, 'r') as f:
            f.seek(0, 2)
            starting_offset = f.tell()

    # spawn tahoe. Note that since this daemonizes, it should return
    # "pretty fast" and with a zero return-code, or else something
    # Very Bad has happened.
    try:
        args = [sys.executable] if not getattr(sys, 'frozen', False) else []
        for i, arg in enumerate(sys.argv):
            if arg in ['start', 'restart']:
                args.append('daemonize')
            else:
                args.append(arg)
        subprocess.check_call(args)
    except subprocess.CalledProcessError as e:
        return e.returncode

    # now, we have to determine if tahoe has actually started up
    # successfully or not. so, we start sucking up log files and
    # looking for "the magic string", which depends on the node type.
    magic_string = u'{} running'.format(nodetype)
    with io.open(log_fname, 'r') as f:
        f.seek(starting_offset)

        collected = u''
        overall_start = time.time()
        while time.time() - overall_start < 60:
            this_start = time.time()
            while time.time() - this_start < 5:
                collected += f.read()
                if magic_string in collected:
                    if not config.parent['quiet']:
                        print("Node has started successfully", file=out)
                    return 0
                if 'Traceback ' in collected:
                    print("Error starting node; see '{}' for more:\n\n{}".format(
                        log_fname,
                        collected,
                    ), file=err)
                    return 1
                time.sleep(0.1)
            print("Still waiting up to {}s for node startup".format(
                60 - int(time.time() - overall_start)
            ), file=out)

    print("Something has gone wrong starting the node.", file=out)
    print("Logs are available in '{}'".format(log_fname), file=out)
    print("Collected for this run:", file=out)
    print(collected, file=out)
    return 1
@ -1,85 +0,0 @@
from __future__ import print_function

import os
import time
import signal

from allmydata.scripts.common import BasedirOptions
from allmydata.util.encodingutil import quote_local_unicode_path
from .run_common import get_pidfile, get_pid_from_pidfile

COULD_NOT_STOP = 2


class StopOptions(BasedirOptions):
    def parseArgs(self, basedir=None):
        BasedirOptions.parseArgs(self, basedir)

    def getSynopsis(self):
        return ("Usage: %s [global-options] stop [options] [NODEDIR]"
                % (self.command_name,))


def stop(config):
    print("'tahoe stop' is deprecated; see 'tahoe run'")
    out = config.stdout
    err = config.stderr
    basedir = config['basedir']
    quoted_basedir = quote_local_unicode_path(basedir)
    print("STOPPING", quoted_basedir, file=out)
    pidfile = get_pidfile(basedir)
    pid = get_pid_from_pidfile(pidfile)
    if pid is None:
        print("%s does not look like a running node directory (no twistd.pid)" % quoted_basedir, file=err)
        # we define rc=2 to mean "nothing is running, but it wasn't me who
        # stopped it"
        return COULD_NOT_STOP
    elif pid == -1:
        print("%s contains an invalid PID file" % basedir, file=err)
        # we define rc=2 to mean "nothing is running, but it wasn't me who
        # stopped it"
        return COULD_NOT_STOP

    # kill it hard (SIGKILL), delete the twistd.pid file, then wait for the
    # process itself to go away. If it hasn't gone away after 20 seconds, warn
    # the user but keep waiting until they give up.
    try:
        os.kill(pid, signal.SIGKILL)
    except OSError as oserr:
        if oserr.errno == 3:
            print(oserr.strerror)
            # the process didn't exist, so wipe the pid file
            os.remove(pidfile)
            return COULD_NOT_STOP
        else:
            raise
    try:
        os.remove(pidfile)
    except EnvironmentError:
        pass
    start = time.time()
    time.sleep(0.1)
    wait = 40
    first_time = True
    while True:
        # poll once per second until we see the process is no longer running
        try:
            os.kill(pid, 0)
        except OSError:
            print("process %d is dead" % pid, file=out)
            return
        wait -= 1
        if wait < 0:
            if first_time:
                print("It looks like pid %d is still running "
                      "after %d seconds" % (pid,
                                            (time.time() - start)), file=err)
                print("I will keep watching it until you interrupt me.", file=err)
                wait = 10
                first_time = False
            else:
                print("pid %d still running after %d seconds" % \
                      (pid, (time.time() - start)), file=err)
                wait = 10
        time.sleep(1)
    # control never reaches here: no timeout
@ -1,79 +1,19 @@
|
||||
from __future__ import print_function
|
||||
|
||||
import json
|
||||
import os
|
||||
import pprint
|
||||
import time
|
||||
from collections import deque
|
||||
|
||||
# Python 2 compatibility
|
||||
from future.utils import PY2
|
||||
if PY2:
|
||||
from future.builtins import str # noqa: F401
|
||||
|
||||
from twisted.internet import reactor
|
||||
from twisted.application import service
|
||||
from twisted.application.internet import TimerService
|
||||
from zope.interface import implementer
|
||||
from foolscap.api import eventually, DeadReferenceError, Referenceable, Tub
|
||||
from foolscap.api import eventually
|
||||
|
||||
from allmydata.util import log
|
||||
from allmydata.util.encodingutil import quote_local_unicode_path
|
||||
from allmydata.interfaces import RIStatsProvider, RIStatsGatherer, IStatsProducer
|
||||
|
||||
@implementer(IStatsProducer)
|
||||
class LoadMonitor(service.MultiService):
|
||||
|
||||
loop_interval = 1
|
||||
num_samples = 60
|
||||
|
||||
def __init__(self, provider, warn_if_delay_exceeds=1):
|
||||
service.MultiService.__init__(self)
|
||||
self.provider = provider
|
||||
self.warn_if_delay_exceeds = warn_if_delay_exceeds
|
||||
self.started = False
|
||||
self.last = None
|
||||
self.stats = deque()
|
||||
self.timer = None
|
||||
|
||||
def startService(self):
|
||||
if not self.started:
|
||||
self.started = True
|
||||
self.timer = reactor.callLater(self.loop_interval, self.loop)
|
||||
service.MultiService.startService(self)
|
||||
|
||||
def stopService(self):
|
||||
self.started = False
|
||||
if self.timer:
|
||||
self.timer.cancel()
|
||||
self.timer = None
|
||||
return service.MultiService.stopService(self)
|
||||
|
||||
def loop(self):
|
||||
self.timer = None
|
||||
if not self.started:
|
||||
return
|
||||
now = time.time()
|
||||
if self.last is not None:
|
||||
delay = now - self.last - self.loop_interval
|
||||
if delay > self.warn_if_delay_exceeds:
|
||||
log.msg(format='excessive reactor delay (%ss)', args=(delay,),
|
||||
level=log.UNUSUAL)
|
||||
self.stats.append(delay)
|
||||
while len(self.stats) > self.num_samples:
|
||||
self.stats.popleft()
|
||||
|
||||
self.last = now
|
||||
self.timer = reactor.callLater(self.loop_interval, self.loop)
|
||||
|
||||
def get_stats(self):
|
||||
if self.stats:
|
||||
avg = sum(self.stats) / len(self.stats)
|
||||
m_x = max(self.stats)
|
||||
else:
|
||||
avg = m_x = 0
|
||||
return { 'load_monitor.avg_load': avg,
|
||||
'load_monitor.max_load': m_x, }
|
||||
from allmydata.interfaces import IStatsProducer
|
||||
|
||||
@implementer(IStatsProducer)
|
||||
class CPUUsageMonitor(service.MultiService):
|
||||
@ -128,37 +68,18 @@ class CPUUsageMonitor(service.MultiService):
|
||||
return s
|
||||
|
||||
|
||||
@implementer(RIStatsProvider)
|
||||
class StatsProvider(Referenceable, service.MultiService):
|
||||
class StatsProvider(service.MultiService):
|
||||
|
||||
def __init__(self, node, gatherer_furl):
|
||||
def __init__(self, node):
|
||||
service.MultiService.__init__(self)
|
||||
self.node = node
|
||||
self.gatherer_furl = gatherer_furl # might be None
|
||||
|
||||
self.counters = {}
|
||||
self.stats_producers = []
|
||||
|
||||
# only run the LoadMonitor (which submits a timer every second) if
|
||||
# there is a gatherer who is going to be paying attention. Our stats
|
||||
# are visible through HTTP even without a gatherer, so run the rest
|
||||
# of the stats (including the once-per-minute CPUUsageMonitor)
|
||||
if gatherer_furl:
|
||||
self.load_monitor = LoadMonitor(self)
|
||||
self.load_monitor.setServiceParent(self)
|
||||
self.register_producer(self.load_monitor)
|
||||
|
||||
self.cpu_monitor = CPUUsageMonitor()
|
||||
self.cpu_monitor.setServiceParent(self)
|
||||
self.register_producer(self.cpu_monitor)
|
||||
|
||||
def startService(self):
|
||||
if self.node and self.gatherer_furl:
|
||||
nickname_utf8 = self.node.nickname.encode("utf-8")
|
||||
self.node.tub.connectTo(self.gatherer_furl,
|
||||
self._connected, nickname_utf8)
|
||||
service.MultiService.startService(self)
|
||||
|
||||
def count(self, name, delta=1):
|
||||
if isinstance(name, str):
|
||||
name = name.encode("utf-8")
|
||||
@ -175,155 +96,3 @@ class StatsProvider(Referenceable, service.MultiService):
|
||||
ret = { 'counters': self.counters, 'stats': stats }
|
||||
log.msg(format='get_stats() -> %(stats)s', stats=ret, level=log.NOISY)
|
||||
return ret
|
||||
|
||||
def remote_get_stats(self):
|
||||
# The remote API expects keys to be bytes:
|
||||
def to_bytes(d):
|
||||
result = {}
|
||||
for (k, v) in d.items():
|
||||
if isinstance(k, str):
|
||||
k = k.encode("utf-8")
|
||||
result[k] = v
|
||||
return result
|
||||
|
||||
stats = self.get_stats()
|
||||
return {b"counters": to_bytes(stats["counters"]),
|
||||
b"stats": to_bytes(stats["stats"])}

    def _connected(self, gatherer, nickname):
        gatherer.callRemoteOnly('provide', self, nickname or '')


@implementer(RIStatsGatherer)
class StatsGatherer(Referenceable, service.MultiService):

    poll_interval = 60

    def __init__(self, basedir):
        service.MultiService.__init__(self)
        self.basedir = basedir

        self.clients = {}
        self.nicknames = {}

        self.timer = TimerService(self.poll_interval, self.poll)
        self.timer.setServiceParent(self)

    def get_tubid(self, rref):
        return rref.getRemoteTubID()

    def remote_provide(self, provider, nickname):
        tubid = self.get_tubid(provider)
        if tubid == '<unauth>':
            print("WARNING: failed to get tubid for %s (%s)" % (provider, nickname))
            # don't add this client to the ones we poll (that would pollute
            # the data); we also don't care about its disconnect
            return
        self.clients[tubid] = provider
        self.nicknames[tubid] = nickname

    def poll(self):
        for tubid,client in self.clients.items():
            nickname = self.nicknames.get(tubid)
            d = client.callRemote('get_stats')
            d.addCallbacks(self.got_stats, self.lost_client,
                           callbackArgs=(tubid, nickname),
                           errbackArgs=(tubid,))
            d.addErrback(self.log_client_error, tubid)

    def lost_client(self, f, tubid):
        # this is called lazily, when a get_stats request fails
        del self.clients[tubid]
        del self.nicknames[tubid]
        f.trap(DeadReferenceError)

    def log_client_error(self, f, tubid):
        log.msg("StatsGatherer: error in get_stats(), peerid=%s" % tubid,
                level=log.UNUSUAL, failure=f)

    def got_stats(self, stats, tubid, nickname):
        raise NotImplementedError()

class StdOutStatsGatherer(StatsGatherer):
    verbose = True
    def remote_provide(self, provider, nickname):
        tubid = self.get_tubid(provider)
        if self.verbose:
            print('connect "%s" [%s]' % (nickname, tubid))
            provider.notifyOnDisconnect(self.announce_lost_client, tubid)
        StatsGatherer.remote_provide(self, provider, nickname)

    def announce_lost_client(self, tubid):
        print('disconnect "%s" [%s]' % (self.nicknames[tubid], tubid))

    def got_stats(self, stats, tubid, nickname):
        print('"%s" [%s]:' % (nickname, tubid))
        pprint.pprint(stats)

class JSONStatsGatherer(StdOutStatsGatherer):
    # inherit from StdOutStatsGatherer for connect/disconnect notifications

    def __init__(self, basedir=u".", verbose=True):
        self.verbose = verbose
        StatsGatherer.__init__(self, basedir)
        self.jsonfile = os.path.join(basedir, "stats.json")

        if os.path.exists(self.jsonfile):
            try:
                with open(self.jsonfile, 'rb') as f:
                    self.gathered_stats = json.load(f)
            except Exception:
                print("Error while attempting to load stats file %s.\n"
                      "You may need to restore this file from a backup,"
                      " or delete it if no backup is available.\n" %
                      quote_local_unicode_path(self.jsonfile))
                raise
        else:
            self.gathered_stats = {}

    def got_stats(self, stats, tubid, nickname):
        s = self.gathered_stats.setdefault(tubid, {})
        s['timestamp'] = time.time()
        s['nickname'] = nickname
        s['stats'] = stats
        self.dump_json()

    def dump_json(self):
        tmp = "%s.tmp" % (self.jsonfile,)
        with open(tmp, 'wb') as f:
            json.dump(self.gathered_stats, f)
        if os.path.exists(self.jsonfile):
            os.unlink(self.jsonfile)
        os.rename(tmp, self.jsonfile)
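
Note that the removed dump_json used the unlink-then-rename dance because plain os.rename refuses to overwrite an existing file on Windows. On Python 3 the same effect can be had atomically with os.replace; a hedged sketch, not part of this commit:

    import json, os

    def dump_json_atomically(path, obj):
        tmp = "%s.tmp" % (path,)
        with open(tmp, "w") as f:
            json.dump(obj, f)
        os.replace(tmp, path)  # atomic overwrite on POSIX and modern Windows
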

class StatsGathererService(service.MultiService):
    furl_file = "stats_gatherer.furl"

    def __init__(self, basedir=".", verbose=False):
        service.MultiService.__init__(self)
        self.basedir = basedir
        self.tub = Tub(certFile=os.path.join(self.basedir,
                                             "stats_gatherer.pem"))
        self.tub.setServiceParent(self)
        self.tub.setOption("logLocalFailures", True)
        self.tub.setOption("logRemoteFailures", True)
        self.tub.setOption("expose-remote-exception-types", False)

        self.stats_gatherer = JSONStatsGatherer(self.basedir, verbose)
        self.stats_gatherer.setServiceParent(self)

        try:
            with open(os.path.join(self.basedir, "location")) as f:
                location = f.read().strip()
        except EnvironmentError:
            raise ValueError("Unable to find 'location' in BASEDIR, please rebuild your stats-gatherer")
        try:
            with open(os.path.join(self.basedir, "port")) as f:
                port = f.read().strip()
        except EnvironmentError:
            raise ValueError("Unable to find 'port' in BASEDIR, please rebuild your stats-gatherer")

        self.tub.listenOn(port)
        self.tub.setLocation(location)
        ff = os.path.join(self.basedir, self.furl_file)
        self.gatherer_furl = self.tub.registerReference(self.stats_gatherer,
                                                        furlFile=ff)

@ -154,6 +154,9 @@ class StorageFarmBroker(service.MultiService):
    I'm also responsible for subscribing to the IntroducerClient to find out
    about new servers as they are announced by the Introducer.

    :ivar _tub_maker: A one-argument callable which accepts a dictionary of
        "handler overrides" and returns a ``foolscap.api.Tub``.

    :ivar StorageClientConfig storage_client_config: Values from the node
        configuration file relating to storage behavior.
    """
@ -461,6 +464,7 @@ class StorageFarmBroker(service.MultiService):
@implementer(IDisplayableServer)
class StubServer(object):
    def __init__(self, serverid):
        assert isinstance(serverid, bytes)
        self.serverid = serverid # binary tubid
    def get_serverid(self):
        return self.serverid
@ -558,7 +562,11 @@ class _FoolscapStorage(object):
        }

    *nickname* is optional.

    The furl will be a Unicode string on Python 3; on Python 2 it will be
    either a native (bytes) string or a Unicode string.
    """
    furl = furl.encode("utf-8")
    m = re.match(br'pb://(\w+)@', furl)
    assert m, furl
    tubid_s = m.group(1).lower()
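
The furl parsing above relies on the pb:// URL shape. A standalone sketch of the same extraction, with the function name invented for illustration:

    import re

    def tubid_from_furl(furl):
        # a furl looks like pb://<tubid>@<connection-hints>/<swissnum>
        if isinstance(furl, str):
            furl = furl.encode("utf-8")
        m = re.match(br'pb://(\w+)@', furl)
        if m is None:
            raise ValueError("not a furl: %r" % (furl,))
        return m.group(1).lower()

    assert tubid_from_furl(u"pb://ABC123@tcp:example.com:1234/swiss") == b"abc123"
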
@ -690,7 +698,6 @@ class NativeStorageServer(service.MultiService):
    @ivar nickname: the server's self-reported nickname (unicode), same

    @ivar rref: the RemoteReference, if connected, otherwise None
    @ivar remote_host: the IAddress, if connected, otherwise None
    """

    VERSION_DEFAULTS = UnicodeKeyDict({
@ -716,7 +723,6 @@ class NativeStorageServer(service.MultiService):

        self.last_connect_time = None
        self.last_loss_time = None
        self.remote_host = None
        self._rref = None
        self._is_connected = False
        self._reconnector = None
@ -755,7 +761,7 @@ class NativeStorageServer(service.MultiService):
        else:
            return _FoolscapStorage.from_announcement(
                self._server_id,
                furl.encode("utf-8"),
                furl,
                ann,
                storage_server,
            )
@ -767,8 +773,6 @@ class NativeStorageServer(service.MultiService):
            # Nope
            pass
        else:
            if isinstance(furl, str):
                furl = furl.encode("utf-8")
            # See comment above for the _storage_from_foolscap_plugin case
            # about passing in get_rref.
            storage_server = _StorageServer(get_rref=self.get_rref)
@ -825,8 +829,6 @@ class NativeStorageServer(service.MultiService):
        return None
    def get_announcement(self):
        return self.announcement
    def get_remote_host(self):
        return self.remote_host

    def get_connection_status(self):
        last_received = None
@ -874,7 +876,6 @@ class NativeStorageServer(service.MultiService):
                level=log.NOISY, parent=lp)

        self.last_connect_time = time.time()
        self.remote_host = rref.getLocationHints()
        self._rref = rref
        self._is_connected = True
        rref.notifyOnDisconnect(self._lost)
@ -900,7 +901,6 @@ class NativeStorageServer(service.MultiService):
        # get_connected_servers() or get_servers_for_psi()) can continue to
        # use s.get_rref().callRemote() and not worry about it being None.
        self._is_connected = False
        self.remote_host = None

    def stop_connecting(self):
        # used when this descriptor has been superseded by another

@ -113,4 +113,5 @@ if sys.platform == "win32":
    initialize()

from eliot import to_file
to_file(open("eliot.log", "w"))
from allmydata.util.jsonbytes import BytesJSONEncoder
to_file(open("eliot.log", "w"), encoder=BytesJSONEncoder)

@ -16,22 +16,19 @@ that this script does not import anything from tahoe directly, so it doesn't
matter what its PYTHONPATH is, as long as the bin/tahoe that it uses is
functional.

This script expects that the client node will not be running when the script
starts, but it will forcibly shut down the node just to be sure. It will shut
down the node after the test finishes.
This script expects the client node to be running already.

To set up the client node, do the following:

  tahoe create-client DIR
  populate DIR/introducer.furl
  tahoe start DIR
  tahoe add-alias -d DIR testgrid `tahoe mkdir -d DIR`
  pick a 10kB-ish test file, compute its md5sum
  tahoe put -d DIR FILE testgrid:old.MD5SUM
  tahoe put -d DIR FILE testgrid:recent.MD5SUM
  tahoe put -d DIR FILE testgrid:recentdir/recent.MD5SUM
  echo "" | tahoe put -d DIR --mutable testgrid:log
  echo "" | tahoe put -d DIR --mutable testgrid:recentlog
  tahoe create-client --introducer=INTRODUCER_FURL DIR
  tahoe run DIR
  tahoe -d DIR create-alias testgrid
  # pick a 10kB-ish test file, compute its md5sum
  tahoe -d DIR put FILE testgrid:old.MD5SUM
  tahoe -d DIR put FILE testgrid:recent.MD5SUM
  tahoe -d DIR put FILE testgrid:recentdir/recent.MD5SUM
  echo "" | tahoe -d DIR put --mutable - testgrid:log
  echo "" | tahoe -d DIR put --mutable - testgrid:recentlog

This script will perform the following steps (the kind of compatibility that
is being tested is in [brackets]):
@ -52,7 +49,6 @@ is being tested is in [brackets]):

This script will also keep track of speeds and latencies and will write them
in a machine-readable logfile.

"""

import time, subprocess, md5, os.path, random
@ -104,26 +100,13 @@ class GridTester(object):

    def cli(self, cmd, *args, **kwargs):
        print("tahoe", cmd, " ".join(args))
        stdout, stderr = self.command(self.tahoe, cmd, "-d", self.nodedir,
        stdout, stderr = self.command(self.tahoe, "-d", self.nodedir, cmd,
                                      *args, **kwargs)
        if not kwargs.get("ignore_stderr", False) and stderr != "":
            raise CommandFailed("command '%s' had stderr: %s" % (" ".join(args),
                                                                 stderr))
        return stdout

    def stop_old_node(self):
        print("tahoe stop", self.nodedir, "(force)")
        self.command(self.tahoe, "stop", self.nodedir, expected_rc=None)

    def start_node(self):
        print("tahoe start", self.nodedir)
        self.command(self.tahoe, "start", self.nodedir)
        time.sleep(5)

    def stop_node(self):
        print("tahoe stop", self.nodedir)
        self.command(self.tahoe, "stop", self.nodedir)

    def read_and_check(self, f):
        expected_md5_s = f[f.find(".")+1:]
        out = self.cli("get", "testgrid:" + f)
@ -204,19 +187,11 @@ class GridTester(object):
        fn = prefix + "." + md5sum
        return fn, data

    def run(self):
        self.stop_old_node()
        self.start_node()
        try:
            self.do_test()
        finally:
            self.stop_node()

def main():
    config = GridTesterOptions()
    config.parseOptions()
    gt = GridTester(config)
    gt.run()
    gt.do_test()

if __name__ == "__main__":
    main()

@ -8,6 +8,9 @@ if PY2:
    from future.builtins import str  # noqa: F401
from six.moves import cStringIO as StringIO

from twisted.python.filepath import (
    FilePath,
)
from twisted.internet import defer, reactor, protocol, error
from twisted.application import service, internet
from twisted.web import client as tw_client
@ -21,6 +24,10 @@ from allmydata.util import fileutil, pollmixin
from allmydata.util.fileutil import abspath_expanduser_unicode
from allmydata.util.encodingutil import get_filesystem_encoding

from allmydata.scripts.common import (
    write_introducer,
)

class StallableHTTPGetterDiscarder(tw_client.HTTPPageGetter, object):
    full_speed_ahead = False
    _bytes_so_far = 0
@ -180,16 +187,18 @@ class SystemFramework(pollmixin.PollMixin):
        self.introducer_furl = self.introducer.introducer_url

    def make_nodes(self):
        root = FilePath(self.testdir)
        self.nodes = []
        for i in range(self.numnodes):
            nodedir = os.path.join(self.testdir, "node%d" % i)
            os.mkdir(nodedir)
            f = open(os.path.join(nodedir, "tahoe.cfg"), "w")
            f.write("[client]\n"
                    "introducer.furl = %s\n"
                    "shares.happy = 1\n"
                    "[storage]\n"
                    % (self.introducer_furl,))
            nodedir = root.child("node%d" % (i,))
            private = nodedir.child("private")
            private.makedirs()
            write_introducer(nodedir, "default", self.introducer_url)
            config = (
                "[client]\n"
                "shares.happy = 1\n"
                "[storage]\n"
            )
            # the only tests for which we want the internal nodes to actually
            # retain shares are the ones where somebody's going to download
            # them.
@ -200,13 +209,13 @@ class SystemFramework(pollmixin.PollMixin):
                # for these tests, we tell the storage servers to pretend to
                # accept shares, but really just throw them out, since we're
                # only testing upload and not download.
                f.write("debug_discard = true\n")
                config += "debug_discard = true\n"
            if self.mode in ("receive",):
                # for this mode, the client-under-test gets all the shares,
                # so our internal nodes can refuse requests
                f.write("readonly = true\n")
            f.close()
            c = client.Client(basedir=nodedir)
                config += "readonly = true\n"
            nodedir.child("tahoe.cfg").setContent(config)
            c = client.Client(basedir=nodedir.path)
            c.setServiceParent(self)
            self.nodes.append(c)
        # the peers will start running, eventually they will connect to each
@ -235,16 +244,16 @@ this file are ignored.
        quiet = StringIO()
        create_node.create_node({'basedir': clientdir}, out=quiet)
        log.msg("DONE MAKING CLIENT")
        write_introducer(clientdir, "default", self.introducer_furl)
        # now replace tahoe.cfg
        # set webport=0 and then ask the node what port it picked.
        f = open(os.path.join(clientdir, "tahoe.cfg"), "w")
        f.write("[node]\n"
                "web.port = tcp:0:interface=127.0.0.1\n"
                "[client]\n"
                "introducer.furl = %s\n"
                "shares.happy = 1\n"
                "[storage]\n"
                % (self.introducer_furl,))
                )

        if self.mode in ("upload-self", "receive"):
            # accept and store shares, to trigger the memory consumption bugs

@ -1,6 +1,6 @@
from ...util.encodingutil import unicode_to_argv
from ...scripts import runner
from ..common_util import ReallyEqualMixin, run_cli
from ..common_util import ReallyEqualMixin, run_cli, run_cli_unicode

def parse_options(basedir, command, args):
    o = runner.Options()
@ -10,10 +10,41 @@ def parse_options(basedir, command, args):
    return o

class CLITestMixin(ReallyEqualMixin):
    def do_cli(self, verb, *args, **kwargs):
    """
    A mixin for use with ``GridTestMixin`` to execute CLI commands against
    nodes created by methods of that mixin.
    """
    def do_cli_unicode(self, verb, argv, client_num=0, **kwargs):
        """
        Run a Tahoe-LAFS CLI command.

        :param verb: See ``run_cli_unicode``.

        :param argv: See ``run_cli_unicode``.

        :param int client_num: The number of the ``GridTestMixin``-created
            node against which to execute the command.

        :param kwargs: Additional keyword arguments to pass to
            ``run_cli_unicode``.
        """
        # client_num is used to execute client CLI commands on a specific
        # client.
        client_num = kwargs.get("client_num", 0)
        client_dir = self.get_clientdir(i=client_num)
        nodeargs = [ u"--node-directory", client_dir ]
        return run_cli_unicode(verb, argv, nodeargs=nodeargs, **kwargs)


    def do_cli(self, verb, *args, **kwargs):
        """
        Like ``do_cli_unicode`` but works with ``bytes`` everywhere instead of
        ``unicode``.

        Where possible, prefer ``do_cli_unicode``.
        """
        # client_num is used to execute client CLI commands on a specific
        # client.
        client_num = kwargs.pop("client_num", 0)
        client_dir = unicode_to_argv(self.get_clientdir(i=client_num))
        nodeargs = [ "--node-directory", client_dir ]
        return run_cli(verb, nodeargs=nodeargs, *args, **kwargs)
        nodeargs = [ b"--node-directory", client_dir ]
        return run_cli(verb, *args, nodeargs=nodeargs, **kwargs)
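
The reordering in the last line is subtle: passing a keyword argument before *args in a call is legal but easy to misread, and it breaks if the callee ever binds that slot positionally. A hedged sketch of the safe shape, with run_cli_sketch invented for illustration:

    def run_cli_sketch(verb, *args, **kwargs):
        nodeargs = kwargs.pop("nodeargs", [])
        return [verb] + list(nodeargs) + list(args)

    # keyword after the positional arguments, matching the new call above:
    assert run_cli_sketch("ls", "path", nodeargs=[b"--node-directory", b"x"]) == \
        ["ls", b"--node-directory", b"x", "path"]
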

@ -1,105 +1,126 @@
import json
from mock import patch

from twisted.trial import unittest
from twisted.internet.defer import inlineCallbacks

from allmydata.util.encodingutil import unicode_to_argv
from allmydata.scripts.common import get_aliases
from allmydata.test.no_network import GridTestMixin
from .common import CLITestMixin
from ..common_util import skip_if_cannot_represent_argv
from allmydata.util import encodingutil

# see also test_create_alias

class ListAlias(GridTestMixin, CLITestMixin, unittest.TestCase):

    @inlineCallbacks
    def test_list(self):
        self.basedir = "cli/ListAlias/test_list"
    def _check_create_alias(self, alias, encoding):
        """
        Verify that ``tahoe create-alias`` can be used to create an alias named
        ``alias`` when argv is encoded using ``encoding``.

        :param unicode alias: The alias to try to create.

        :param NoneType|str encoding: The name of an encoding to force the
            ``create-alias`` implementation to use. This simulates the
            effects of setting LANG and doing other locale-foolishness without
            actually having to mess with this process's global locale state.
            If this is ``None`` then the encoding used will be ascii but the
            stdio objects given to the code under test will not declare any
            encoding (this is like Python 2 when stdio is not a tty).

        :return Deferred: A Deferred that fires with success if the alias can
            be created and that creation is reported on stdout appropriately
            encoded or with failure if something goes wrong.
        """
        self.basedir = self.mktemp()
        self.set_up_grid(oneshare=True)

        rc, stdout, stderr = yield self.do_cli(
            "create-alias",
            unicode_to_argv(u"tahoe"),
        # We can pass an encoding into the test utilities to invoke the code
        # under test but we can't pass such a parameter directly to the code
        # under test. Instead, that code looks at io_encoding. So,
        # monkey-patch that value to our desired value here. This is the code
        # that most directly takes the place of messing with LANG or the
        # locale module.
        self.patch(encodingutil, "io_encoding", encoding or "ascii")

        rc, stdout, stderr = yield self.do_cli_unicode(
            u"create-alias",
            [alias],
            encoding=encoding,
        )

        self.failUnless(unicode_to_argv(u"Alias 'tahoe' created") in stdout)
        self.failIf(stderr)
        aliases = get_aliases(self.get_clientdir())
        self.failUnless(u"tahoe" in aliases)
        self.failUnless(aliases[u"tahoe"].startswith("URI:DIR2:"))
        # Make sure the result of the create-alias command is as we want it to
        # be.
        self.assertEqual(u"Alias '{}' created\n".format(alias), stdout)
        self.assertEqual("", stderr)
        self.assertEqual(0, rc)

        rc, stdout, stderr = yield self.do_cli("list-aliases", "--json")
        # Make sure it had the intended side-effect, too - an alias created in
        # the node filesystem state.
        aliases = get_aliases(self.get_clientdir())
        self.assertIn(alias, aliases)
        self.assertTrue(aliases[alias].startswith(u"URI:DIR2:"))

        # And inspect the state via the user interface list-aliases command
        # too.
        rc, stdout, stderr = yield self.do_cli_unicode(
            u"list-aliases",
            [u"--json"],
            encoding=encoding,
        )

        self.assertEqual(0, rc)
        data = json.loads(stdout)
        self.assertIn(u"tahoe", data)
        data = data[u"tahoe"]
        self.assertIn("readwrite", data)
        self.assertIn("readonly", data)
        self.assertIn(alias, data)
        data = data[alias]
        self.assertIn(u"readwrite", data)
        self.assertIn(u"readonly", data)
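
For orientation, the JSON these assertions pick apart maps each alias to its capability pair; roughly like this, with the values invented:

    import json

    stdout = '{"tahoe": {"readwrite": "URI:DIR2:rw", "readonly": "URI:DIR2-RO:ro"}}'
    data = json.loads(stdout)
    assert "readwrite" in data["tahoe"] and "readonly" in data["tahoe"]
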

    @inlineCallbacks
    def test_list_unicode_mismatch_json(self):
        """
        pretty hack-y test, but we want to cover the 'except' branches on the
        Unicode error paths and I can't come up with a nicer way to trigger
        this
        """
        self.basedir = "cli/ListAlias/test_list_unicode_mismatch_json"
        skip_if_cannot_represent_argv(u"tahoe\u263A")
        self.set_up_grid(oneshare=True)

        rc, stdout, stderr = yield self.do_cli(
            "create-alias",
            unicode_to_argv(u"tahoe\u263A"),
    def test_list_none(self):
        """
        An alias composed of all ASCII-encodeable code points can be created when
        stdio aren't clearly marked with an encoding.
        """
        return self._check_create_alias(
            u"tahoe",
            encoding=None,
        )

        self.failUnless(unicode_to_argv(u"Alias 'tahoe\u263A' created") in stdout)
        self.failIf(stderr)

        booms = []

        def boom(out, indent=4):
            if not len(booms):
                booms.append(out)
                raise UnicodeEncodeError("foo", u"foo", 3, 5, "foo")
            return str(out)

        with patch("allmydata.scripts.tahoe_add_alias.json.dumps", boom):
            aliases = get_aliases(self.get_clientdir())
            self.failUnless(u"tahoe\u263A" in aliases)
            self.failUnless(aliases[u"tahoe\u263A"].startswith("URI:DIR2:"))

            rc, stdout, stderr = yield self.do_cli("list-aliases", "--json")

            self.assertEqual(1, rc)
            self.assertIn("could not be converted", stderr)

    @inlineCallbacks
    def test_list_unicode_mismatch(self):
        self.basedir = "cli/ListAlias/test_list_unicode_mismatch"
        skip_if_cannot_represent_argv(u"tahoe\u263A")
        self.set_up_grid(oneshare=True)

        rc, stdout, stderr = yield self.do_cli(
            "create-alias",
            unicode_to_argv(u"tahoe\u263A"),
    def test_list_ascii(self):
        """
        An alias composed of all ASCII-encodeable code points can be created when
        the active encoding is ASCII.
        """
        return self._check_create_alias(
            u"tahoe",
            encoding="ascii",
        )

        def boom(out):
            print("boom {}".format(out))
            return out
            raise UnicodeEncodeError("foo", u"foo", 3, 5, "foo")

        with patch("allmydata.scripts.tahoe_add_alias.unicode_to_output", boom):
            self.failUnless(unicode_to_argv(u"Alias 'tahoe\u263A' created") in stdout)
            self.failIf(stderr)
            aliases = get_aliases(self.get_clientdir())
            self.failUnless(u"tahoe\u263A" in aliases)
            self.failUnless(aliases[u"tahoe\u263A"].startswith("URI:DIR2:"))
    def test_list_latin_1(self):
        """
        An alias composed of all Latin-1-encodeable code points can be created
        when the active encoding is Latin-1.

            rc, stdout, stderr = yield self.do_cli("list-aliases")
        This is very similar to ``test_list_utf_8`` but the assumption of
        UTF-8 is nearly ubiquitous and explicitly exercising the codepaths
        with a UTF-8-incompatible encoding helps flush out unintentional UTF-8
        assumptions.
        """
        return self._check_create_alias(
            u"taho\N{LATIN SMALL LETTER E WITH ACUTE}",
            encoding="latin-1",
        )

            self.assertEqual(1, rc)
            self.assertIn("could not be converted", stderr)

    def test_list_utf_8(self):
        """
        An alias composed of all UTF-8-encodeable code points can be created when
        the active encoding is UTF-8.
        """
        return self._check_create_alias(
            u"tahoe\N{SNOWMAN}",
            encoding="utf-8",
        )

@ -20,14 +20,14 @@ from allmydata.scripts.common_http import socket_error
import allmydata.scripts.common_http

# Test that the scripts can be imported.
from allmydata.scripts import create_node, debug, tahoe_start, tahoe_restart, \
from allmydata.scripts import create_node, debug, \
    tahoe_add_alias, tahoe_backup, tahoe_check, tahoe_cp, tahoe_get, tahoe_ls, \
    tahoe_manifest, tahoe_mkdir, tahoe_mv, tahoe_put, tahoe_unlink, tahoe_webopen, \
    tahoe_stop, tahoe_daemonize, tahoe_run
_hush_pyflakes = [create_node, debug, tahoe_start, tahoe_restart, tahoe_stop,
    tahoe_run
_hush_pyflakes = [create_node, debug,
    tahoe_add_alias, tahoe_backup, tahoe_check, tahoe_cp, tahoe_get, tahoe_ls,
    tahoe_manifest, tahoe_mkdir, tahoe_mv, tahoe_put, tahoe_unlink, tahoe_webopen,
    tahoe_daemonize, tahoe_run]
    tahoe_run]

from allmydata.scripts import common
from allmydata.scripts.common import DEFAULT_ALIAS, get_aliases, get_alias, \
@ -626,18 +626,6 @@ class Help(unittest.TestCase):
        help = str(cli.ListAliasesOptions())
        self.failUnlessIn("[options]", help)

    def test_start(self):
        help = str(tahoe_start.StartOptions())
        self.failUnlessIn("[options] [NODEDIR [twistd-options]]", help)

    def test_stop(self):
        help = str(tahoe_stop.StopOptions())
        self.failUnlessIn("[options] [NODEDIR]", help)

    def test_restart(self):
        help = str(tahoe_restart.RestartOptions())
        self.failUnlessIn("[options] [NODEDIR [twistd-options]]", help)

    def test_run(self):
        help = str(tahoe_run.RunOptions())
        self.failUnlessIn("[options] [NODEDIR [twistd-options]]", help)
@ -1269,82 +1257,69 @@ class Options(ReallyEqualMixin, unittest.TestCase):
        self.failUnlessIn(allmydata.__full_version__, stdout.getvalue())
        # but "tahoe SUBCOMMAND --version" should be rejected
        self.failUnlessRaises(usage.UsageError, self.parse,
                              ["start", "--version"])
                              ["run", "--version"])
        self.failUnlessRaises(usage.UsageError, self.parse,
                              ["start", "--version-and-path"])
                              ["run", "--version-and-path"])

    def test_quiet(self):
        # accepted as an overall option, but not on subcommands
        o = self.parse(["--quiet", "start"])
        o = self.parse(["--quiet", "run"])
        self.failUnless(o.parent["quiet"])
        self.failUnlessRaises(usage.UsageError, self.parse,
                              ["start", "--quiet"])
                              ["run", "--quiet"])

    def test_basedir(self):
        # accept a --node-directory option before the verb, or a --basedir
        # option after, or a basedir argument after, but none in the wrong
        # place, and not more than one of the three.
        o = self.parse(["start"])

        # Here is some option twistd recognizes but we don't. Depending on
        # where it appears, it should be passed through to twistd. It doesn't
        # really matter which option it is (it doesn't even have to be a valid
        # option). This test does not actually run any of the twistd argument
        # parsing.
        some_twistd_option = "--spew"

        o = self.parse(["run"])
        self.failUnlessReallyEqual(o["basedir"], os.path.join(fileutil.abspath_expanduser_unicode(u"~"),
                                                              u".tahoe"))
        o = self.parse(["start", "here"])
        o = self.parse(["run", "here"])
        self.failUnlessReallyEqual(o["basedir"], fileutil.abspath_expanduser_unicode(u"here"))
        o = self.parse(["start", "--basedir", "there"])
        o = self.parse(["run", "--basedir", "there"])
        self.failUnlessReallyEqual(o["basedir"], fileutil.abspath_expanduser_unicode(u"there"))
        o = self.parse(["--node-directory", "there", "start"])
        o = self.parse(["--node-directory", "there", "run"])
        self.failUnlessReallyEqual(o["basedir"], fileutil.abspath_expanduser_unicode(u"there"))

        o = self.parse(["start", "here", "--nodaemon"])
        o = self.parse(["run", "here", some_twistd_option])
        self.failUnlessReallyEqual(o["basedir"], fileutil.abspath_expanduser_unicode(u"here"))

        self.failUnlessRaises(usage.UsageError, self.parse,
                              ["--basedir", "there", "start"])
                              ["--basedir", "there", "run"])
        self.failUnlessRaises(usage.UsageError, self.parse,
                              ["start", "--node-directory", "there"])
                              ["run", "--node-directory", "there"])

        self.failUnlessRaises(usage.UsageError, self.parse,
                              ["--node-directory=there",
                               "start", "--basedir=here"])
                               "run", "--basedir=here"])
        self.failUnlessRaises(usage.UsageError, self.parse,
                              ["start", "--basedir=here", "anywhere"])
                              ["run", "--basedir=here", "anywhere"])
        self.failUnlessRaises(usage.UsageError, self.parse,
                              ["--node-directory=there",
                               "start", "anywhere"])
                               "run", "anywhere"])
        self.failUnlessRaises(usage.UsageError, self.parse,
                              ["--node-directory=there",
                               "start", "--basedir=here", "anywhere"])
                               "run", "--basedir=here", "anywhere"])

        self.failUnlessRaises(usage.UsageError, self.parse,
                              ["--node-directory=there", "start", "--nodaemon"])
                              ["--node-directory=there", "run", some_twistd_option])
        self.failUnlessRaises(usage.UsageError, self.parse,
                              ["start", "--basedir=here", "--nodaemon"])
                              ["run", "--basedir=here", some_twistd_option])


class Stop(unittest.TestCase):
    def test_non_numeric_pid(self):
        """
        If the pidfile exists but does not contain a numeric value, a complaint to
        this effect is written to stderr and the non-success result is
        returned.
        """
        basedir = FilePath(self.mktemp().decode("ascii"))
        basedir.makedirs()
        basedir.child(u"twistd.pid").setContent(b"foo")
class Run(unittest.TestCase):

        config = tahoe_stop.StopOptions()
        config.stdout = StringIO()
        config.stderr = StringIO()
        config['basedir'] = basedir.path

        result_code = tahoe_stop.stop(config)
        self.assertEqual(2, result_code)
        self.assertIn("invalid PID file", config.stderr.getvalue())


class Start(unittest.TestCase):

    @patch('allmydata.scripts.run_common.os.chdir')
    @patch('allmydata.scripts.run_common.twistd')
    @patch('allmydata.scripts.tahoe_run.os.chdir')
    @patch('allmydata.scripts.tahoe_run.twistd')
    def test_non_numeric_pid(self, mock_twistd, chdir):
        """
        If the pidfile exists but does not contain a numeric value, a complaint to
@ -1355,13 +1330,13 @@ class Start(unittest.TestCase):
        basedir.child(u"twistd.pid").setContent(b"foo")
        basedir.child(u"tahoe-client.tac").setContent(b"")

        config = tahoe_daemonize.DaemonizeOptions()
        config = tahoe_run.RunOptions()
        config.stdout = StringIO()
        config.stderr = StringIO()
        config['basedir'] = basedir.path
        config.twistd_args = []

        result_code = tahoe_daemonize.daemonize(config)
        result_code = tahoe_run.run(config)
        self.assertIn("invalid PID file", config.stderr.getvalue())
        self.assertTrue(len(mock_twistd.mock_calls), 1)
        self.assertEqual(mock_twistd.mock_calls[0][0], 'runApp')
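
The "invalid PID file" branch these tests drive reduces to an integer parse of the pidfile contents. A minimal sketch under that assumption; parse_pidfile here is invented, the real logic lives in allmydata.scripts:

    def parse_pidfile(contents):
        try:
            return int(contents)
        except ValueError:
            raise ValueError("invalid PID file")

    try:
        parse_pidfile(b"foo")
    except ValueError as e:
        assert "invalid PID file" in str(e)
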

@ -661,7 +661,7 @@ starting copy, 2 files, 1 directories
        # This test ensures that tahoe will copy a file from the grid to
        # a local directory without a specified file name.
        # https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2027
        self.basedir = "cli/Cp/cp_verbose"
        self.basedir = "cli/Cp/ticket_2027"
        self.set_up_grid(oneshare=True)

        # Write a test file, which we'll copy to the grid.

@ -52,13 +52,8 @@ class Config(unittest.TestCase):
        create_node.write_node_config(f, opts)
        create_node.write_client_config(f, opts)

        config = configutil.get_config(fname)
        # should succeed, no exceptions
        configutil.validate_config(
            fname,
            config,
            client._valid_config(),
        )
        client.read_config(d, "")

    @defer.inlineCallbacks
    def test_client(self):

@ -1,202 +0,0 @@
import os
from io import (
    BytesIO,
)
from os.path import dirname, join
from mock import patch, Mock
from six.moves import StringIO
from sys import getfilesystemencoding
from twisted.trial import unittest
from allmydata.scripts import runner
from allmydata.scripts.run_common import (
    identify_node_type,
    DaemonizeTahoeNodePlugin,
    MyTwistdConfig,
)
from allmydata.scripts.tahoe_daemonize import (
    DaemonizeOptions,
)


class Util(unittest.TestCase):
    def setUp(self):
        self.twistd_options = MyTwistdConfig()
        self.twistd_options.parseOptions(["DaemonizeTahoeNode"])
        self.options = self.twistd_options.subOptions

    def test_node_type_nothing(self):
        tmpdir = self.mktemp()
        base = dirname(tmpdir).decode(getfilesystemencoding())

        t = identify_node_type(base)

        self.assertIs(None, t)

    def test_node_type_introducer(self):
        tmpdir = self.mktemp()
        base = dirname(tmpdir).decode(getfilesystemencoding())
        with open(join(dirname(tmpdir), 'introducer.tac'), 'w') as f:
            f.write("test placeholder")

        t = identify_node_type(base)

        self.assertEqual(u"introducer", t)

    def test_daemonize(self):
        tmpdir = self.mktemp()
        plug = DaemonizeTahoeNodePlugin('client', tmpdir)

        with patch('twisted.internet.reactor') as r:
            def call(fn, *args, **kw):
                fn()
            r.stop = lambda: None
            r.callWhenRunning = call
            service = plug.makeService(self.options)
            service.parent = Mock()
            service.startService()

        self.assertTrue(service is not None)

    def test_daemonize_no_keygen(self):
        tmpdir = self.mktemp()
        stderr = BytesIO()
        plug = DaemonizeTahoeNodePlugin('key-generator', tmpdir)

        with patch('twisted.internet.reactor') as r:
            def call(fn, *args, **kw):
                d = fn()
                d.addErrback(lambda _: None)  # ignore the error we'll trigger
            r.callWhenRunning = call
            service = plug.makeService(self.options)
            service.stderr = stderr
            service.parent = Mock()
            # we'll raise ValueError because there's no key-generator
            # .. BUT we do this in an async function called via
            # "callWhenRunning" .. hence using a hook
            d = service.set_hook('running')
            service.startService()
            def done(f):
                self.assertIn(
                    "key-generator support removed",
                    stderr.getvalue(),
                )
                return None
            d.addBoth(done)
            return d

    def test_daemonize_unknown_nodetype(self):
        tmpdir = self.mktemp()
        plug = DaemonizeTahoeNodePlugin('an-unknown-service', tmpdir)

        with patch('twisted.internet.reactor') as r:
            def call(fn, *args, **kw):
                fn()
            r.stop = lambda: None
            r.callWhenRunning = call
            service = plug.makeService(self.options)
            service.parent = Mock()
            with self.assertRaises(ValueError) as ctx:
                service.startService()
            self.assertIn(
                "unknown nodetype",
                str(ctx.exception)
            )

    def test_daemonize_options(self):
        parent = runner.Options()
        opts = DaemonizeOptions()
        opts.parent = parent
        opts.parseArgs()

        # just gratuitous coverage, ensuring we don't blow up on
        # these methods.
        opts.getSynopsis()
        opts.getUsage()


class RunDaemonizeTests(unittest.TestCase):

    def setUp(self):
        # no test should change our working directory
        self._working = os.path.abspath('.')
        d = super(RunDaemonizeTests, self).setUp()
        self._reactor = patch('twisted.internet.reactor')
        self._reactor.stop = lambda: None
        self._twistd = patch('allmydata.scripts.run_common.twistd')
        self.node_dir = self.mktemp()
        os.mkdir(self.node_dir)
        for cm in [self._reactor, self._twistd]:
            cm.__enter__()
        return d

    def tearDown(self):
        d = super(RunDaemonizeTests, self).tearDown()
        for cm in [self._reactor, self._twistd]:
            cm.__exit__(None, None, None)
        # Note: if you raise an exception (e.g. via self.assertEqual
        # or raise RuntimeError) it is apparently just ignored and the
        # test passes anyway...
        if self._working != os.path.abspath('.'):
            print("WARNING: a test just changed the working dir; putting it back")
            os.chdir(self._working)
        return d

    def _placeholder_nodetype(self, nodetype):
        fname = join(self.node_dir, '{}.tac'.format(nodetype))
        with open(fname, 'w') as f:
            f.write("test placeholder")

    def test_daemonize_defaults(self):
        self._placeholder_nodetype('introducer')

        config = runner.parse_or_exit_with_explanation([
            # have to do this so the tests don't muck around in
            # ~/.tahoe (the default)
            '--node-directory', self.node_dir,
            'daemonize',
        ])
        i, o, e = StringIO(), StringIO(), StringIO()
        with patch('allmydata.scripts.runner.sys') as s:
            exit_code = [None]
            def _exit(code):
                exit_code[0] = code
            s.exit = _exit
            runner.dispatch(config, i, o, e)

            self.assertEqual(0, exit_code[0])

    def test_daemonize_wrong_nodetype(self):
        self._placeholder_nodetype('invalid')

        config = runner.parse_or_exit_with_explanation([
            # have to do this so the tests don't muck around in
            # ~/.tahoe (the default)
            '--node-directory', self.node_dir,
            'daemonize',
        ])
        i, o, e = StringIO(), StringIO(), StringIO()
        with patch('allmydata.scripts.runner.sys') as s:
            exit_code = [None]
            def _exit(code):
                exit_code[0] = code
            s.exit = _exit
            runner.dispatch(config, i, o, e)

            self.assertEqual(0, exit_code[0])

    def test_daemonize_run(self):
        self._placeholder_nodetype('client')

        config = runner.parse_or_exit_with_explanation([
            # have to do this so the tests don't muck around in
            # ~/.tahoe (the default)
            '--node-directory', self.node_dir,
            'daemonize',
        ])
        with patch('allmydata.scripts.runner.sys') as s:
            exit_code = [None]
            def _exit(code):
                exit_code[0] = code
            s.exit = _exit
            from allmydata.scripts.tahoe_daemonize import daemonize
            daemonize(config)

@ -8,7 +8,9 @@ from twisted.internet import defer
from ..common_util import run_cli
from ..no_network import GridTestMixin
from .common import CLITestMixin

from ...client import (
    read_config,
)

class _FakeWormhole(object):

@ -81,9 +83,19 @@ class Join(GridTestMixin, CLITestMixin, unittest.TestCase):
        )

        self.assertEqual(0, rc)

        config = read_config(node_dir, u"")
        self.assertIn(
            "pb://foo",
            set(
                furl
                for (furl, cache)
                in config.get_introducer_configuration().values()
            ),
        )

        with open(join(node_dir, 'tahoe.cfg'), 'r') as f:
            config = f.read()
        self.assertIn("pb://foo", config)
        self.assertIn(u"somethinghopefullyunique", config)

    @defer.inlineCallbacks
127
src/allmydata/test/cli/test_run.py
Normal file
@ -0,0 +1,127 @@
"""
Tests for ``allmydata.scripts.tahoe_run``.
"""

from six.moves import (
    StringIO,
)

from testtools.matchers import (
    Contains,
    Equals,
)

from twisted.python.filepath import (
    FilePath,
)
from twisted.internet.testing import (
    MemoryReactor,
)
from twisted.internet.test.modulehelpers import (
    AlternateReactor,
)

from ...scripts.tahoe_run import (
    DaemonizeTheRealService,
)

from ...scripts.runner import (
    parse_options
)
from ..common import (
    SyncTestCase,
)

class DaemonizeTheRealServiceTests(SyncTestCase):
    """
    Tests for ``DaemonizeTheRealService``.
    """
    def _verify_error(self, config, expected):
        """
        Assert that when ``DaemonizeTheRealService`` is started using the given
        configuration it writes the given message to stderr and stops the
        reactor.

        :param bytes config: The contents of a ``tahoe.cfg`` file to give to
            the service.

        :param bytes expected: A string to assert appears in stderr after the
            service starts.
        """
        nodedir = FilePath(self.mktemp())
        nodedir.makedirs()
        nodedir.child("tahoe.cfg").setContent(config)
        nodedir.child("tahoe-client.tac").touch()

        options = parse_options(["run", nodedir.path])
        stdout = options.stdout = StringIO()
        stderr = options.stderr = StringIO()
        run_options = options.subOptions

        reactor = MemoryReactor()
        with AlternateReactor(reactor):
            service = DaemonizeTheRealService(
                "client",
                nodedir.path,
                run_options,
            )
            service.startService()

            # We happen to know that the service uses reactor.callWhenRunning
            # to schedule all its work (though I couldn't tell you *why*).
            # Make sure those scheduled calls happen.
            waiting = reactor.whenRunningHooks[:]
            del reactor.whenRunningHooks[:]
            for f, a, k in waiting:
                f(*a, **k)

        self.assertThat(
            reactor.hasStopped,
            Equals(True),
        )

        self.assertThat(
            stdout.getvalue(),
            Equals(""),
        )

        self.assertThat(
            stderr.getvalue(),
            Contains(expected),
        )

    def test_unknown_config(self):
        """
        If there are unknown items in the node configuration file then a short
        message introduced with ``"Configuration error:"`` is written to
        stderr.
        """
        self._verify_error("[invalid-section]\n", "Configuration error:")

    def test_port_assignment_required(self):
        """
        If ``tub.port`` is configured to use port 0 then a short message rejecting
        this configuration is written to stderr.
        """
        self._verify_error(
            """
            [node]
            tub.port = 0
            """,
            "tub.port cannot be 0",
        )

    def test_privacy_error(self):
        """
        If ``reveal-IP-address`` is set to false and the tub is not configured in
        a way that avoids revealing the node's IP address, a short message
        about privacy is written to stderr.
        """
        self._verify_error(
            """
            [node]
            tub.port = AUTO
            reveal-IP-address = false
            """,
            "Privacy requested",
        )
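
The MemoryReactor draining trick used in _verify_error above is reusable on its own; a self-contained sketch that assumes only Twisted:

    from twisted.internet.testing import MemoryReactor

    reactor = MemoryReactor()
    fired = []
    reactor.callWhenRunning(fired.append, "ran")
    # Run the queued callWhenRunning calls without starting a real reactor:
    waiting = reactor.whenRunningHooks[:]
    del reactor.whenRunningHooks[:]
    for f, a, k in waiting:
        f(*a, **k)
    assert fired == ["ran"]
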

@ -1,273 +0,0 @@
import os
import shutil
import subprocess
from os.path import join
from mock import patch
from six.moves import StringIO
from functools import partial

from twisted.trial import unittest
from allmydata.scripts import runner


#@patch('twisted.internet.reactor')
@patch('allmydata.scripts.tahoe_start.subprocess')
class RunStartTests(unittest.TestCase):

    def setUp(self):
        d = super(RunStartTests, self).setUp()
        self.node_dir = self.mktemp()
        os.mkdir(self.node_dir)
        return d

    def _placeholder_nodetype(self, nodetype):
        fname = join(self.node_dir, '{}.tac'.format(nodetype))
        with open(fname, 'w') as f:
            f.write("test placeholder")

    def _pid_file(self, pid):
        fname = join(self.node_dir, 'twistd.pid')
        with open(fname, 'w') as f:
            f.write(u"{}\n".format(pid))

    def _logs(self, logs):
        os.mkdir(join(self.node_dir, 'logs'))
        fname = join(self.node_dir, 'logs', 'twistd.log')
        with open(fname, 'w') as f:
            f.write(logs)

    def test_start_defaults(self, _subprocess):
        self._placeholder_nodetype('client')
        self._pid_file(1234)
        self._logs('one log\ntwo log\nred log\nblue log\n')

        config = runner.parse_or_exit_with_explanation([
            # have to do this so the tests don't muck around in
            # ~/.tahoe (the default)
            '--node-directory', self.node_dir,
            'start',
        ])
        i, o, e = StringIO(), StringIO(), StringIO()
        try:
            with patch('allmydata.scripts.tahoe_start.os'):
                with patch('allmydata.scripts.runner.sys') as s:
                    exit_code = [None]
                    def _exit(code):
                        exit_code[0] = code
                    s.exit = _exit

                    def launch(*args, **kw):
                        with open(join(self.node_dir, 'logs', 'twistd.log'), 'a') as f:
                            f.write('client running\n')  # "the magic"
                    _subprocess.check_call = launch
                    runner.dispatch(config, i, o, e)
        except Exception:
            pass

        self.assertEqual([0], exit_code)
        self.assertTrue('Node has started' in o.getvalue())

    def test_start_fails(self, _subprocess):
        self._placeholder_nodetype('client')
        self._logs('existing log line\n')

        config = runner.parse_or_exit_with_explanation([
            # have to do this so the tests don't muck around in
            # ~/.tahoe (the default)
            '--node-directory', self.node_dir,
            'start',
        ])

        i, o, e = StringIO(), StringIO(), StringIO()
        with patch('allmydata.scripts.tahoe_start.time') as t:
            with patch('allmydata.scripts.runner.sys') as s:
                exit_code = [None]
                def _exit(code):
                    exit_code[0] = code
                s.exit = _exit

                thetime = [0]
                def _time():
                    thetime[0] += 0.1
                    return thetime[0]
                t.time = _time

                def launch(*args, **kw):
                    with open(join(self.node_dir, 'logs', 'twistd.log'), 'a') as f:
                        f.write('a new log line\n')
                _subprocess.check_call = launch

                runner.dispatch(config, i, o, e)

        # should print out the collected logs and an error-code
        self.assertTrue("a new log line" in o.getvalue())
        self.assertEqual([1], exit_code)

    def test_start_subprocess_fails(self, _subprocess):
        self._placeholder_nodetype('client')
        self._logs('existing log line\n')

        config = runner.parse_or_exit_with_explanation([
            # have to do this so the tests don't muck around in
            # ~/.tahoe (the default)
            '--node-directory', self.node_dir,
            'start',
        ])

        i, o, e = StringIO(), StringIO(), StringIO()
        with patch('allmydata.scripts.tahoe_start.time'):
            with patch('allmydata.scripts.runner.sys') as s:
                # undo patch for the exception-class
                _subprocess.CalledProcessError = subprocess.CalledProcessError
                exit_code = [None]
                def _exit(code):
                    exit_code[0] = code
                s.exit = _exit

                def launch(*args, **kw):
                    raise subprocess.CalledProcessError(42, "tahoe")
                _subprocess.check_call = launch

                runner.dispatch(config, i, o, e)

        # should get our "odd" error-code
        self.assertEqual([42], exit_code)

    def test_start_help(self, _subprocess):
        self._placeholder_nodetype('client')

        std = StringIO()
        with patch('sys.stdout') as stdo:
            stdo.write = std.write
            try:
                runner.parse_or_exit_with_explanation([
                    # have to do this so the tests don't muck around in
                    # ~/.tahoe (the default)
                    '--node-directory', self.node_dir,
                    'start',
                    '--help',
                ], stdout=std)
                self.fail("Should get exit")
            except SystemExit as e:
                print(e)

        self.assertIn(
            "Usage:",
            std.getvalue()
        )

    def test_start_unknown_node_type(self, _subprocess):
        self._placeholder_nodetype('bogus')

        config = runner.parse_or_exit_with_explanation([
            # have to do this so the tests don't muck around in
            # ~/.tahoe (the default)
            '--node-directory', self.node_dir,
            'start',
        ])

        i, o, e = StringIO(), StringIO(), StringIO()
        with patch('allmydata.scripts.runner.sys') as s:
            exit_code = [None]
            def _exit(code):
                exit_code[0] = code
            s.exit = _exit

            runner.dispatch(config, i, o, e)

        # should print out the collected logs and an error-code
        self.assertIn(
            "is not a recognizable node directory",
            e.getvalue()
        )
        self.assertEqual([1], exit_code)

    def test_start_nodedir_not_dir(self, _subprocess):
        shutil.rmtree(self.node_dir)
        assert not os.path.isdir(self.node_dir)

        config = runner.parse_or_exit_with_explanation([
            # have to do this so the tests don't muck around in
            # ~/.tahoe (the default)
            '--node-directory', self.node_dir,
            'start',
        ])

        i, o, e = StringIO(), StringIO(), StringIO()
        with patch('allmydata.scripts.runner.sys') as s:
            exit_code = [None]
            def _exit(code):
                exit_code[0] = code
            s.exit = _exit

            runner.dispatch(config, i, o, e)

        # should print out the collected logs and an error-code
        self.assertIn(
            "does not look like a directory at all",
            e.getvalue()
        )
        self.assertEqual([1], exit_code)


class RunTests(unittest.TestCase):
    """
    Tests confirming end-user behavior of CLI commands
    """

    def setUp(self):
        d = super(RunTests, self).setUp()
        self.addCleanup(partial(os.chdir, os.getcwd()))
        self.node_dir = self.mktemp()
        os.mkdir(self.node_dir)
        return d

    @patch('twisted.internet.reactor')
    def test_run_invalid_config(self, reactor):
        """
        Configuration that's invalid should be obvious to the user
        """

        def cwr(fn, *args, **kw):
            fn()

        def stop(*args, **kw):
            stopped.append(None)
        stopped = []
        reactor.callWhenRunning = cwr
        reactor.stop = stop

        with open(os.path.join(self.node_dir, "client.tac"), "w") as f:
            f.write('test')

        with open(os.path.join(self.node_dir, "tahoe.cfg"), "w") as f:
            f.write(
                "[invalid section]\n"
                "foo = bar\n"
            )

        config = runner.parse_or_exit_with_explanation([
            # have to do this so the tests don't muck around in
            # ~/.tahoe (the default)
            '--node-directory', self.node_dir,
            'run',
        ])

        i, o, e = StringIO(), StringIO(), StringIO()
        d = runner.dispatch(config, i, o, e)

        self.assertFailure(d, SystemExit)

        output = e.getvalue()
        # should print out the collected logs and an error-code
        self.assertIn(
            "invalid section",
            output,
        )
        self.assertIn(
            "Configuration error:",
            output,
        )
        # ensure reactor.stop was actually called
        self.assertEqual([None], stopped)
        return d

@ -5,7 +5,6 @@ __all__ = [
    "on_stdout",
    "on_stdout_and_stderr",
    "on_different",
    "wait_for_exit",
]

import os
@ -14,8 +13,11 @@ from errno import ENOENT

import attr

from eliot import (
    log_call,
)

from twisted.internet.error import (
    ProcessDone,
    ProcessTerminated,
    ProcessExitedAlready,
)
@ -25,9 +27,6 @@ from twisted.internet.interfaces import (
from twisted.python.filepath import (
    FilePath,
)
from twisted.python.runtime import (
    platform,
)
from twisted.internet.protocol import (
    Protocol,
    ProcessProtocol,
@ -42,11 +41,9 @@ from twisted.internet.task import (
from ..client import (
    _Client,
)
from ..scripts.tahoe_stop import (
    COULD_NOT_STOP,
)
from ..util.eliotutil import (
    inline_callbacks,
    log_call_deferred,
)

class Expect(Protocol, object):
@ -156,6 +153,7 @@ class CLINodeAPI(object):
            env=os.environ,
        )

    @log_call(action_type="test:cli-api:run", include_args=["extra_tahoe_args"])
    def run(self, protocol, extra_tahoe_args=()):
        """
        Start the node running.
@ -176,28 +174,21 @@ class CLINodeAPI(object):
            if ENOENT != e.errno:
                raise

    def stop(self, protocol):
        self._execute(
            protocol,
            [u"stop", self.basedir.asTextMode().path],
        )
    @log_call_deferred(action_type="test:cli-api:stop")
    def stop(self):
        return self.stop_and_wait()

    @log_call_deferred(action_type="test:cli-api:stop-and-wait")
    @inline_callbacks
    def stop_and_wait(self):
        if platform.isWindows():
            # On Windows there is no PID file and no "tahoe stop".
            if self.process is not None:
                while True:
                    try:
                        self.process.signalProcess("TERM")
                    except ProcessExitedAlready:
                        break
                    else:
                        yield deferLater(self.reactor, 0.1, lambda: None)
        else:
            protocol, ended = wait_for_exit()
            self.stop(protocol)
            yield ended
        if self.process is not None:
            while True:
                try:
                    self.process.signalProcess("TERM")
                except ProcessExitedAlready:
                    break
                else:
                    yield deferLater(self.reactor, 0.1, lambda: None)

    def active(self):
        # By writing this file, we get two minutes before the client will
@ -208,28 +199,9 @@ class CLINodeAPI(object):
    def _check_cleanup_reason(self, reason):
        # Let it fail because the process has already exited.
        reason.trap(ProcessTerminated)
        if reason.value.exitCode != COULD_NOT_STOP:
            return reason
        return None

    def cleanup(self):
        stopping = self.stop_and_wait()
        stopping.addErrback(self._check_cleanup_reason)
        return stopping


class _WaitForEnd(ProcessProtocol, object):
    def __init__(self, ended):
        self._ended = ended

    def processEnded(self, reason):
        if reason.check(ProcessDone):
            self._ended.callback(None)
        else:
            self._ended.errback(reason)


def wait_for_exit():
    ended = Deferred()
    protocol = _WaitForEnd(ended)
    return protocol, ended
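
For context, the removed wait_for_exit helper paired a ProcessProtocol with a Deferred that fires on clean exit. It was used roughly like this; a sketch assuming a running reactor and a POSIX system:

    from twisted.internet import reactor

    protocol, ended = wait_for_exit()
    reactor.spawnProcess(protocol, b"/bin/true", [b"true"])
    ended.addCallback(lambda ignored: print("process exited cleanly"))
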
|
||||
|
@ -11,6 +11,8 @@ __all__ = [
|
||||
"skipIf",
|
||||
]
|
||||
|
||||
from past.builtins import chr as byteschr
|
||||
|
||||
import os, random, struct
|
||||
import six
|
||||
import tempfile
|
||||
@@ -62,10 +64,16 @@ from twisted.internet.endpoints import AdoptedStreamServerEndpoint
 from twisted.trial.unittest import TestCase as _TrialTestCase

 from allmydata import uri
-from allmydata.interfaces import IMutableFileNode, IImmutableFileNode,\
-                                 NotEnoughSharesError, ICheckable, \
-                                 IMutableUploadable, SDMF_VERSION, \
-                                 MDMF_VERSION
+from allmydata.interfaces import (
+    IMutableFileNode,
+    IImmutableFileNode,
+    NotEnoughSharesError,
+    ICheckable,
+    IMutableUploadable,
+    SDMF_VERSION,
+    MDMF_VERSION,
+    IAddressFamily,
+)
 from allmydata.check_results import CheckResults, CheckAndRepairResults, \
                                     DeepCheckResults, DeepCheckAndRepairResults
 from allmydata.storage_client import StubServer
@@ -81,6 +89,9 @@ from allmydata.client import (
     config_from_string,
     create_client_from_config,
 )
+from allmydata.scripts.common import (
+    write_introducer,
+)

 from ..crypto import (
     ed25519,
@@ -211,7 +222,7 @@ class UseNode(object):

     :ivar FilePath basedir: The base directory of the node.

-    :ivar bytes introducer_furl: The introducer furl with which to
+    :ivar str introducer_furl: The introducer furl with which to
         configure the client.

     :ivar dict[bytes, bytes] node_config: Configuration items for the *node*
@@ -221,8 +232,9 @@ class UseNode(object):
     """
     plugin_config = attr.ib()
     storage_plugin = attr.ib()
-    basedir = attr.ib()
-    introducer_furl = attr.ib()
+    basedir = attr.ib(validator=attr.validators.instance_of(FilePath))
+    introducer_furl = attr.ib(validator=attr.validators.instance_of(str),
+                              converter=six.ensure_str)
     node_config = attr.ib(default=attr.Factory(dict))

     config = attr.ib(default=None)
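
In attrs, a `converter` runs before the `validator`, so the new `introducer_furl` attribute accepts bytes, coerces them with `six.ensure_str`, and only then checks the result against `str`. A minimal illustration (the class below is hypothetical, not part of the Tahoe code):

# Converter runs first, so bytes input satisfies the str validator.
import attr
import six

@attr.s
class Example(object):
    furl = attr.ib(
        validator=attr.validators.instance_of(str),
        converter=six.ensure_str,
    )

assert Example(b"pb://demo").furl == "pb://demo"  # bytes coerced to str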
@@ -246,6 +258,11 @@ class UseNode(object):
                 config=format_config_items(self.plugin_config),
             )

+        write_introducer(
+            self.basedir,
+            "default",
+            self.introducer_furl,
+        )
         self.config = config_from_string(
             self.basedir.asTextMode().path,
             "tub.port",
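
`write_introducer` replaces the `introducer.furl` item that the next hunk removes from the generated `tahoe.cfg`: the fURL is recorded under the node's private directory instead of in the main config file. A hedged usage sketch (the base directory and fURL are made up; the call signature matches the one used above):

# Record an introducer fURL for a node via write_introducer.
from twisted.python.filepath import FilePath
from allmydata.scripts.common import write_introducer

basedir = FilePath("/tmp/demo-node")
basedir.child("private").makedirs(ignoreExistingDirectory=True)
write_introducer(basedir, "default", "pb://key@tcp:127.0.0.1:12345/swissnum")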
@@ -254,11 +271,9 @@ class UseNode(object):
 {node_config}

 [client]
-introducer.furl = {furl}
 storage.plugins = {storage_plugin}
 {plugin_config_section}
 """.format(
-    furl=self.introducer_furl,
     storage_plugin=self.storage_plugin,
     node_config=format_config_items(self.node_config),
     plugin_config_section=plugin_config_section,
@@ -1050,7 +1065,7 @@ def _corrupt_share_data_last_byte(data, debug=False):
     sharedatasize = struct.unpack(">Q", data[0x0c+0x08:0x0c+0x0c+8])[0]
     offset = 0x0c+0x44+sharedatasize-1

-    newdata = data[:offset] + chr(ord(data[offset])^0xFF) + data[offset+1:]
+    newdata = data[:offset] + byteschr(ord(data[offset:offset+1])^0xFF) + data[offset+1:]
     if debug:
         log.msg("testing: flipping all bits of byte at offset %d: %r, newdata: %r" % (offset, data[offset], newdata[offset]))
     return newdata
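
The switch from `chr(ord(data[offset]))` to `byteschr(ord(data[offset:offset+1]))` here and in the next hunk is a Python 3 porting fix: indexing a bytes object yields an `int` on Python 3, while a one-byte slice yields `bytes` on both versions, and `past.builtins.chr` returns a byte string rather than text. The behavior in isolation:

# On Python 3, data[i] is an int, but data[i:i+1] is bytes on both 2 and 3.
from past.builtins import chr as byteschr

data = b"\x00\x01\x02"
flipped = byteschr(ord(data[1:2]) ^ 0xFF)  # -> b'\xfe'
newdata = data[:1] + flipped + data[2:]
assert newdata == b"\x00\xfe\x02"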
@@ -1078,7 +1093,7 @@ def _corrupt_crypttext_hash_tree_byte_x221(data, debug=False):
     assert sharevernum in (1, 2), "This test is designed to corrupt immutable shares of v1 or v2 in specific ways."
     if debug:
         log.msg("original data: %r" % (data,))
-    return data[:0x0c+0x221] + chr(ord(data[0x0c+0x221])^0x02) + data[0x0c+0x2210+1:]
+    return data[:0x0c+0x221] + byteschr(ord(data[0x0c+0x221:0x0c+0x221+1])^0x02) + data[0x0c+0x2210+1:]

 def _corrupt_block_hashes(data, debug=False):
     """Scramble the file data -- the field containing the block hash tree
@@ -1138,6 +1153,28 @@ def _corrupt_uri_extension(data, debug=False):
     return corrupt_field(data, 0x0c+uriextoffset, uriextlen)


+
+@attr.s
+@implementer(IAddressFamily)
+class ConstantAddresses(object):
+    """
+    Pretend to provide support for some address family but just hand out
+    canned responses.
+    """
+    _listener = attr.ib(default=None)
+    _handler = attr.ib(default=None)
+
+    def get_listener(self):
+        if self._listener is None:
+            raise Exception("{!r} has no listener.")
+        return self._listener
+
+    def get_client_endpoint(self):
+        if self._handler is None:
+            raise Exception("{!r} has no client endpoint.")
+        return self._handler
+
+
 class _TestCaseMixin(object):
     """
     A mixin for ``TestCase`` which collects helpful behaviors for subclasses.
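
`ConstantAddresses` gives tests a canned `IAddressFamily` provider: whichever of the two values it was constructed with is handed back verbatim, and asking for the one that was omitted raises. A hedged usage sketch (the endpoint object is a stand-in, not a real Tor or I2P provider; attrs strips the leading underscore for the init argument names):

# Hand out a canned client endpoint; no listener was configured.
from allmydata.test.common import ConstantAddresses

canned_endpoint = object()
family = ConstantAddresses(handler=canned_endpoint)
assert family.get_client_endpoint() is canned_endpoint
try:
    family.get_listener()
except Exception:
    pass  # expected: this instance has no listener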