Merge branch 'master' into 2916.grid-manager-proposal.5

Conflicts:
	src/allmydata/storage_client.py
	src/allmydata/test/common_util.py
Author: meejah
Date:   2020-12-14 13:44:42 -07:00
Commit: bf799c1cf8

48 changed files with 739 additions and 890 deletions


@@ -203,3 +203,7 @@ N: meejah
 E: meejah@meejah.ca
 P: 0xC2602803128069A7, 9D5A 2BD5 688E CB88 9DEB CD3F C260 2803 1280 69A7
 D: various bug-fixes and features
+
+N: Viktoriia Savchuk
+W: https://twitter.com/viktoriiasvchk
+D: Developer community focused improvements on the README file.


@@ -1,97 +1,119 @@
-==========
-Tahoe-LAFS
-==========
-
-Tahoe-LAFS is a Free and Open decentralized cloud storage system. It
-distributes your data across multiple servers. Even if some of the servers
-fail or are taken over by an attacker, the entire file store continues to
-function correctly, preserving your privacy and security.
-
-For full documentation, please see
-http://tahoe-lafs.readthedocs.io/en/latest/ .
+======================================
+Free and Open decentralized data store
+======================================
+
+|image0|
+
+`Tahoe-LAFS <https://www.tahoe-lafs.org>`__ (Tahoe Least-Authority File Store) is the first free software / open-source storage technology that distributes your data across multiple servers. Even if some servers fail or are taken over by an attacker, the entire file store continues to function correctly, preserving your privacy and security.
 
 |Contributor Covenant| |readthedocs| |travis| |circleci| |codecov|
 
-INSTALLING
-==========
-
-There are three ways to install Tahoe-LAFS.
-
-using OS packages
-^^^^^^^^^^^^^^^^^
-
-Pre-packaged versions are available for several operating systems:
-
-* Debian and Ubuntu users can ``apt-get install tahoe-lafs``
-* NixOS, NetBSD (pkgsrc), ArchLinux, Slackware, and Gentoo have packages
-  available, see `OSPackages`_ for details
-* `Mac`_ and Windows installers are in development.
-
-via pip
-^^^^^^^
-
-If you don't use an OS package, you'll need Python 2.7 and `pip`_. You may
-also need a C compiler, and the development headers for python, libffi, and
-OpenSSL. On a Debian-like system, use ``apt-get install build-essential
-python-dev libffi-dev libssl-dev python-virtualenv``. On Windows, see
-`<docs/windows.rst>`_.
-
-Then, to install the most recent release, just run:
-
-* ``pip install tahoe-lafs``
-
-from source
-^^^^^^^^^^^
-
-To install from source (either so you can hack on it, or just to run
-pre-release code), you should create a virtualenv and install into that:
-
-* ``git clone https://github.com/tahoe-lafs/tahoe-lafs.git``
-* ``cd tahoe-lafs``
-* ``virtualenv --python=python2.7 venv``
-* ``venv/bin/pip install --upgrade setuptools``
-* ``venv/bin/pip install --editable .``
-* ``venv/bin/tahoe --version``
-
-To run the unit test suite:
-
-* ``tox``
-
-You can pass arguments to ``trial`` with an environment variable. For
-example, you can run the test suite on multiple cores to speed it up:
-
-* ``TAHOE_LAFS_TRIAL_ARGS="-j4" tox``
-
-For more detailed instructions, read `<docs/INSTALL.rst>`_ .
-
-Once ``tahoe --version`` works, see `<docs/running.rst>`_ to learn how to set
-up your first Tahoe-LAFS node.
-
-LICENCE
-=======
-
-Copyright 2006-2018 The Tahoe-LAFS Software Foundation
-
-You may use this package under the GNU General Public License, version 2 or,
-at your option, any later version. You may use this package under the
-Transitive Grace Period Public Licence, version 1.0, or at your option, any
-later version. (You may choose to use this package under the terms of either
-licence, at your option.) See the file `COPYING.GPL`_ for the terms of the
-GNU General Public License, version 2. See the file `COPYING.TGPPL`_ for
-the terms of the Transitive Grace Period Public Licence, version 1.0.
-
-See `TGPPL.PDF`_ for why the TGPPL exists, graphically illustrated on three
-slides.
-
-.. _OSPackages: https://tahoe-lafs.org/trac/tahoe-lafs/wiki/OSPackages
-.. _Mac: docs/OS-X.rst
-.. _pip: https://pip.pypa.io/en/stable/installing/
-.. _COPYING.GPL: https://github.com/tahoe-lafs/tahoe-lafs/blob/master/COPYING.GPL
-.. _COPYING.TGPPL: https://github.com/tahoe-lafs/tahoe-lafs/blob/master/COPYING.TGPPL.rst
-.. _TGPPL.PDF: https://tahoe-lafs.org/~zooko/tgppl.pdf
+Table of contents
+
+- `About Tahoe-LAFS <#about-tahoe-lafs>`__
+- `Installation <#installation>`__
+- `Issues <#issues>`__
+- `Documentation <#documentation>`__
+- `Community <#community>`__
+- `Contributing <#contributing>`__
+- `FAQ <#faq>`__
+- `License <#license>`__
+
+💡 About Tahoe-LAFS
+-------------------
+
+Tahoe-LAFS helps you to store files while granting confidentiality, integrity, and availability of your data.
+
+How does it work? You run a client program on your computer, which talks to one or more storage servers on other computers. When you tell your client to store a file, it will encrypt that file, encode it into multiple pieces, then spread those pieces out among various servers. The pieces are all encrypted and protected against modifications. Later, when you ask your client to retrieve the file, it will find the necessary pieces, make sure they haven't been corrupted, reassemble them, and decrypt the result.
+
+| |image2|
+| *The image is taken from meejah's* \ `blog <https://blog.torproject.org/tor-heart-tahoe-lafs>`__ \ *post at Torproject.org.*
+|
+
+The client creates pieces ("shares") that have a configurable amount of redundancy, so even if some servers fail, you can still get your data back. Corrupt shares are detected and ignored so that the system can tolerate server-side hard-drive errors. All files are encrypted (with a unique key) before uploading, so even a malicious server operator cannot read your data. The only thing you ask of the servers is that they can (usually) provide the shares when you ask for them: you aren't relying upon them for confidentiality, integrity, or absolute availability.
+
+Tahoe-LAFS was first designed in 2007, following the "principle of least authority", a security best practice requiring system components to only have the privilege necessary to complete their intended function and not more.
+
+Please read more about Tahoe-LAFS architecture `here <docs/architecture.rst>`__.
+
+✅ Installation
+---------------
+
+For more detailed instructions, read `docs/INSTALL.rst <docs/INSTALL.rst>`__ .
+
+- `Building Tahoe-LAFS on Windows <docs/windows.rst>`__
+- `OS-X Packaging <docs/OS-X.rst>`__
+
+Once ``tahoe --version`` works, see `docs/running.rst <docs/running.rst>`__ to learn how to set up your first Tahoe-LAFS node.
+
+----
+
+🤖 Issues
+---------
+
+Tahoe-LAFS uses the Trac instance to track `issues <https://www.tahoe-lafs.org/trac/tahoe-lafs/wiki/ViewTickets>`__. Please email jean-paul plus tahoe-lafs at leastauthority dot com for an account.
+
+📑 Documentation
+----------------
+
+You can find the full Tahoe-LAFS documentation at our `documentation site <http://tahoe-lafs.readthedocs.io/en/latest/>`__.
+
+💬 Community
+------------
+
+Get involved with the Tahoe-LAFS community:
+
+- Chat with Tahoe-LAFS developers at #tahoe-lafs chat on irc.freenode.net or `Slack <https://join.slack.com/t/tahoe-lafs/shared_invite/zt-jqfj12r5-ZZ5z3RvHnubKVADpP~JINQ>`__.
+- Join our `weekly conference calls <https://www.tahoe-lafs.org/trac/tahoe-lafs/wiki/WeeklyMeeting>`__ with core developers and interested community members.
+- Subscribe to `the tahoe-dev mailing list <https://www.tahoe-lafs.org/cgi-bin/mailman/listinfo/tahoe-dev>`__, the community forum for discussion of Tahoe-LAFS design, implementation, and usage.
+
+🤗 Contributing
+---------------
+
+As a community-driven open source project, Tahoe-LAFS welcomes contributions of any form:
+
+- `Code patches <https://tahoe-lafs.org/trac/tahoe-lafs/wiki/Patches>`__
+- `Documentation improvements <https://tahoe-lafs.org/trac/tahoe-lafs/wiki/Doc>`__
+- `Bug reports <https://tahoe-lafs.org/trac/tahoe-lafs/wiki/HowToReportABug>`__
+- `Patch reviews <https://tahoe-lafs.org/trac/tahoe-lafs/wiki/PatchReviewProcess>`__
+
+Before authoring or reviewing a patch, please familiarize yourself with the `Coding Standard <https://tahoe-lafs.org/trac/tahoe-lafs/wiki/CodingStandards>`__ and the `Contributor Code of Conduct <docs/CODE_OF_CONDUCT.md>`__.
+
+❓ FAQ
+------
+
+Need more information? Please check our `FAQ page <https://www.tahoe-lafs.org/trac/tahoe-lafs/wiki/FAQ>`__.
+
+📄 License
+----------
+
+Copyright 2006-2020 The Tahoe-LAFS Software Foundation
+
+You may use this package under the GNU General Public License, version 2 or, at your option, any later version. You may use this package under the Transitive Grace Period Public Licence, version 1.0, or at your choice, any later version. (You may choose to use this package under the terms of either license, at your option.) See the file `COPYING.GPL <COPYING.GPL>`__ for the terms of the GNU General Public License, version 2. See the file `COPYING.TGPPL <COPYING.TGPPL.rst>`__ for the terms of the Transitive Grace Period Public Licence, version 1.0.
+
+See `TGPPL.PDF <https://tahoe-lafs.org/~zooko/tgppl.pdf>`__ for why the TGPPL exists, graphically illustrated on three slides.
+
+.. |image0| image:: docs/_static/media/image2.png
+   :width: 3in
+   :height: 0.91667in
+
+.. |image2| image:: docs/_static/media/image1.png
+   :width: 6.9252in
+   :height: 2.73611in
 
 .. |readthedocs| image:: http://readthedocs.org/projects/tahoe-lafs/badge/?version=latest
     :alt: documentation status
     :target: http://tahoe-lafs.readthedocs.io/en/latest/?badge=latest

BIN docs/_static/media/image1.png (new binary file, 111 KiB; binary content not shown)

BIN docs/_static/media/image2.png (new binary file, 4.3 KiB; binary content not shown)


@@ -75,7 +75,7 @@ The item descriptions below use the following types:
 Node Types
 ==========
 
-A node can be a client/server, an introducer, or a statistics gatherer.
+A node can be a client/server or an introducer.
 
 Client/server nodes provide one or more of the following services:

@@ -593,11 +593,6 @@ Client Configuration
     If provided, the node will attempt to connect to and use the given helper
     for uploads. See :doc:`helper` for details.
 
-``stats_gatherer.furl = (FURL string, optional)``
-    If provided, the node will connect to the given stats gatherer and
-    provide it with operational statistics.
-
 ``shares.needed = (int, optional) aka "k", default 3``
 ``shares.total = (int, optional) aka "N", N >= k, default 10``
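
As an aside, the ``shares.needed``/``shares.total`` pair left intact above
are the "k" and "N" erasure-coding parameters: any k of the N shares suffice
to rebuild a file. A quick illustrative sketch of the arithmetic, with
example numbers rather than project code::

    k, N = 3, 10                      # the documented defaults
    expansion = N / k                 # ~3.33x storage overhead
    tolerated_share_loss = N - k      # 7 shares may vanish before data loss
    print(expansion, tolerated_share_loss)
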
@@ -911,11 +906,6 @@ This section describes these other files.
     This file is used to construct an introducer, and is created by the
     "``tahoe create-introducer``" command.
 
-``tahoe-stats-gatherer.tac``
-    This file is used to construct a statistics gatherer, and is created by the
-    "``tahoe create-stats-gatherer``" command.
-
 ``private/control.furl``
     This file contains a FURL that provides access to a control port on the


@@ -23,7 +23,7 @@ Config setting File Comment
 ``BASEDIR/introducer.furl``        ``BASEDIR/private/introducers.yaml``
 ``[client]helper.furl``            ``BASEDIR/helper.furl``
 ``[client]key_generator.furl``     ``BASEDIR/key_generator.furl``
-``[client]stats_gatherer.furl``    ``BASEDIR/stats_gatherer.furl``
+``[client]stats_gatherer.furl``    ``BASEDIR/stats_gatherer.furl``  Stats gatherer has been removed.
 ``[storage]enabled``               ``BASEDIR/no_storage`` (``False`` if ``no_storage`` exists)
 ``[storage]readonly``              ``BASEDIR/readonly_storage`` (``True`` if ``readonly_storage`` exists)
 ``[storage]sizelimit``             ``BASEDIR/sizelimit``

@@ -47,3 +47,10 @@ the now (since Tahoe-LAFS v1.3.0) unsupported
 addresses specified in ``advertised_ip_addresses`` were used in
 addition to any that were automatically discovered), whereas the new
 ``tahoe.cfg`` directive is not (``tub.location`` is used verbatim).
+
+The stats gatherer has been broken at least since Tahoe-LAFS v1.13.0.
+The (broken) functionality of ``[client]stats_gatherer.furl`` (which
+was previously in ``BASEDIR/stats_gatherer.furl``), is scheduled to be
+completely removed after Tahoe-LAFS v1.15.0. After that point, if
+your configuration contains a ``[client]stats_gatherer.furl``, your
+node will refuse to start.
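
The "refuse to start" behavior described above is enforced by validating
``tahoe.cfg`` against a list of known items; the ``_client_config`` hunk in
``client.py`` further down this diff removes ``stats_gatherer.furl`` from
that list. A minimal standalone sketch of the pattern (illustrative only,
not the project's actual validation code)::

    VALID_CLIENT_ITEMS = {"helper.furl", "shares.happy",
                          "shares.needed", "shares.total", "storage.plugins"}

    def validate_client_section(items):
        for key in items:
            if key not in VALID_CLIENT_ITEMS:
                raise ValueError("unknown [client] config item: %s" % key)

    validate_client_section({"shares.needed": "3"})   # accepted
    # validate_client_section({"stats_gatherer.furl": "pb://..."})
    # -> ValueError: unknown [client] config item: stats_gatherer.furl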


@@ -45,9 +45,6 @@ Create a client node (with storage initially disabled).
 .TP
 .B \f[B]create-introducer\f[]
 Create an introducer node.
-.TP
-.B \f[B]create-stats-gatherer\f[]
-Create a stats-gatherer service.
 .SS OPTIONS
 .TP
 .B \f[B]-C,\ --basedir=\f[]


@@ -6,8 +6,7 @@ Tahoe Statistics
 
 1.  `Overview`_
 2.  `Statistics Categories`_
-3.  `Running a Tahoe Stats-Gatherer Service`_
-4.  `Using Munin To Graph Stats Values`_
+3.  `Using Munin To Graph Stats Values`_
 
 Overview
 ========

@@ -243,92 +242,6 @@ The currently available stats (as of release 1.6.0 or so) are described here:
     the process was started. Ticket #472 indicates that .total may
     sometimes be negative due to wraparound of the kernel's counter.
 
-**stats.load_monitor.\***
-
-    When enabled, the "load monitor" continually schedules a one-second
-    callback, and measures how late the response is. This estimates system load
-    (if the system is idle, the response should be on time). This is only
-    enabled if a stats-gatherer is configured.
-
-    avg_load
-        average "load" value (seconds late) over the last minute
-
-    max_load
-        maximum "load" value over the last minute
-
-Running a Tahoe Stats-Gatherer Service
-======================================
-
-The "stats-gatherer" is a simple daemon that periodically collects stats from
-several tahoe nodes. It could be useful, e.g., in a production environment,
-where you want to monitor dozens of storage servers from a central management
-host. It merely gatherers statistics from many nodes into a single place: it
-does not do any actual analysis.
-
-The stats gatherer listens on a network port using the same Foolscap_
-connection library that Tahoe clients use to connect to storage servers.
-Tahoe nodes can be configured to connect to the stats gatherer and publish
-their stats on a periodic basis. (In fact, what happens is that nodes connect
-to the gatherer and offer it a second FURL which points back to the node's
-"stats port", which the gatherer then uses to pull stats on a periodic basis.
-The initial connection is flipped to allow the nodes to live behind NAT
-boxes, as long as the stats-gatherer has a reachable IP address.)
-
-.. _Foolscap: https://foolscap.lothar.com/trac
-
-The stats-gatherer is created in the same fashion as regular tahoe client
-nodes and introducer nodes. Choose a base directory for the gatherer to live
-in (but do not create the directory). Choose the hostname that should be
-advertised in the gatherer's FURL. Then run:
-
-::
-
-    tahoe create-stats-gatherer --hostname=HOSTNAME $BASEDIR
-
-and start it with "tahoe start $BASEDIR". Once running, the gatherer will
-write a FURL into $BASEDIR/stats_gatherer.furl .
-
-To configure a Tahoe client/server node to contact the stats gatherer, copy
-this FURL into the node's tahoe.cfg file, in a section named "[client]",
-under a key named "stats_gatherer.furl", like so:
-
-::
-
-    [client]
-    stats_gatherer.furl = pb://qbo4ktl667zmtiuou6lwbjryli2brv6t@HOSTNAME:PORTNUM/wxycb4kaexzskubjnauxeoptympyf45y
-
-or simply copy the stats_gatherer.furl file into the node's base directory
-(next to the tahoe.cfg file): it will be interpreted in the same way.
-
-When the gatherer is created, it will allocate a random unused TCP port, so
-it should not conflict with anything else that you have running on that host
-at that time. To explicitly control which port it uses, run the creation
-command with ``--location=`` and ``--port=`` instead of ``--hostname=``. If
-you use a hostname of ``example.org`` and a port number of ``1234``, then
-run::
-
-    tahoe create-stats-gatherer --location=tcp:example.org:1234 --port=tcp:1234
-
-``--location=`` is a Foolscap FURL hints string (so it can be a
-comma-separated list of connection hints), and ``--port=`` is a Twisted
-"server endpoint specification string", as described in :doc:`configuration`.
-
-Once running, the stats gatherer will create a standard JSON file in
-``$BASEDIR/stats.json``. Once a minute, the gatherer will pull stats
-information from every connected node and write them into the file. The file
-will contain a dictionary, in which node identifiers (known as "tubid"
-strings) are the keys, and the values are a dict with 'timestamp',
-'nickname', and 'stats' keys. d[tubid][stats] will contain the stats
-dictionary as made available at http://localhost:3456/statistics?t=json . The
-file will only contain the most recent update from each node.
-
-Other tools can be built to examine these stats and render them into
-something useful. For example, a tool could sum the
-"storage_server.disk_avail' values from all servers to compute a
-total-disk-available number for the entire grid (however, the "disk watcher"
-daemon, in misc/operations_helpers/spacetime/, is better suited for this
-specific task).
-
 Using Munin To Graph Stats Values
 =================================
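
For readers tracking what is being deleted here: the ``stats.json`` layout
the removed text describes looked roughly like the following sketch (the
tubid, nickname, and timestamp values are invented for illustration)::

    stats_json = {
        "u5vgfpug7qhkxdtj76tcfh6bmzyo6w5s": {
            "timestamp": 1607980082.0,
            "nickname": "storage1",
            "stats": {"counters": {}, "stats": {}},
        },
    }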

newsfragments/3521.minor (new empty file)

newsfragments/3522.minor (new empty file)

newsfragments/3544.minor (new empty file)

newsfragments/3545.other (new file)

@@ -0,0 +1 @@
+The README, revised by Viktoriia with feedback from the team, is now more focused on the developer community and provides more information about Tahoe-LAFS, why it's important, and how someone can use it or start contributing to it.

newsfragments/3546.minor (new empty file)

(new file; filename not shown in this view)

@@ -0,0 +1 @@
+The stats gatherer, broken since at least Tahoe-LAFS 1.13.0, has been removed. The ``[client]stats_gatherer.furl`` configuration item in ``tahoe.cfg`` is no longer allowed. The Tahoe-LAFS project recommends using a third-party metrics aggregation tool instead.

newsfragments/3551.minor (new empty file)

newsfragments/3555.minor (new empty file)


@@ -7,7 +7,6 @@ import weakref
 from base64 import urlsafe_b64encode
 from functools import partial
 
 # On Python 2 this will be the backported package:
 from configparser import NoSectionError

@@ -89,7 +88,6 @@ _client_config = configutil.ValidConfiguration(
         "shares.happy",
         "shares.needed",
         "shares.total",
-        "stats_gatherer.furl",
         "storage.plugins",
     ),
     "grid_managers": None,  # means "any options valid"

@@ -693,11 +691,7 @@ class _Client(node.Node, pollmixin.PollMixin):
             self.init_web(webport) # strports string
 
     def init_stats_provider(self):
-        gatherer_furl = self.config.get_config("client", "stats_gatherer.furl", None)
-        if gatherer_furl:
-            # FURLs should be bytes:
-            gatherer_furl = gatherer_furl.encode("utf-8")
-        self.stats_provider = StatsProvider(self, gatherer_furl)
+        self.stats_provider = StatsProvider(self)
         self.stats_provider.setServiceParent(self)
         self.stats_provider.register_producer(self)


@@ -1,3 +1,15 @@
+"""
+Ported to Python 3.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+from __future__ import unicode_literals
+
+from future.utils import PY2
+if PY2:
+    from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401
+
 from zope.interface import implementer
 from twisted.internet import defer
 from foolscap.api import DeadReferenceError, RemoteException
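
This header is the project's standard Python 2/3 porting boilerplate and
recurs in most of the files touched below. What it buys can be shown with a
tiny standalone sketch (illustrative, not project code)::

    from __future__ import division, unicode_literals

    # On Python 2 these imports flip the semantics to match Python 3:
    assert 1 / 2 == 0.5            # true division everywhere
    assert type("text") != bytes   # string literals are text, not bytes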


@@ -1,3 +1,15 @@
+"""
+Ported to Python 3.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+from __future__ import unicode_literals
+
+from future.utils import PY2
+if PY2:
+    from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401
+
 from zope.interface import implementer
 from twisted.internet import defer
 from allmydata.storage.server import si_b2a


@@ -521,7 +521,6 @@ class IStorageBroker(Interface):
           oldest_supported: the peer's oldest supported version, same
 
           rref: the RemoteReference, if connected, otherwise None
-          remote_host: the IAddress, if connected, otherwise None
 
         This method is intended for monitoring interfaces, such as a web page
         that describes connecting and connected peers.

@@ -2936,38 +2935,6 @@ class RIHelper(RemoteInterface):
         return (UploadResults, ChoiceOf(RICHKUploadHelper, None))
 
 
-class RIStatsProvider(RemoteInterface):
-    __remote_name__ = native_str("RIStatsProvider.tahoe.allmydata.com")
-    """
-    Provides access to statistics and monitoring information.
-    """
-
-    def get_stats():
-        """
-        returns a dictionary containing 'counters' and 'stats', each a
-        dictionary with string counter/stat name keys, and numeric or None values.
-        counters are monotonically increasing measures of work done, and
-        stats are instantaneous measures (potentially time averaged
-        internally)
-        """
-        return DictOf(bytes, DictOf(bytes, ChoiceOf(float, int, long, None)))
-
-
-class RIStatsGatherer(RemoteInterface):
-    __remote_name__ = native_str("RIStatsGatherer.tahoe.allmydata.com")
-    """
-    Provides a monitoring service for centralised collection of stats
-    """
-
-    def provide(provider=RIStatsProvider, nickname=bytes):
-        """
-        @param provider: a stats collector instance that should be polled
-        periodically by the gatherer to collect stats.
-        @param nickname: a name useful to identify the provided client
-        """
-        return None
-
-
 class IStatsProducer(Interface):
     def get_stats():
         """


@@ -1,4 +1,16 @@
-from past.builtins import unicode, long
+"""
+Ported to Python 3.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+from __future__ import unicode_literals
+
+from future.utils import PY2
+if PY2:
+    from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401
+from past.builtins import long
 
 from six import ensure_text
 
 import time

@@ -27,11 +39,11 @@ class IntroducerClient(service.Service, Referenceable):
                  nickname, my_version, oldest_supported,
                  sequencer, cache_filepath):
         self._tub = tub
-        if isinstance(introducer_furl, unicode):
+        if isinstance(introducer_furl, str):
             introducer_furl = introducer_furl.encode("utf-8")
         self.introducer_furl = introducer_furl
 
-        assert type(nickname) is unicode
+        assert isinstance(nickname, str)
         self._nickname = nickname
         self._my_version = my_version
         self._oldest_supported = oldest_supported

@@ -114,7 +126,7 @@ class IntroducerClient(service.Service, Referenceable):
 
     def _save_announcements(self):
         announcements = []
-        for _, value in self._inbound_announcements.items():
+        for value in self._inbound_announcements.values():
             ann, key_s, time_stamp = value
             # On Python 2, bytes strings are encoded into YAML Unicode strings.
             # On Python 3, bytes are encoded as YAML bytes. To minimize

@@ -125,7 +137,7 @@ class IntroducerClient(service.Service, Referenceable):
             }
             announcements.append(server_params)
         announcement_cache_yaml = yamlutil.safe_dump(announcements)
-        if isinstance(announcement_cache_yaml, unicode):
+        if isinstance(announcement_cache_yaml, str):
             announcement_cache_yaml = announcement_cache_yaml.encode("utf-8")
         self._cache_filepath.setContent(announcement_cache_yaml)
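
The ``isinstance(..., str)`` check above exists because ``safe_dump`` returns
text while the cache file wants bytes. The same normalization as a standalone
sketch (assuming only the stock PyYAML API)::

    import yaml

    doc = yaml.safe_dump([{"ann": {"nickname": "storage1"}}])
    if isinstance(doc, str):
        doc = doc.encode("utf-8")
    assert isinstance(doc, bytes)
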
@@ -170,7 +182,7 @@ class IntroducerClient(service.Service, Referenceable):
         self._local_subscribers.append( (service_name,cb,args,kwargs) )
         self._subscribed_service_names.add(service_name)
         self._maybe_subscribe()
-        for index,(ann,key_s,when) in self._inbound_announcements.items():
+        for index,(ann,key_s,when) in list(self._inbound_announcements.items()):
             precondition(isinstance(key_s, bytes), key_s)
             servicename = index[0]
             if servicename == service_name:

@@ -215,7 +227,7 @@ class IntroducerClient(service.Service, Referenceable):
             self._outbound_announcements[service_name] = ann_d
 
         # publish all announcements with the new seqnum and nonce
-        for service_name,ann_d in self._outbound_announcements.items():
+        for service_name,ann_d in list(self._outbound_announcements.items()):
             ann_d["seqnum"] = current_seqnum
             ann_d["nonce"] = current_nonce
             ann_t = sign_to_foolscap(ann_d, signing_key)

@@ -227,7 +239,7 @@ class IntroducerClient(service.Service, Referenceable):
             self.log("want to publish, but no introducer yet", level=log.NOISY)
             return
         # this re-publishes everything. The Introducer ignores duplicates
-        for ann_t in self._published_announcements.values():
+        for ann_t in list(self._published_announcements.values()):
             self._debug_counts["outbound_message"] += 1
             self._debug_outstanding += 1
             d = self._publisher.callRemote("publish_v2", ann_t, self._canary)

@@ -267,7 +279,7 @@ class IntroducerClient(service.Service, Referenceable):
             return
         # for ASCII values, simplejson might give us unicode *or* bytes
         if "nickname" in ann and isinstance(ann["nickname"], bytes):
-            ann["nickname"] = unicode(ann["nickname"])
+            ann["nickname"] = str(ann["nickname"])
         nick_s = ann.get("nickname",u"").encode("utf-8")
         lp2 = self.log(format="announcement for nickname '%(nick)s', service=%(svc)s: %(ann)s",
                        nick=nick_s, svc=service_name, ann=ann, umid="BoKEag")
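
A note on the recurring ``list(...)`` wrappers in these hunks: on Python 3,
dictionary views are live, so mutating the dict while iterating over it
raises ``RuntimeError``; snapshotting with ``list()`` preserves the old
Python 2 behavior. A minimal demonstration::

    d = {"a": 1, "b": 2}
    for k, v in list(d.items()):   # snapshot, so mutating d below is safe
        if v == 1:
            del d[k]
    assert d == {"b": 2}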


@@ -1,4 +1,14 @@
-from past.builtins import unicode
+"""
+Ported to Python 3.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+from __future__ import unicode_literals
+
+from future.utils import PY2
+if PY2:
+    from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401
 
 import re
 
 from allmydata.crypto.util import remove_prefix

@@ -8,14 +18,12 @@ from allmydata.util import base32, rrefutil, jsonbytes as json
 
 def get_tubid_string_from_ann(ann):
     furl = ann.get("anonymous-storage-FURL") or ann.get("FURL")
-    if isinstance(furl, unicode):
-        furl = furl.encode("utf-8")
     return get_tubid_string(furl)
 
 
 def get_tubid_string(furl):
-    m = re.match(br'pb://(\w+)@', furl)
+    m = re.match(r'pb://(\w+)@', furl)
     assert m
-    return m.group(1).lower()
+    return m.group(1).lower().encode("ascii")
 
 
 def sign_to_foolscap(announcement, signing_key):
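
With this change the FURL is handled as text throughout and only the
returned tubid is bytes. A self-contained sketch of the new behavior (the
FURL here is made up)::

    import re

    def get_tubid_string(furl):
        m = re.match(r'pb://(\w+)@', furl)
        assert m
        return m.group(1).lower().encode("ascii")

    assert get_tubid_string("pb://ABC123@tcp:example.org:1234/swiss") == b"abc123"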


@@ -1,5 +1,18 @@
+"""
+Ported to Python 3.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+from __future__ import unicode_literals
+
+from future.utils import PY2
+if PY2:
+    from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401
 from past.builtins import long
-from six import ensure_str, ensure_text
+from six import ensure_text
 
 import time, os.path, textwrap
 from zope.interface import implementer

@@ -157,7 +170,7 @@ class IntroducerService(service.MultiService, Referenceable):
         # 'subscriber_info' is a dict, provided directly by v2 clients. The
         # expected keys are: version, nickname, app-versions, my-version,
         # oldest-supported
-        self._subscribers = {}
+        self._subscribers = dictutil.UnicodeKeyDict({})
 
         self._debug_counts = {"inbound_message": 0,
                               "inbound_duplicate": 0,

@@ -181,7 +194,7 @@ class IntroducerService(service.MultiService, Referenceable):
     def get_announcements(self):
         """Return a list of AnnouncementDescriptor for all announcements"""
         announcements = []
-        for (index, (_, canary, ann, when)) in self._announcements.items():
+        for (index, (_, canary, ann, when)) in list(self._announcements.items()):
             ad = AnnouncementDescriptor(when, index, canary, ann)
             announcements.append(ad)
         return announcements

@@ -189,8 +202,8 @@ class IntroducerService(service.MultiService, Referenceable):
     def get_subscribers(self):
         """Return a list of SubscriberDescriptor objects for all subscribers"""
         s = []
-        for service_name, subscriptions in self._subscribers.items():
-            for rref,(subscriber_info,when) in subscriptions.items():
+        for service_name, subscriptions in list(self._subscribers.items()):
+            for rref,(subscriber_info,when) in list(subscriptions.items()):
                 # note that if the subscriber didn't do Tub.setLocation,
                 # tubid will be None. Also, subscribers do not tell us which
                 # pubkey they use; only publishers do that.

@@ -281,7 +294,7 @@ class IntroducerService(service.MultiService, Referenceable):
     def remote_subscribe_v2(self, subscriber, service_name, subscriber_info):
         self.log("introducer: subscription[%s] request at %s"
                  % (service_name, subscriber), umid="U3uzLg")
-        service_name = ensure_str(service_name)
+        service_name = ensure_text(service_name)
         subscriber_info = dictutil.UnicodeKeyDict({
             ensure_text(k): v for (k, v) in subscriber_info.items()
         })

@@ -307,11 +320,11 @@ class IntroducerService(service.MultiService, Referenceable):
                 subscribers.pop(subscriber, None)
             subscriber.notifyOnDisconnect(_remove)
 
-        # Make sure types are correct:
-        for k in self._announcements:
-            assert isinstance(k[0], type(service_name))
-
         # now tell them about any announcements they're interested in
+        assert {type(service_name)}.issuperset(
+            set(type(k[0]) for k in self._announcements)), (
+                service_name, self._announcements.keys()
+            )
         announcements = set( [ ann_t
                                for idx,(ann_t,canary,ann,when)
                                in self._announcements.items()
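
The rewritten assertion above checks every announcement key's service-name
component in one pass instead of looping. A standalone sketch of the
invariant it enforces, with made-up data::

    announcements = {(u"storage", b"key1"): None, (u"helper", b"key2"): None}
    service_name = u"storage"
    assert {type(service_name)}.issuperset(
        set(type(k[0]) for k in announcements))   # every key is text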


@@ -318,7 +318,6 @@ def write_client_config(c, config):
     c.write("[client]\n")
     c.write("helper.furl =\n")
-    c.write("#stats_gatherer.furl =\n")
     c.write("\n")
     c.write("# Encoding parameters this client will use for newly-uploaded files\n")
     c.write("# This can be changed at any time: the encoding is saved in\n")


@@ -10,7 +10,6 @@ from twisted.application.service import Service
 
 from allmydata.scripts.default_nodedir import _default_nodedir
 from allmydata.util import fileutil
-from allmydata.node import read_config
 from allmydata.util.encodingutil import listdir_unicode, quote_local_unicode_path
 from allmydata.util.configutil import UnknownConfigError
 from allmydata.util.deferredutil import HookMixin

@@ -47,8 +46,8 @@ def get_pid_from_pidfile(pidfile):
 
 def identify_node_type(basedir):
     """
-    :return unicode: None or one of: 'client', 'introducer',
-        'key-generator' or 'stats-gatherer'
+    :return unicode: None or one of: 'client', 'introducer', or
+        'key-generator'
     """
     tac = u''
     try:

@@ -59,7 +58,7 @@ def identify_node_type(basedir):
     except OSError:
         return None
 
-    for t in (u"client", u"introducer", u"key-generator", u"stats-gatherer"):
+    for t in (u"client", u"introducer", u"key-generator"):
         if t in tac:
             return t
     return None

@@ -135,7 +134,6 @@ class DaemonizeTheRealService(Service, HookMixin):
             node_to_instance = {
                 u"client": lambda: maybeDeferred(namedAny("allmydata.client.create_client"), self.basedir),
                 u"introducer": lambda: maybeDeferred(namedAny("allmydata.introducer.server.create_introducer"), self.basedir),
-                u"stats-gatherer": lambda: maybeDeferred(namedAny("allmydata.stats.StatsGathererService"), read_config(self.basedir, None), self.basedir, verbose=True),
                 u"key-generator": key_generator_removed,
             }


@@ -9,7 +9,7 @@ from twisted.internet import defer, task, threads
 
 from allmydata.scripts.common import get_default_nodedir
 from allmydata.scripts import debug, create_node, cli, \
-    stats_gatherer, admin, tahoe_daemonize, tahoe_start, \
+    admin, tahoe_daemonize, tahoe_start, \
     tahoe_stop, tahoe_restart, tahoe_run, tahoe_invite
 from allmydata.util.encodingutil import quote_output, quote_local_unicode_path, get_io_encoding
 from allmydata.util.eliotutil import (

@@ -60,7 +60,6 @@ class Options(usage.Options):
     stderr = sys.stderr
 
     subCommands = ( create_node.subCommands
-                    + stats_gatherer.subCommands
                     + admin.subCommands
                     + process_control_commands
                     + debug.subCommands

@@ -107,7 +106,7 @@ class Options(usage.Options):
 
 create_dispatch = {}
-for module in (create_node, stats_gatherer):
+for module in (create_node,):
     create_dispatch.update(module.dispatch)
 
 def parse_options(argv, config=None):


@@ -1,103 +0,0 @@
-from __future__ import print_function
-
-import os
-
-# Python 2 compatibility
-from future.utils import PY2
-if PY2:
-    from future.builtins import str  # noqa: F401
-
-from twisted.python import usage
-
-from allmydata.scripts.common import NoDefaultBasedirOptions
-from allmydata.scripts.create_node import write_tac
-from allmydata.util.assertutil import precondition
-from allmydata.util.encodingutil import listdir_unicode, quote_output
-from allmydata.util import fileutil, iputil
-
-
-class CreateStatsGathererOptions(NoDefaultBasedirOptions):
-    subcommand_name = "create-stats-gatherer"
-    optParameters = [
-        ("hostname", None, None, "Hostname of this machine, used to build location"),
-        ("location", None, None, "FURL connection hints, e.g. 'tcp:HOSTNAME:PORT'"),
-        ("port", None, None, "listening endpoint, e.g. 'tcp:PORT'"),
-    ]
-
-    def postOptions(self):
-        if self["hostname"] and (not self["location"]) and (not self["port"]):
-            pass
-        elif (not self["hostname"]) and self["location"] and self["port"]:
-            pass
-        else:
-            raise usage.UsageError("You must provide --hostname, or --location and --port.")
-
-    description = """
-    Create a "stats-gatherer" service, which is a standalone process that
-    collects and stores runtime statistics from many server nodes. This is a
-    tool for operations personnel to keep track of free disk space, server
-    load, and protocol activity, across a fleet of Tahoe storage servers.
-
-    The "stats-gatherer" listens on a TCP port and publishes a Foolscap FURL
-    by writing it into a file named "stats_gatherer.furl". You must copy this
-    FURL into the servers' tahoe.cfg, as the [client] stats_gatherer.furl=
-    entry. Those servers will then establish a connection to the
-    stats-gatherer and publish their statistics on a periodic basis. The
-    gatherer writes a summary JSON file out to disk after each update.
-
-    The stats-gatherer listens on a configurable port, and writes a
-    configurable hostname+port pair into the FURL that it publishes. There
-    are two configuration modes you can use.
-
-    * In the first, you provide --hostname=, and the service chooses its own
-      TCP port number. If the host is named "example.org" and you provide
-      --hostname=example.org, the node will pick a port number (e.g. 12345)
-      and use location="tcp:example.org:12345" and port="tcp:12345".
-
-    * In the second, you provide both --location= and --port=, and the
-      service will refrain from doing any allocation of its own. --location=
-      must be a Foolscap "FURL connection hint sequence", which is a
-      comma-separated list of "tcp:HOSTNAME:PORTNUM" strings. --port= must be
-      a Twisted server endpoint specification, which is generally
-      "tcp:PORTNUM". So, if your host is named "example.org" and you want to
-      use port 6789, you should provide --location=tcp:example.org:6789 and
-      --port=tcp:6789. You are responsible for making sure --location= and
-      --port= match each other.
-    """
-
-
-def create_stats_gatherer(config):
-    err = config.stderr
-    basedir = config['basedir']
-    # This should always be called with an absolute Unicode basedir.
-    precondition(isinstance(basedir, str), basedir)
-
-    if os.path.exists(basedir):
-        if listdir_unicode(basedir):
-            print("The base directory %s is not empty." % quote_output(basedir), file=err)
-            print("To avoid clobbering anything, I am going to quit now.", file=err)
-            print("Please use a different directory, or empty this one.", file=err)
-            return -1
-        # we're willing to use an empty directory
-    else:
-        os.mkdir(basedir)
-    write_tac(basedir, "stats-gatherer")
-    if config["hostname"]:
-        portnum = iputil.allocate_tcp_port()
-        location = "tcp:%s:%d" % (config["hostname"], portnum)
-        port = "tcp:%d" % portnum
-    else:
-        location = config["location"]
-        port = config["port"]
-    fileutil.write(os.path.join(basedir, "location"), location+"\n")
-    fileutil.write(os.path.join(basedir, "port"), port+"\n")
-    return 0
-
-
-subCommands = [
-    ["create-stats-gatherer", None, CreateStatsGathererOptions, "Create a stats-gatherer service."],
-]
-
-dispatch = {
-    "create-stats-gatherer": create_stats_gatherer,
-}


@@ -1,4 +1,5 @@
 from __future__ import print_function
+from __future__ import unicode_literals
 
 import os.path
 import codecs

@@ -10,7 +11,7 @@ from allmydata import uri
 from allmydata.scripts.common_http import do_http, check_http_error
 from allmydata.scripts.common import get_aliases
 from allmydata.util.fileutil import move_into_place
-from allmydata.util.encodingutil import unicode_to_output, quote_output
+from allmydata.util.encodingutil import quote_output, quote_output_u
 
 
 def add_line_to_aliasfile(aliasfile, alias, cap):

@@ -48,14 +49,13 @@ def add_alias(options):
 
     old_aliases = get_aliases(nodedir)
     if alias in old_aliases:
-        print("Alias %s already exists!" % quote_output(alias), file=stderr)
+        show_output(stderr, "Alias {alias} already exists!", alias=alias)
         return 1
 
     aliasfile = os.path.join(nodedir, "private", "aliases")
     cap = uri.from_string_dirnode(cap).to_string()
 
     add_line_to_aliasfile(aliasfile, alias, cap)
-
-    print("Alias %s added" % quote_output(alias), file=stdout)
+    show_output(stdout, "Alias {alias} added", alias=alias)
     return 0
 
 def create_alias(options):

@@ -75,7 +75,7 @@ def create_alias(options):
 
     old_aliases = get_aliases(nodedir)
     if alias in old_aliases:
-        print("Alias %s already exists!" % quote_output(alias), file=stderr)
+        show_output(stderr, "Alias {alias} already exists!", alias=alias)
        return 1
 
     aliasfile = os.path.join(nodedir, "private", "aliases")

@@ -93,11 +93,51 @@ def create_alias(options):
         # probably check for others..
 
     add_line_to_aliasfile(aliasfile, alias, new_uri)
-
-    print("Alias %s created" % (quote_output(alias),), file=stdout)
+    show_output(stdout, "Alias {alias} created", alias=alias)
     return 0
 
+
+def show_output(fp, template, **kwargs):
+    """
+    Print to just about anything.
+
+    :param fp: A file-like object to which to print.  This handles the case
+        where ``fp`` declares a support encoding with the ``encoding``
+        attribute (eg sys.stdout on Python 3).  It handles the case where
+        ``fp`` declares no supported encoding via ``None`` for its
+        ``encoding`` attribute (eg sys.stdout on Python 2 when stdout is not a
+        tty).  It handles the case where ``fp`` declares an encoding that does
+        not support all of the characters in the output by forcing the
+        "namereplace" error handler.  It handles the case where there is no
+        ``encoding`` attribute at all (eg StringIO.StringIO) by writing
+        utf-8-encoded bytes.
+    """
+    assert isinstance(template, unicode)
+
+    # On Python 3 fp has an encoding attribute under all real usage.  On
+    # Python 2, the encoding attribute is None if stdio is not a tty.  The
+    # test suite often passes StringIO which has no such attribute.  Make
+    # allowances for this until the test suite is fixed and Python 2 is no
+    # more.
+    try:
+        encoding = fp.encoding or "utf-8"
+    except AttributeError:
+        has_encoding = False
+        encoding = "utf-8"
+    else:
+        has_encoding = True
+
+    output = template.format(**{
+        k: quote_output_u(v, encoding=encoding)
+        for (k, v)
+        in kwargs.items()
+    })
+    safe_output = output.encode(encoding, "namereplace")
+    if has_encoding:
+        safe_output = safe_output.decode(encoding)
+    print(safe_output, file=fp)
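
The key trick in ``show_output`` is the "namereplace" error handler:
characters the stream's encoding cannot represent are spelled out by name
instead of raising. A standalone demonstration (Python 3)::

    text = u"snowman: \N{SNOWMAN}"
    print(text.encode("ascii", "namereplace").decode("ascii"))
    # prints: snowman: \N{SNOWMAN}
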
 def _get_alias_details(nodedir):
     aliases = get_aliases(nodedir)
     alias_names = sorted(aliases.keys())

@@ -111,34 +151,45 @@ def _get_alias_details(nodedir):
     return data
 
 
+def _escape_format(t):
+    """
+    _escape_format(t).format() == t
+
+    :param unicode t: The text to escape.
+    """
+    return t.replace("{", "{{").replace("}", "}}")
+
+
 def list_aliases(options):
-    nodedir = options['node-directory']
-    stdout = options.stdout
-    stderr = options.stderr
-
-    data = _get_alias_details(nodedir)
-
-    max_width = max([len(quote_output(name)) for name in data.keys()] + [0])
-    fmt = "%" + str(max_width) + "s: %s"
-    rc = 0
-
+    """
+    Show aliases that exist.
+    """
+    data = _get_alias_details(options['node-directory'])
+
     if options['json']:
-        try:
-            # XXX why are we presuming utf-8 output?
-            print(json.dumps(data, indent=4).decode('utf-8'), file=stdout)
-        except (UnicodeEncodeError, UnicodeDecodeError):
-            print(json.dumps(data, indent=4), file=stderr)
-            rc = 1
+        output = _escape_format(json.dumps(data, indent=4).decode("ascii"))
     else:
-        for name, details in data.items():
-            dircap = details['readonly'] if options['readonly-uri'] else details['readwrite']
-            try:
-                print(fmt % (unicode_to_output(name), unicode_to_output(dircap.decode('utf-8'))), file=stdout)
-            except (UnicodeEncodeError, UnicodeDecodeError):
-                print(fmt % (quote_output(name), quote_output(dircap)), file=stderr)
-                rc = 1
-
-    if rc == 1:
-        print("\nThis listing included aliases or caps that could not be converted to the terminal" \
-              "\noutput encoding. These are shown using backslash escapes and in quotes.", file=stderr)
-    return rc
+        def dircap(details):
+            return (
+                details['readonly']
+                if options['readonly-uri']
+                else details['readwrite']
+            ).decode("utf-8")
+
+        def format_dircap(name, details):
+            return fmt % (name, dircap(details))
+
+        max_width = max([len(quote_output(name)) for name in data.keys()] + [0])
+        fmt = "%" + str(max_width) + "s: %s"
+        output = "\n".join(list(
+            format_dircap(name, details)
+            for name, details
+            in data.items()
+        ))
+
+    if output:
+        # Show whatever we computed.  Skip this if there is no output to avoid
+        # a spurious blank line.
+        show_output(options.stdout, output)
+
+    return 0
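
The ``max_width``/``fmt`` pair above right-aligns alias names so the caps
line up in a column. For illustration, with invented data and the quoting
step omitted::

    data = {u"tahoe": u"URI:DIR2:aaaa", u"work": u"URI:DIR2:bbbb"}
    max_width = max([len(name) for name in data.keys()] + [0])
    fmt = "%" + str(max_width) + "s: %s"
    print("\n".join(fmt % (name, cap) for name, cap in sorted(data.items())))
    # tahoe: URI:DIR2:aaaa
    #  work: URI:DIR2:bbbb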


@ -1,79 +1,19 @@
from __future__ import print_function from __future__ import print_function
import json
import os
import pprint
import time import time
from collections import deque
# Python 2 compatibility # Python 2 compatibility
from future.utils import PY2 from future.utils import PY2
if PY2: if PY2:
from future.builtins import str # noqa: F401 from future.builtins import str # noqa: F401
from twisted.internet import reactor
from twisted.application import service from twisted.application import service
from twisted.application.internet import TimerService from twisted.application.internet import TimerService
from zope.interface import implementer from zope.interface import implementer
from foolscap.api import eventually, DeadReferenceError, Referenceable, Tub from foolscap.api import eventually
from allmydata.util import log from allmydata.util import log
from allmydata.util.encodingutil import quote_local_unicode_path from allmydata.interfaces import IStatsProducer
from allmydata.interfaces import RIStatsProvider, RIStatsGatherer, IStatsProducer
@implementer(IStatsProducer)
class LoadMonitor(service.MultiService):
loop_interval = 1
num_samples = 60
def __init__(self, provider, warn_if_delay_exceeds=1):
service.MultiService.__init__(self)
self.provider = provider
self.warn_if_delay_exceeds = warn_if_delay_exceeds
self.started = False
self.last = None
self.stats = deque()
self.timer = None
def startService(self):
if not self.started:
self.started = True
self.timer = reactor.callLater(self.loop_interval, self.loop)
service.MultiService.startService(self)
def stopService(self):
self.started = False
if self.timer:
self.timer.cancel()
self.timer = None
return service.MultiService.stopService(self)
def loop(self):
self.timer = None
if not self.started:
return
now = time.time()
if self.last is not None:
delay = now - self.last - self.loop_interval
if delay > self.warn_if_delay_exceeds:
log.msg(format='excessive reactor delay (%ss)', args=(delay,),
level=log.UNUSUAL)
self.stats.append(delay)
while len(self.stats) > self.num_samples:
self.stats.popleft()
self.last = now
self.timer = reactor.callLater(self.loop_interval, self.loop)
def get_stats(self):
if self.stats:
avg = sum(self.stats) / len(self.stats)
m_x = max(self.stats)
else:
avg = m_x = 0
return { 'load_monitor.avg_load': avg,
'load_monitor.max_load': m_x, }
@implementer(IStatsProducer) @implementer(IStatsProducer)
class CPUUsageMonitor(service.MultiService): class CPUUsageMonitor(service.MultiService):
@ -128,37 +68,18 @@ class CPUUsageMonitor(service.MultiService):
return s return s
@implementer(RIStatsProvider) class StatsProvider(service.MultiService):
class StatsProvider(Referenceable, service.MultiService):
def __init__(self, node, gatherer_furl): def __init__(self, node):
service.MultiService.__init__(self) service.MultiService.__init__(self)
self.node = node self.node = node
self.gatherer_furl = gatherer_furl # might be None
self.counters = {} self.counters = {}
self.stats_producers = [] self.stats_producers = []
# only run the LoadMonitor (which submits a timer every second) if
# there is a gatherer who is going to be paying attention. Our stats
# are visible through HTTP even without a gatherer, so run the rest
# of the stats (including the once-per-minute CPUUsageMonitor)
if gatherer_furl:
self.load_monitor = LoadMonitor(self)
self.load_monitor.setServiceParent(self)
self.register_producer(self.load_monitor)
self.cpu_monitor = CPUUsageMonitor() self.cpu_monitor = CPUUsageMonitor()
self.cpu_monitor.setServiceParent(self) self.cpu_monitor.setServiceParent(self)
self.register_producer(self.cpu_monitor) self.register_producer(self.cpu_monitor)
def startService(self):
if self.node and self.gatherer_furl:
nickname_utf8 = self.node.nickname.encode("utf-8")
self.node.tub.connectTo(self.gatherer_furl,
self._connected, nickname_utf8)
service.MultiService.startService(self)
def count(self, name, delta=1): def count(self, name, delta=1):
if isinstance(name, str): if isinstance(name, str):
name = name.encode("utf-8") name = name.encode("utf-8")
@ -175,155 +96,3 @@ class StatsProvider(Referenceable, service.MultiService):
ret = { 'counters': self.counters, 'stats': stats } ret = { 'counters': self.counters, 'stats': stats }
log.msg(format='get_stats() -> %(stats)s', stats=ret, level=log.NOISY) log.msg(format='get_stats() -> %(stats)s', stats=ret, level=log.NOISY)
return ret return ret
def remote_get_stats(self):
# The remote API expects keys to be bytes:
def to_bytes(d):
result = {}
for (k, v) in d.items():
if isinstance(k, str):
k = k.encode("utf-8")
result[k] = v
return result
stats = self.get_stats()
return {b"counters": to_bytes(stats["counters"]),
b"stats": to_bytes(stats["stats"])}
def _connected(self, gatherer, nickname):
gatherer.callRemoteOnly('provide', self, nickname or '')
@implementer(RIStatsGatherer)
class StatsGatherer(Referenceable, service.MultiService):
poll_interval = 60
def __init__(self, basedir):
service.MultiService.__init__(self)
self.basedir = basedir
self.clients = {}
self.nicknames = {}
self.timer = TimerService(self.poll_interval, self.poll)
self.timer.setServiceParent(self)
def get_tubid(self, rref):
return rref.getRemoteTubID()
def remote_provide(self, provider, nickname):
tubid = self.get_tubid(provider)
if tubid == '<unauth>':
print("WARNING: failed to get tubid for %s (%s)" % (provider, nickname))
# don't add to clients to poll (polluting data) don't care about disconnect
return
self.clients[tubid] = provider
self.nicknames[tubid] = nickname
def poll(self):
for tubid,client in self.clients.items():
nickname = self.nicknames.get(tubid)
d = client.callRemote('get_stats')
d.addCallbacks(self.got_stats, self.lost_client,
callbackArgs=(tubid, nickname),
errbackArgs=(tubid,))
d.addErrback(self.log_client_error, tubid)
def lost_client(self, f, tubid):
# this is called lazily, when a get_stats request fails
del self.clients[tubid]
del self.nicknames[tubid]
f.trap(DeadReferenceError)
def log_client_error(self, f, tubid):
log.msg("StatsGatherer: error in get_stats(), peerid=%s" % tubid,
level=log.UNUSUAL, failure=f)
def got_stats(self, stats, tubid, nickname):
raise NotImplementedError()
class StdOutStatsGatherer(StatsGatherer):
verbose = True
def remote_provide(self, provider, nickname):
tubid = self.get_tubid(provider)
if self.verbose:
print('connect "%s" [%s]' % (nickname, tubid))
provider.notifyOnDisconnect(self.announce_lost_client, tubid)
StatsGatherer.remote_provide(self, provider, nickname)
def announce_lost_client(self, tubid):
print('disconnect "%s" [%s]' % (self.nicknames[tubid], tubid))
def got_stats(self, stats, tubid, nickname):
print('"%s" [%s]:' % (nickname, tubid))
pprint.pprint(stats)
class JSONStatsGatherer(StdOutStatsGatherer):
# inherit from StdOutStatsGatherer for connect/disconnect notifications
def __init__(self, basedir=u".", verbose=True):
self.verbose = verbose
StatsGatherer.__init__(self, basedir)
self.jsonfile = os.path.join(basedir, "stats.json")
if os.path.exists(self.jsonfile):
try:
with open(self.jsonfile, 'rb') as f:
self.gathered_stats = json.load(f)
except Exception:
print("Error while attempting to load stats file %s.\n"
"You may need to restore this file from a backup,"
" or delete it if no backup is available.\n" %
quote_local_unicode_path(self.jsonfile))
raise
else:
self.gathered_stats = {}
def got_stats(self, stats, tubid, nickname):
s = self.gathered_stats.setdefault(tubid, {})
s['timestamp'] = time.time()
s['nickname'] = nickname
s['stats'] = stats
self.dump_json()
def dump_json(self):
tmp = "%s.tmp" % (self.jsonfile,)
with open(tmp, 'wb') as f:
json.dump(self.gathered_stats, f)
if os.path.exists(self.jsonfile):
os.unlink(self.jsonfile)
os.rename(tmp, self.jsonfile)
class StatsGathererService(service.MultiService):
furl_file = "stats_gatherer.furl"
def __init__(self, basedir=".", verbose=False):
service.MultiService.__init__(self)
self.basedir = basedir
self.tub = Tub(certFile=os.path.join(self.basedir,
"stats_gatherer.pem"))
self.tub.setServiceParent(self)
self.tub.setOption("logLocalFailures", True)
self.tub.setOption("logRemoteFailures", True)
self.tub.setOption("expose-remote-exception-types", False)
self.stats_gatherer = JSONStatsGatherer(self.basedir, verbose)
self.stats_gatherer.setServiceParent(self)
try:
with open(os.path.join(self.basedir, "location")) as f:
location = f.read().strip()
except EnvironmentError:
raise ValueError("Unable to find 'location' in BASEDIR, please rebuild your stats-gatherer")
try:
with open(os.path.join(self.basedir, "port")) as f:
port = f.read().strip()
except EnvironmentError:
raise ValueError("Unable to find 'port' in BASEDIR, please rebuild your stats-gatherer")
self.tub.listenOn(port)
self.tub.setLocation(location)
ff = os.path.join(self.basedir, self.furl_file)
self.gatherer_furl = self.tub.registerReference(self.stats_gatherer,
furlFile=ff)
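
Note that ``dump_json`` above never rewrites ``stats.json`` in place: it serializes into a sibling ``.tmp`` file and then renames it over the old file, so a crash mid-write cannot leave a truncated stats file behind. A minimal standalone sketch of the same pattern (the helper name here is ours, not part of the codebase):

    import json
    import os

    def dump_json_atomically(path, data):
        # Serialize into a sibling temporary file first.
        tmp = "%s.tmp" % (path,)
        with open(tmp, "w") as f:
            json.dump(data, f)
        # Replace the old file only once the new one is complete.  The
        # explicit unlink keeps the rename working on platforms where
        # os.rename() refuses to overwrite an existing file.
        if os.path.exists(path):
            os.unlink(path)
        os.rename(tmp, path)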

View File

@@ -176,6 +176,9 @@ class StorageFarmBroker(service.MultiService):
     I'm also responsible for subscribing to the IntroducerClient to find out
     about new servers as they are announced by the Introducer.
 
+    :ivar _tub_maker: A one-argument callable which accepts a dictionary of
+        "handler overrides" and returns a ``foolscap.api.Tub``.
+
     :ivar StorageClientConfig storage_client_config: Values from the node
         configuration file relating to storage behavior.
     """
@@ -603,7 +606,11 @@ class _FoolscapStorage(object):
         }
 
         *nickname* and *grid-manager-certificates* are optional.
+
+        The furl will be a Unicode string on Python 3; on Python 2 it will be
+        either a native (bytes) string or a Unicode string.
         """
+        furl = furl.encode("utf-8")
         m = re.match(br'pb://(\w+)@', furl)
         assert m, furl
         tubid_s = m.group(1).lower()
@@ -739,7 +746,6 @@ class NativeStorageServer(service.MultiService):
     @ivar nickname: the server's self-reported nickname (unicode), same
     @ivar rref: the RemoteReference, if connected, otherwise None
-    @ivar remote_host: the IAddress, if connected, otherwise None
     """
 
     VERSION_DEFAULTS = UnicodeKeyDict({
@@ -771,7 +777,6 @@ class NativeStorageServer(service.MultiService):
         self.last_connect_time = None
         self.last_loss_time = None
-        self.remote_host = None
         self._rref = None
         self._is_connected = False
         self._reconnector = None
@@ -825,7 +830,7 @@ class NativeStorageServer(service.MultiService):
         else:
             return _FoolscapStorage.from_announcement(
                 self._server_id,
-                furl.encode("utf-8"),
+                furl,
                 ann,
                 storage_server,
             )
@@ -837,8 +842,6 @@ class NativeStorageServer(service.MultiService):
             # Nope
             pass
         else:
-            if isinstance(furl, str):
-                furl = furl.encode("utf-8")
             # See comment above for the _storage_from_foolscap_plugin case
             # about passing in get_rref.
             storage_server = _StorageServer(get_rref=self.get_rref)
@@ -895,8 +898,6 @@ class NativeStorageServer(service.MultiService):
         return None
     def get_announcement(self):
         return self.announcement
-    def get_remote_host(self):
-        return self.remote_host
 
     def get_connection_status(self):
         last_received = None
@@ -944,7 +945,6 @@ class NativeStorageServer(service.MultiService):
                 level=log.NOISY, parent=lp)
         self.last_connect_time = time.time()
-        self.remote_host = rref.getLocationHints()
         self._rref = rref
         self._is_connected = True
         rref.notifyOnDisconnect(self._lost)
@@ -970,7 +970,6 @@ class NativeStorageServer(service.MultiService):
         # get_connected_servers() or get_servers_for_psi()) can continue to
         # use s.get_rref().callRemote() and not worry about it being None.
         self._is_connected = False
-        self.remote_host = None
 
     def stop_connecting(self):
         # used when this descriptor has been superceded by another
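
For reference, the regular expression shown in the second hunk above is what pulls the tub ID out of a furl once it has been encoded to bytes. A minimal sketch, with a hypothetical helper name:

    import re

    def tubid_from_furl(furl):
        # A furl has the shape pb://<tubid>@<connection-hints>/<swissnum>;
        # the tubid is the base32 string before the "@".
        m = re.match(br'pb://(\w+)@', furl)
        assert m, furl
        return m.group(1).lower()

    tubid_from_furl(b"pb://abcde@nowhere/fake")  # -> b'abcde'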

View File

@@ -1,6 +1,6 @@
 from ...util.encodingutil import unicode_to_argv
 from ...scripts import runner
-from ..common_util import ReallyEqualMixin, run_cli
+from ..common_util import ReallyEqualMixin, run_cli, run_cli_unicode
 
 def parse_options(basedir, command, args):
     o = runner.Options()
@@ -10,10 +10,41 @@ def parse_options(basedir, command, args):
     return o
 
 class CLITestMixin(ReallyEqualMixin):
-    def do_cli(self, verb, *args, **kwargs):
+    """
+    A mixin for use with ``GridTestMixin`` to execute CLI commands against
+    nodes created by methods of that mixin.
+    """
+    def do_cli_unicode(self, verb, argv, client_num=0, **kwargs):
+        """
+        Run a Tahoe-LAFS CLI command.
+
+        :param verb: See ``run_cli_unicode``.
+
+        :param argv: See ``run_cli_unicode``.
+
+        :param int client_num: The number of the ``GridTestMixin``-created
+            node against which to execute the command.
+
+        :param kwargs: Additional keyword arguments to pass to
+            ``run_cli_unicode``.
+        """
         # client_num is used to execute client CLI commands on a specific
         # client.
-        client_num = kwargs.get("client_num", 0)
+        client_dir = self.get_clientdir(i=client_num)
+        nodeargs = [ u"--node-directory", client_dir ]
+        return run_cli_unicode(verb, argv, nodeargs=nodeargs, **kwargs)
+
+    def do_cli(self, verb, *args, **kwargs):
+        """
+        Like ``do_cli_unicode`` but work with ``bytes`` everywhere instead of
+        ``unicode``.
+
+        Where possible, prefer ``do_cli_unicode``.
+        """
+        # client_num is used to execute client CLI commands on a specific
+        # client.
+        client_num = kwargs.pop("client_num", 0)
         client_dir = unicode_to_argv(self.get_clientdir(i=client_num))
-        nodeargs = [ "--node-directory", client_dir ]
-        return run_cli(verb, nodeargs=nodeargs, *args, **kwargs)
+        nodeargs = [ b"--node-directory", client_dir ]
+        return run_cli(verb, *args, nodeargs=nodeargs, **kwargs)
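
A hypothetical test using the new mixin API might look like this (the class and test names are illustrative only, not part of the change):

    from twisted.trial import unittest
    from twisted.internet.defer import inlineCallbacks

    from allmydata.test.no_network import GridTestMixin
    from .common import CLITestMixin

    class ExampleAliasList(GridTestMixin, CLITestMixin, unittest.TestCase):
        @inlineCallbacks
        def test_example(self):
            self.basedir = self.mktemp()
            self.set_up_grid(oneshare=True)
            # do_cli_unicode adds --node-directory for client 0 itself.
            rc, stdout, stderr = yield self.do_cli_unicode(u"list-aliases", [])
            self.assertEqual(0, rc)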

View File

@@ -1,105 +1,126 @@
 import json
 
-from mock import patch
-
 from twisted.trial import unittest
 from twisted.internet.defer import inlineCallbacks
 
-from allmydata.util.encodingutil import unicode_to_argv
 from allmydata.scripts.common import get_aliases
 from allmydata.test.no_network import GridTestMixin
 from .common import CLITestMixin
-from ..common_util import skip_if_cannot_represent_argv
+from allmydata.util import encodingutil
 
 # see also test_create_alias
 
 class ListAlias(GridTestMixin, CLITestMixin, unittest.TestCase):
 
     @inlineCallbacks
-    def test_list(self):
-        self.basedir = "cli/ListAlias/test_list"
+    def _check_create_alias(self, alias, encoding):
+        """
+        Verify that ``tahoe create-alias`` can be used to create an alias named
+        ``alias`` when argv is encoded using ``encoding``.
+
+        :param unicode alias: The alias to try to create.
+
+        :param NoneType|str encoding: The name of an encoding to force the
+            ``create-alias`` implementation to use.  This simulates the
+            effects of setting LANG and doing other locale-foolishness without
+            actually having to mess with this process's global locale state.
+            If this is ``None`` then the encoding used will be ascii but the
+            stdio objects given to the code under test will not declare any
+            encoding (this is like Python 2 when stdio is not a tty).
+
+        :return Deferred: A Deferred that fires with success if the alias can
+            be created and that creation is reported on stdout appropriately
+            encoded or with failure if something goes wrong.
+        """
+        self.basedir = self.mktemp()
         self.set_up_grid(oneshare=True)
 
-        rc, stdout, stderr = yield self.do_cli(
-            "create-alias",
-            unicode_to_argv(u"tahoe"),
+        # We can pass an encoding into the test utilities to invoke the code
+        # under test but we can't pass such a parameter directly to the code
+        # under test.  Instead, that code looks at io_encoding.  So,
+        # monkey-patch that value to our desired value here.  This is the code
+        # that most directly takes the place of messing with LANG or the
+        # locale module.
+        self.patch(encodingutil, "io_encoding", encoding or "ascii")
+
+        rc, stdout, stderr = yield self.do_cli_unicode(
+            u"create-alias",
+            [alias],
+            encoding=encoding,
         )
 
-        self.failUnless(unicode_to_argv(u"Alias 'tahoe' created") in stdout)
-        self.failIf(stderr)
-        aliases = get_aliases(self.get_clientdir())
-        self.failUnless(u"tahoe" in aliases)
-        self.failUnless(aliases[u"tahoe"].startswith("URI:DIR2:"))
+        # Make sure the result of the create-alias command is as we want it to
+        # be.
+        self.assertEqual(u"Alias '{}' created\n".format(alias), stdout)
+        self.assertEqual("", stderr)
+        self.assertEqual(0, rc)
 
-        rc, stdout, stderr = yield self.do_cli("list-aliases", "--json")
+        # Make sure it had the intended side-effect, too - an alias created in
+        # the node filesystem state.
+        aliases = get_aliases(self.get_clientdir())
+        self.assertIn(alias, aliases)
+        self.assertTrue(aliases[alias].startswith(u"URI:DIR2:"))
+
+        # And inspect the state via the user interface list-aliases command
+        # too.
+        rc, stdout, stderr = yield self.do_cli_unicode(
+            u"list-aliases",
+            [u"--json"],
+            encoding=encoding,
+        )
 
         self.assertEqual(0, rc)
         data = json.loads(stdout)
-        self.assertIn(u"tahoe", data)
-        data = data[u"tahoe"]
-        self.assertIn("readwrite", data)
-        self.assertIn("readonly", data)
+        self.assertIn(alias, data)
+        data = data[alias]
+        self.assertIn(u"readwrite", data)
+        self.assertIn(u"readonly", data)
 
-    @inlineCallbacks
-    def test_list_unicode_mismatch_json(self):
-        """
-        pretty hack-y test, but we want to cover the 'except' on Unicode
-        errors paths and I can't come up with a nicer way to trigger
-        this
-        """
-        self.basedir = "cli/ListAlias/test_list_unicode_mismatch_json"
-        skip_if_cannot_represent_argv(u"tahoe\u263A")
-        self.set_up_grid(oneshare=True)
-
-        rc, stdout, stderr = yield self.do_cli(
-            "create-alias",
-            unicode_to_argv(u"tahoe\u263A"),
-        )
-
-        self.failUnless(unicode_to_argv(u"Alias 'tahoe\u263A' created") in stdout)
-        self.failIf(stderr)
-
-        booms = []
-
-        def boom(out, indent=4):
-            if not len(booms):
-                booms.append(out)
-                raise UnicodeEncodeError("foo", u"foo", 3, 5, "foo")
-            return str(out)
-
-        with patch("allmydata.scripts.tahoe_add_alias.json.dumps", boom):
-            aliases = get_aliases(self.get_clientdir())
-            self.failUnless(u"tahoe\u263A" in aliases)
-            self.failUnless(aliases[u"tahoe\u263A"].startswith("URI:DIR2:"))
-
-            rc, stdout, stderr = yield self.do_cli("list-aliases", "--json")
-
-            self.assertEqual(1, rc)
-            self.assertIn("could not be converted", stderr)
-
-    @inlineCallbacks
-    def test_list_unicode_mismatch(self):
-        self.basedir = "cli/ListAlias/test_list_unicode_mismatch"
-        skip_if_cannot_represent_argv(u"tahoe\u263A")
-        self.set_up_grid(oneshare=True)
-
-        rc, stdout, stderr = yield self.do_cli(
-            "create-alias",
-            unicode_to_argv(u"tahoe\u263A"),
-        )
-
-        def boom(out):
-            print("boom {}".format(out))
-            return out
-            raise UnicodeEncodeError("foo", u"foo", 3, 5, "foo")
-
-        with patch("allmydata.scripts.tahoe_add_alias.unicode_to_output", boom):
-            self.failUnless(unicode_to_argv(u"Alias 'tahoe\u263A' created") in stdout)
-            self.failIf(stderr)
-            aliases = get_aliases(self.get_clientdir())
-            self.failUnless(u"tahoe\u263A" in aliases)
-            self.failUnless(aliases[u"tahoe\u263A"].startswith("URI:DIR2:"))
-
-            rc, stdout, stderr = yield self.do_cli("list-aliases")
-
-            self.assertEqual(1, rc)
-            self.assertIn("could not be converted", stderr)
+    def test_list_none(self):
+        """
+        An alias composed of all ASCII-encodeable code points can be created when
+        stdio aren't clearly marked with an encoding.
+        """
+        return self._check_create_alias(
+            u"tahoe",
+            encoding=None,
+        )
+
+    def test_list_ascii(self):
+        """
+        An alias composed of all ASCII-encodeable code points can be created when
+        the active encoding is ASCII.
+        """
+        return self._check_create_alias(
+            u"tahoe",
+            encoding="ascii",
+        )
+
+    def test_list_latin_1(self):
+        """
+        An alias composed of all Latin-1-encodeable code points can be created
+        when the active encoding is Latin-1.
+
+        This is very similar to ``test_list_utf_8`` but the assumption of
+        UTF-8 is nearly ubiquitous and explicitly exercising the codepaths
+        with a UTF-8-incompatible encoding helps flush out unintentional UTF-8
+        assumptions.
+        """
+        return self._check_create_alias(
+            u"taho\N{LATIN SMALL LETTER E WITH ACUTE}",
+            encoding="latin-1",
+        )
+
+    def test_list_utf_8(self):
+        """
+        An alias composed of all UTF-8-encodeable code points can be created when
+        the active encoding is UTF-8.
+        """
+        return self._check_create_alias(
+            u"tahoe\N{SNOWMAN}",
+            encoding="utf-8",
+        )
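
The Latin-1 case matters because Latin-1 and UTF-8 disagree about every code point above U+007F, so a codepath that quietly assumes UTF-8 fails loudly under it. A quick illustration of the mismatch:

    alias = u"taho\N{LATIN SMALL LETTER E WITH ACUTE}"
    alias.encode("latin-1")       # b'taho\xe9' -- a single byte
    alias.encode("utf-8")         # b'taho\xc3\xa9' -- two bytes
    b"taho\xe9".decode("utf-8")   # raises UnicodeDecodeError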

View File

@@ -661,7 +661,7 @@ starting copy, 2 files, 1 directories
         # This test ensures that tahoe will copy a file from the grid to
         # a local directory without a specified file name.
         # https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2027
-        self.basedir = "cli/Cp/cp_verbose"
+        self.basedir = "cli/Cp/ticket_2027"
         self.set_up_grid(oneshare=True)
 
         # Write a test file, which we'll copy to the grid.

View File

@@ -11,6 +11,8 @@ __all__ = [
     "skipIf",
 ]
 
+from past.builtins import chr as byteschr
+
 import os, random, struct
 import six
 import tempfile
@@ -214,7 +216,7 @@ class UseNode(object):
     :ivar FilePath basedir: The base directory of the node.
 
-    :ivar bytes introducer_furl: The introducer furl with which to
+    :ivar str introducer_furl: The introducer furl with which to
         configure the client.
 
     :ivar dict[bytes, bytes] node_config: Configuration items for the *node*
@@ -225,7 +227,8 @@ class UseNode(object):
     plugin_config = attr.ib()
     storage_plugin = attr.ib()
     basedir = attr.ib(validator=attr.validators.instance_of(FilePath))
-    introducer_furl = attr.ib(validator=attr.validators.instance_of(bytes))
+    introducer_furl = attr.ib(validator=attr.validators.instance_of(str),
+                              converter=six.ensure_str)
     node_config = attr.ib(default=attr.Factory(dict))
 
     config = attr.ib(default=None)
@@ -1056,7 +1059,7 @@ def _corrupt_share_data_last_byte(data, debug=False):
     sharedatasize = struct.unpack(">Q", data[0x0c+0x08:0x0c+0x0c+8])[0]
     offset = 0x0c+0x44+sharedatasize-1
-    newdata = data[:offset] + chr(ord(data[offset])^0xFF) + data[offset+1:]
+    newdata = data[:offset] + byteschr(ord(data[offset:offset+1])^0xFF) + data[offset+1:]
     if debug:
         log.msg("testing: flipping all bits of byte at offset %d: %r, newdata: %r" % (offset, data[offset], newdata[offset]))
     return newdata
@@ -1084,7 +1087,7 @@ def _corrupt_crypttext_hash_tree_byte_x221(data, debug=False):
     assert sharevernum in (1, 2), "This test is designed to corrupt immutable shares of v1 or v2 in specific ways."
     if debug:
         log.msg("original data: %r" % (data,))
-    return data[:0x0c+0x221] + chr(ord(data[0x0c+0x221])^0x02) + data[0x0c+0x2210+1:]
+    return data[:0x0c+0x221] + byteschr(ord(data[0x0c+0x221:0x0c+0x221+1])^0x02) + data[0x0c+0x2210+1:]
 
 def _corrupt_block_hashes(data, debug=False):
     """Scramble the file data -- the field containing the block hash tree

View File

@@ -5,6 +5,10 @@ import time
 import signal
 from random import randrange
 from six.moves import StringIO
+from io import (
+    TextIOWrapper,
+    BytesIO,
+)
 
 from twisted.internet import reactor, defer
 from twisted.python import failure
@@ -35,14 +39,66 @@ def skip_if_cannot_represent_argv(u):
     except UnicodeEncodeError:
         raise unittest.SkipTest("A non-ASCII argv could not be encoded on this platform.")
 
-def run_cli(verb, *args, **kwargs):
-    precondition(not [True for arg in args if not isinstance(arg, str)],
-                 "arguments to do_cli must be strs -- convert using unicode_to_argv", args=args)
-    nodeargs = kwargs.get("nodeargs", [])
+
+def _getvalue(io):
+    """
+    Read out the complete contents of a file-like object.
+    """
+    io.seek(0)
+    return io.read()
+
+
+def run_cli_bytes(verb, *args, **kwargs):
+    """
+    Run a Tahoe-LAFS CLI command specified as bytes.
+
+    Most code should prefer ``run_cli_unicode`` which deals with all the
+    necessary encoding considerations.  This helper still exists so that novel
+    misconfigurations can be explicitly tested (for example, receiving UTF-8
+    bytes when the system encoding claims to be ASCII).
+
+    :param bytes verb: The command to run.  For example, ``b"create-node"``.
+
+    :param [bytes] args: The arguments to pass to the command.  For example,
+        ``(b"--hostname=localhost",)``.
+
+    :param [bytes] nodeargs: Extra arguments to pass to the Tahoe executable
+        before ``verb``.
+
+    :param bytes stdin: Text to pass to the command via stdin.
+
+    :param NoneType|str encoding: The name of an encoding which stdout and
+        stderr will be configured to use.  ``None`` means stdout and stderr
+        will accept bytes and unicode and use the default system encoding for
+        translating between them.
+    """
+    nodeargs = kwargs.pop("nodeargs", [])
+    encoding = kwargs.pop("encoding", None)
+    precondition(
+        all(isinstance(arg, bytes) for arg in [verb] + nodeargs + list(args)),
+        "arguments to run_cli must be bytes -- convert using unicode_to_argv",
+        verb=verb,
+        args=args,
+        nodeargs=nodeargs,
+    )
     argv = nodeargs + [verb] + list(args)
-    stdin = StringIO(kwargs.get("stdin", ""))
-    stdout = StringIO()
-    stderr = StringIO()
+    stdin = kwargs.get("stdin", "")
+    if encoding is None:
+        # The original behavior, the Python 2 behavior, is to accept either
+        # bytes or unicode and try to automatically encode or decode as
+        # necessary.  This works okay for ASCII and if LANG is set
+        # appropriately.  These aren't great constraints so we should move
+        # away from this behavior.
+        stdout = StringIO()
+        stderr = StringIO()
+    else:
+        # The new behavior, the Python 3 behavior, is to accept unicode and
+        # encode it using a specific encoding.  For older versions of Python
+        # 3, the encoding is determined from LANG (bad) but for newer Python
+        # 3, the encoding is always utf-8 (good).  Tests can pass in different
+        # encodings to exercise different behaviors.
+        stdout = TextIOWrapper(BytesIO(), encoding)
+        stderr = TextIOWrapper(BytesIO(), encoding)
     d = defer.succeed(argv)
     d.addCallback(runner.parse_or_exit_with_explanation, stdout=stdout, stderr=stderr, stdin=stdin)
     d.addCallback(
@@ -52,13 +108,65 @@ def run_cli(verb, *args, **kwargs):
         stderr=stderr,
     )
     def _done(rc):
-        return 0, stdout.getvalue(), stderr.getvalue()
+        return 0, _getvalue(stdout), _getvalue(stderr)
     def _err(f):
         f.trap(SystemExit)
-        return f.value.code, stdout.getvalue(), stderr.getvalue()
+        return f.value.code, _getvalue(stdout), _getvalue(stderr)
     d.addCallbacks(_done, _err)
     return d
+
+
+def run_cli_unicode(verb, argv, nodeargs=None, stdin=None, encoding=None):
+    """
+    Run a Tahoe-LAFS CLI command.
+
+    :param unicode verb: The command to run.  For example, ``u"create-node"``.
+
+    :param [unicode] argv: The arguments to pass to the command.  For example,
+        ``[u"--hostname=localhost"]``.
+
+    :param [unicode] nodeargs: Extra arguments to pass to the Tahoe executable
+        before ``verb``.
+
+    :param unicode stdin: Text to pass to the command via stdin.
+
+    :param NoneType|str encoding: The name of an encoding to use for all
+        bytes/unicode conversions necessary *and* the encoding to cause stdio
+        to declare with its ``encoding`` attribute.  ``None`` means ASCII will
+        be used and no declaration will be made at all.
+    """
+    if nodeargs is None:
+        nodeargs = []
+    precondition(
+        all(isinstance(arg, unicode) for arg in [verb] + nodeargs + argv),
+        "arguments to run_cli_unicode must be unicode",
+        verb=verb,
+        nodeargs=nodeargs,
+        argv=argv,
+    )
+    codec = encoding or "ascii"
+    encode = lambda t: None if t is None else t.encode(codec)
+    d = run_cli_bytes(
+        encode(verb),
+        nodeargs=list(encode(arg) for arg in nodeargs),
+        stdin=encode(stdin),
+        encoding=encoding,
+        *list(encode(arg) for arg in argv)
+    )
+    def maybe_decode(result):
+        code, stdout, stderr = result
+        if isinstance(stdout, bytes):
+            stdout = stdout.decode(codec)
+        if isinstance(stderr, bytes):
+            stderr = stderr.decode(codec)
+        return code, stdout, stderr
+    d.addCallback(maybe_decode)
+    return d
+
+
+run_cli = run_cli_bytes
+
 
 def parse_cli(*argv):
     # This parses the CLI options (synchronously), and returns the Options
     # argument, or throws usage.UsageError if something went wrong.
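
As a usage sketch (the node directory path and alias name here are illustrative), a test that wants ASCII-declared stdio would call the new helper like this:

    from twisted.internet.defer import inlineCallbacks

    @inlineCallbacks
    def example():
        # stdout/stderr come back as unicode, decoded with the same codec
        # that the fake stdio objects were configured to declare.
        rc, out, err = yield run_cli_unicode(
            u"create-alias",
            [u"work"],
            nodeargs=[u"--node-directory", u"/tmp/example-node"],
            encoding="ascii",
        )
        assert rc == 0, err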

View File

@@ -239,7 +239,7 @@ def make_peer(s, i):
     peerid = base32.b2a(tagged_hash(b"peerid", b"%d" % i)[:20])
     fss = FakeStorageServer(peerid, s)
     ann = {
-        "anonymous-storage-FURL": b"pb://%s@nowhere/fake" % (peerid,),
+        "anonymous-storage-FURL": "pb://%s@nowhere/fake" % (str(peerid, "utf-8"),),
         "permutation-seed-base32": peerid,
     }
     return Peer(peerid=peerid, storage_server=fss, announcement=ann)

View File

@@ -156,7 +156,7 @@ class WebResultsRendering(unittest.TestCase):
         for (key_s, binary_tubid, nickname) in servers:
             server_id = key_s
             tubid_b32 = base32.b2a(binary_tubid)
-            furl = b"pb://%s@nowhere/fake" % tubid_b32
+            furl = "pb://%s@nowhere/fake" % str(tubid_b32, "utf-8")
             ann = { "version": 0,
                     "service-name": "storage",
                     "anonymous-storage-FURL": furl,

View File

@@ -88,7 +88,7 @@ from .strategies import (
     write_capabilities,
 )
 
-SOME_FURL = b"pb://abcde@nowhere/fake"
+SOME_FURL = "pb://abcde@nowhere/fake"
 
 BASECONFIG = "[client]\n"

View File

@@ -216,9 +216,9 @@ class Client(AsyncTestCase):
         def _received(key_s, ann):
             announcements.append( (key_s, ann) )
         ic1.subscribe_to("storage", _received)
-        furl1 = b"pb://62ubehyunnyhzs7r6vdonnm2hpi52w6y@127.0.0.1:36106/gydnp"
-        furl1a = b"pb://62ubehyunnyhzs7r6vdonnm2hpi52w6y@127.0.0.1:7777/gydnp"
-        furl2 = b"pb://ttwwooyunnyhzs7r6vdonnm2hpi52w6y@127.0.0.1:36106/ttwwoo"
+        furl1 = "pb://62ubehyunnyhzs7r6vdonnm2hpi52w6y@127.0.0.1:36106/gydnp"
+        furl1a = "pb://62ubehyunnyhzs7r6vdonnm2hpi52w6y@127.0.0.1:7777/gydnp"
+        furl2 = "pb://ttwwooyunnyhzs7r6vdonnm2hpi52w6y@127.0.0.1:36106/ttwwoo"
 
         private_key, public_key = ed25519.create_signing_keypair()
         public_key_str = ed25519.string_from_verifying_key(public_key)
@@ -242,7 +242,7 @@ class Client(AsyncTestCase):
             self.failUnlessEqual(len(announcements), 1)
             key_s,ann = announcements[0]
             self.failUnlessEqual(key_s, pubkey_s)
-            self.failUnlessEqual(ensure_binary(ann["anonymous-storage-FURL"]), furl1)
+            self.failUnlessEqual(ann["anonymous-storage-FURL"], furl1)
             self.failUnlessEqual(ann["my-version"], "ver23")
         d.addCallback(_then1)
@@ -276,7 +276,7 @@ class Client(AsyncTestCase):
             self.failUnlessEqual(len(announcements), 2)
             key_s,ann = announcements[-1]
             self.failUnlessEqual(key_s, pubkey_s)
-            self.failUnlessEqual(ensure_binary(ann["anonymous-storage-FURL"]), furl1)
+            self.failUnlessEqual(ann["anonymous-storage-FURL"], furl1)
             self.failUnlessEqual(ann["my-version"], "ver24")
         d.addCallback(_then3)
@@ -288,7 +288,7 @@ class Client(AsyncTestCase):
             self.failUnlessEqual(len(announcements), 3)
             key_s,ann = announcements[-1]
             self.failUnlessEqual(key_s, pubkey_s)
-            self.failUnlessEqual(ensure_binary(ann["anonymous-storage-FURL"]), furl1a)
+            self.failUnlessEqual(ann["anonymous-storage-FURL"], furl1a)
             self.failUnlessEqual(ann["my-version"], "ver23")
         d.addCallback(_then4)
@@ -304,7 +304,7 @@ class Client(AsyncTestCase):
             self.failUnlessEqual(len(announcements2), 1)
             key_s,ann = announcements2[-1]
             self.failUnlessEqual(key_s, pubkey_s)
-            self.failUnlessEqual(ensure_binary(ann["anonymous-storage-FURL"]), furl1a)
+            self.failUnlessEqual(ann["anonymous-storage-FURL"], furl1a)
             self.failUnlessEqual(ann["my-version"], "ver23")
         d.addCallback(_then5)
         return d
@@ -316,7 +316,7 @@ class Server(AsyncTestCase):
                                "introducer.furl", u"my_nickname",
                                "ver23", "oldest_version", realseq,
                                FilePath(self.mktemp()))
-        furl1 = b"pb://62ubehyunnyhzs7r6vdonnm2hpi52w6y@127.0.0.1:36106/gydnp"
+        furl1 = "pb://62ubehyunnyhzs7r6vdonnm2hpi52w6y@127.0.0.1:36106/gydnp"
 
         private_key, _ = ed25519.create_signing_keypair()
@@ -414,7 +414,7 @@ class Queue(SystemTestMixin, AsyncTestCase):
         c = IntroducerClient(tub2, ifurl,
                              u"nickname", "version", "oldest", fakeseq,
                              FilePath(self.mktemp()))
-        furl1 = b"pb://onug64tu@127.0.0.1:123/short" # base32("short")
+        furl1 = "pb://onug64tu@127.0.0.1:123/short" # base32("short")
         private_key, _ = ed25519.create_signing_keypair()
 
         d = introducer.disownServiceParent()
@@ -436,7 +436,7 @@ class Queue(SystemTestMixin, AsyncTestCase):
         def _done(ign):
             v = introducer.get_announcements()[0]
             furl = v.announcement["anonymous-storage-FURL"]
-            self.failUnlessEqual(ensure_binary(furl), furl1)
+            self.failUnlessEqual(furl, furl1)
         d.addCallback(_done)
 
         # now let the ack get back
@@ -462,7 +462,7 @@ class SystemTest(SystemTestMixin, AsyncTestCase):
         iff = os.path.join(self.basedir, "introducer.furl")
         tub = self.central_tub
         ifurl = self.central_tub.registerReference(introducer, furlFile=iff)
-        self.introducer_furl = ifurl.encode("utf-8")
+        self.introducer_furl = ifurl
 
         # we have 5 clients who publish themselves as storage servers, and a
         # sixth which does which not. All 6 clients subscriber to hear about
@@ -503,7 +503,7 @@ class SystemTest(SystemTestMixin, AsyncTestCase):
                 subscribing_clients.append(c)
                 expected_announcements[i] += 1 # all expect a 'storage' announcement
 
-            node_furl = tub.registerReference(Referenceable()).encode("utf-8")
+            node_furl = tub.registerReference(Referenceable())
             private_key, public_key = ed25519.create_signing_keypair()
             public_key_str = ed25519.string_from_verifying_key(public_key)
             privkeys[i] = private_key
@@ -520,7 +520,7 @@ class SystemTest(SystemTestMixin, AsyncTestCase):
             if i == 2:
                 # also publish something that nobody cares about
-                boring_furl = tub.registerReference(Referenceable()).encode("utf-8")
+                boring_furl = tub.registerReference(Referenceable())
                 c.publish("boring", make_ann(boring_furl), private_key)
 
             c.setServiceParent(self.parent)
@@ -658,7 +658,7 @@ class SystemTest(SystemTestMixin, AsyncTestCase):
             self.create_tub(self.central_portnum)
             newfurl = self.central_tub.registerReference(self.the_introducer,
                                                          furlFile=iff)
-            assert ensure_binary(newfurl) == self.introducer_furl
+            assert newfurl == self.introducer_furl
         d.addCallback(_restart_introducer_tub)
 
         d.addCallback(_wait_for_connected)
@@ -710,7 +710,7 @@ class SystemTest(SystemTestMixin, AsyncTestCase):
             self.the_introducer = introducer
            newfurl = self.central_tub.registerReference(self.the_introducer,
                                                         furlFile=iff)
-            assert ensure_binary(newfurl) == self.introducer_furl
+            assert newfurl == self.introducer_furl
         d.addCallback(_restart_introducer)
 
         d.addCallback(_wait_for_connected)
@@ -744,8 +744,6 @@ class SystemTest(SystemTestMixin, AsyncTestCase):
 class FakeRemoteReference(object):
     def notifyOnDisconnect(self, *args, **kwargs): pass
     def getRemoteTubID(self): return "62ubehyunnyhzs7r6vdonnm2hpi52w6y"
-    def getLocationHints(self): return ["tcp:here.example.com:1234",
-                                        "tcp:there.example.com2345"]
     def getPeer(self): return address.IPv4Address("TCP", "remote.example.com",
                                                    3456)
@@ -756,7 +754,7 @@ class ClientInfo(AsyncTestCase):
         client_v2 = IntroducerClient(tub, introducer_furl, NICKNAME % u"v2",
                                      "my_version", "oldest",
                                      fakeseq, FilePath(self.mktemp()))
-        #furl1 = b"pb://62ubehyunnyhzs7r6vdonnm2hpi52w6y@127.0.0.1:0/swissnum"
+        #furl1 = "pb://62ubehyunnyhzs7r6vdonnm2hpi52w6y@127.0.0.1:0/swissnum"
         #ann_s = make_ann_t(client_v2, furl1, None, 10)
         #introducer.remote_publish_v2(ann_s, Referenceable())
         subscriber = FakeRemoteReference()
@@ -777,7 +775,7 @@ class Announcements(AsyncTestCase):
         client_v2 = IntroducerClient(tub, introducer_furl, u"nick-v2",
                                      "my_version", "oldest",
                                      fakeseq, FilePath(self.mktemp()))
-        furl1 = b"pb://62ubehyunnyhzs7r6vdonnm2hpi52w6y@127.0.0.1:0/swissnum"
+        furl1 = "pb://62ubehyunnyhzs7r6vdonnm2hpi52w6y@127.0.0.1:0/swissnum"
         private_key, public_key = ed25519.create_signing_keypair()
         public_key_str = remove_prefix(ed25519.string_from_verifying_key(public_key), b"pub-")
@@ -792,7 +790,7 @@ class Announcements(AsyncTestCase):
         self.failUnlessEqual(a[0].nickname, u"nick-v2")
         self.failUnlessEqual(a[0].service_name, "storage")
         self.failUnlessEqual(a[0].version, "my_version")
-        self.failUnlessEqual(ensure_binary(a[0].announcement["anonymous-storage-FURL"]), furl1)
+        self.failUnlessEqual(a[0].announcement["anonymous-storage-FURL"], furl1)
 
     def _load_cache(self, cache_filepath):
         with cache_filepath.open() as f:
@@ -825,7 +823,7 @@ class Announcements(AsyncTestCase):
         ic = c.introducer_clients[0]
         private_key, public_key = ed25519.create_signing_keypair()
         public_key_str = remove_prefix(ed25519.string_from_verifying_key(public_key), b"pub-")
-        furl1 = b"pb://onug64tu@127.0.0.1:123/short" # base32("short")
+        furl1 = "pb://onug64tu@127.0.0.1:123/short" # base32("short")
 
         ann_t = make_ann_t(ic, furl1, private_key, 1)
         ic.got_announcements([ann_t])
@@ -836,12 +834,12 @@ class Announcements(AsyncTestCase):
         self.failUnlessEqual(len(announcements), 1)
         self.failUnlessEqual(ensure_binary(announcements[0]['key_s']), public_key_str)
         ann = announcements[0]["ann"]
-        self.failUnlessEqual(ensure_binary(ann["anonymous-storage-FURL"]), furl1)
+        self.failUnlessEqual(ann["anonymous-storage-FURL"], furl1)
         self.failUnlessEqual(ann["seqnum"], 1)
 
         # a new announcement that replaces the first should replace the
         # cached entry, not duplicate it
-        furl2 = furl1 + b"er"
+        furl2 = furl1 + "er"
         ann_t2 = make_ann_t(ic, furl2, private_key, 2)
         ic.got_announcements([ann_t2])
         yield flushEventualQueue()
@@ -849,14 +847,14 @@ class Announcements(AsyncTestCase):
         self.failUnlessEqual(len(announcements), 1)
         self.failUnlessEqual(ensure_binary(announcements[0]['key_s']), public_key_str)
         ann = announcements[0]["ann"]
-        self.failUnlessEqual(ensure_binary(ann["anonymous-storage-FURL"]), furl2)
+        self.failUnlessEqual(ann["anonymous-storage-FURL"], furl2)
         self.failUnlessEqual(ann["seqnum"], 2)
 
         # but a third announcement with a different key should add to the
         # cache
         private_key2, public_key2 = ed25519.create_signing_keypair()
         public_key_str2 = remove_prefix(ed25519.string_from_verifying_key(public_key2), b"pub-")
-        furl3 = b"pb://onug64tu@127.0.0.1:456/short"
+        furl3 = "pb://onug64tu@127.0.0.1:456/short"
         ann_t3 = make_ann_t(ic, furl3, private_key2, 1)
         ic.got_announcements([ann_t3])
         yield flushEventualQueue()
@@ -866,7 +864,7 @@ class Announcements(AsyncTestCase):
         self.failUnlessEqual(set([public_key_str, public_key_str2]),
                              set([ensure_binary(a["key_s"]) for a in announcements]))
         self.failUnlessEqual(set([furl2, furl3]),
-                             set([ensure_binary(a["ann"]["anonymous-storage-FURL"])
+                             set([a["ann"]["anonymous-storage-FURL"]
                                   for a in announcements]))
 
         # test loading
@@ -882,9 +880,9 @@ class Announcements(AsyncTestCase):
         yield flushEventualQueue()
         self.failUnless(public_key_str in announcements)
-        self.failUnlessEqual(ensure_binary(announcements[public_key_str]["anonymous-storage-FURL"]),
+        self.failUnlessEqual(announcements[public_key_str]["anonymous-storage-FURL"],
                              furl2)
-        self.failUnlessEqual(ensure_binary(announcements[public_key_str2]["anonymous-storage-FURL"]),
+        self.failUnlessEqual(announcements[public_key_str2]["anonymous-storage-FURL"],
                              furl3)
 
         c2 = yield create_client(basedir.path)
@@ -999,10 +997,10 @@ class DecodeFurl(SyncTestCase):
     def test_decode(self):
         # make sure we have a working base64.b32decode. The one in
         # python2.4.[01] was broken.
-        furl = b'pb://t5g7egomnnktbpydbuijt6zgtmw4oqi5@127.0.0.1:51857/hfzv36i'
-        m = re.match(br'pb://(\w+)@', furl)
+        furl = 'pb://t5g7egomnnktbpydbuijt6zgtmw4oqi5@127.0.0.1:51857/hfzv36i'
+        m = re.match(r'pb://(\w+)@', furl)
         assert m
-        nodeid = b32decode(m.group(1).upper())
+        nodeid = b32decode(m.group(1).upper().encode("ascii"))
         self.failUnlessEqual(nodeid, b"\x9fM\xf2\x19\xcckU0\xbf\x03\r\x10\x99\xfb&\x9b-\xc7A\x1d")
 
 class Signatures(SyncTestCase):
@@ -1043,7 +1041,7 @@ class Signatures(SyncTestCase):
         mock_tub = Mock()
         ic = IntroducerClient(
             mock_tub,
-            b"pb://",
+            "pb://",
             u"fake_nick",
             "0.0.0",
             "1.2.3",

View File

@@ -1,5 +1,15 @@
 # -*- coding: utf-8 -*-
+"""
+Ported to Python 3.
+"""
 from __future__ import print_function
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import unicode_literals
+
+from future.utils import PY2
+if PY2:
+    from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401
 
 from allmydata.test import common
 from allmydata.monitor import Monitor
@@ -62,7 +72,7 @@ class RepairTestMixin(object):
         c0 = self.g.clients[0]
         c1 = self.g.clients[1]
         c0.encoding_params['max_segment_size'] = 12
-        d = c0.upload(upload.Data(common.TEST_DATA, convergence=""))
+        d = c0.upload(upload.Data(common.TEST_DATA, convergence=b""))
         def _stash_uri(ur):
             self.uri = ur.get_uri()
             self.c0_filenode = c0.create_node_from_uri(ur.get_uri())
@@ -464,7 +474,7 @@ class Repairer(GridTestMixin, unittest.TestCase, RepairTestMixin,
         # previously-deleted share #2.
         d.addCallback(lambda ignored:
-                      self.delete_shares_numbered(self.uri, range(3, 10+1)))
+                      self.delete_shares_numbered(self.uri, list(range(3, 10+1))))
         d.addCallback(lambda ignored: download_to_data(self.c1_filenode))
         d.addCallback(lambda newdata:
                       self.failUnlessEqual(newdata, common.TEST_DATA))
@@ -476,7 +486,7 @@ class Repairer(GridTestMixin, unittest.TestCase, RepairTestMixin,
         self.set_up_grid(num_clients=2)
         d = self.upload_and_stash()
         d.addCallback(lambda ignored:
-                      self.delete_shares_numbered(self.uri, range(7)))
+                      self.delete_shares_numbered(self.uri, list(range(7))))
         d.addCallback(lambda ignored: self._stash_counts())
         d.addCallback(lambda ignored:
                       self.c0_filenode.check_and_repair(Monitor(),
@@ -509,7 +519,7 @@ class Repairer(GridTestMixin, unittest.TestCase, RepairTestMixin,
         # previously-deleted share #2.
         d.addCallback(lambda ignored:
-                      self.delete_shares_numbered(self.uri, range(3, 10+1)))
+                      self.delete_shares_numbered(self.uri, list(range(3, 10+1))))
         d.addCallback(lambda ignored: download_to_data(self.c1_filenode))
         d.addCallback(lambda newdata:
                       self.failUnlessEqual(newdata, common.TEST_DATA))
@@ -527,7 +537,7 @@ class Repairer(GridTestMixin, unittest.TestCase, RepairTestMixin,
         # distributing the shares widely enough to satisfy the default
         # happiness setting.
         def _delete_some_servers(ignored):
-            for i in xrange(7):
+            for i in range(7):
                 self.g.remove_server(self.g.servers_by_number[i].my_nodeid)
 
             assert len(self.g.servers_by_number) == 3
@@ -640,7 +650,7 @@ class Repairer(GridTestMixin, unittest.TestCase, RepairTestMixin,
         # downloading and has the right contents. This can't work
         # unless it has already repaired the previously-corrupted share.
         def _then_delete_7_and_try_a_download(unused=None):
-            shnums = range(10)
+            shnums = list(range(10))
             shnums.remove(shnum)
             random.shuffle(shnums)
             for sharenum in shnums[:7]:
@@ -679,10 +689,10 @@ class Repairer(GridTestMixin, unittest.TestCase, RepairTestMixin,
         self.basedir = "repairer/Repairer/test_tiny_reads"
         self.set_up_grid()
         c0 = self.g.clients[0]
-        DATA = "a"*135
+        DATA = b"a"*135
         c0.encoding_params['k'] = 22
         c0.encoding_params['n'] = 66
-        d = c0.upload(upload.Data(DATA, convergence=""))
+        d = c0.upload(upload.Data(DATA, convergence=b""))
         def _then(ur):
             self.uri = ur.get_uri()
             self.delete_shares_numbered(self.uri, [0])
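
The ``list(range(...))`` and ``xrange`` edits above are the usual Python 3 laziness fix: ``range()`` no longer returns a list, so anything that mutates or shuffles the result must materialize it first:

    import random

    shnums = list(range(10))  # a real list on both Python 2 and 3
    shnums.remove(3)          # range(10).remove(3) raises AttributeError on Python 3
    random.shuffle(shnums)    # shuffle also needs a mutable sequence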

View File

@@ -142,9 +142,8 @@ class BinTahoe(common_util.SignalMixin, unittest.TestCase, RunBinTahoeMixin):
 
 class CreateNode(unittest.TestCase):
-    # exercise "tahoe create-node", create-introducer,
-    # create-key-generator, and create-stats-gatherer, by calling the
-    # corresponding code as a subroutine.
+    # exercise "tahoe create-node", create-introducer, and
+    # create-key-generator by calling the corresponding code as a subroutine.
 
     def workdir(self, name):
         basedir = os.path.join("test_runner", "CreateNode", name)
@@ -243,48 +242,11 @@ class CreateNode(unittest.TestCase):
     def test_introducer(self):
         self.do_create("introducer", "--hostname=127.0.0.1")
 
-    def test_stats_gatherer(self):
-        self.do_create("stats-gatherer", "--hostname=127.0.0.1")
-
     def test_subcommands(self):
         # no arguments should trigger a command listing, via UsageError
         self.failUnlessRaises(usage.UsageError, parse_cli,
                               )
 
-    @inlineCallbacks
-    def test_stats_gatherer_good_args(self):
-        rc,out,err = yield run_cli("create-stats-gatherer", "--hostname=foo",
-                                   self.mktemp())
-        self.assertEqual(rc, 0)
-        rc,out,err = yield run_cli("create-stats-gatherer",
-                                   "--location=tcp:foo:1234",
-                                   "--port=tcp:1234", self.mktemp())
-        self.assertEqual(rc, 0)
-
-    def test_stats_gatherer_bad_args(self):
-        def _test(args):
-            argv = args.split()
-            self.assertRaises(usage.UsageError, parse_cli, *argv)
-
-        # missing hostname/location/port
-        _test("create-stats-gatherer D")
-
-        # missing port
-        _test("create-stats-gatherer --location=foo D")
-
-        # missing location
-        _test("create-stats-gatherer --port=foo D")
-
-        # can't provide both
-        _test("create-stats-gatherer --hostname=foo --port=foo D")
-
-        # can't provide both
-        _test("create-stats-gatherer --hostname=foo --location=foo D")
-
-        # can't provide all three
-        _test("create-stats-gatherer --hostname=foo --location=foo --port=foo D")
 
 class RunNode(common_util.SignalMixin, unittest.TestCase, pollmixin.PollMixin,
               RunBinTahoeMixin):

View File

@@ -17,7 +17,6 @@ from json import (
 )
 
 import hashlib
-from mock import Mock
 
 from fixtures import (
     TempDir,
 )
@@ -44,12 +43,20 @@ from hyperlink import (
     URL,
 )
 
+import attr
+
+from twisted.internet.interfaces import (
+    IStreamClientEndpoint,
+)
 from twisted.application.service import (
     Service,
 )
 from twisted.trial import unittest
-from twisted.internet.defer import succeed, inlineCallbacks
+from twisted.internet.defer import (
+    Deferred,
+    inlineCallbacks,
+)
 from twisted.python.filepath import (
     FilePath,
 )
@@ -57,7 +64,11 @@ from twisted.python.filepath import (
 from foolscap.api import (
     Tub,
 )
+from foolscap.ipb import (
+    IConnectionHintHandler,
+)
 
+from .no_network import LocalWrapper
 from .common import (
     EMPTY_CLIENT_CONFIG,
     SyncTestCase,
@@ -65,6 +76,7 @@ from .common import (
     UseTestPlugins,
     UseNode,
     SameProcessStreamEndpointAssigner,
+    MemoryIntroducerClient,
 )
 from .common_web import (
     do_http,
@@ -83,12 +95,15 @@ from allmydata.storage_client import (
     _FoolscapStorage,
     _NullStorage,
 )
+from ..storage.server import (
+    StorageServer,
+)
 from allmydata.interfaces import (
     IConnectionStatus,
     IStorageServer,
 )
 
-SOME_FURL = b"pb://abcde@nowhere/fake"
+SOME_FURL = "pb://abcde@nowhere/fake"
 
 class NativeStorageServerWithVersion(NativeStorageServer):
     def __init__(self, version):
@@ -295,7 +310,7 @@ class PluginMatchedAnnouncement(SyncTestCase):
                 # notice how the announcement is for a different storage plugin
                 # than the one that is enabled.
                 u"name": u"tahoe-lafs-dummy-v2",
-                u"storage-server-FURL": SOME_FURL.decode("ascii"),
+                u"storage-server-FURL": SOME_FURL,
             }],
         }
         self.publish(server_id, ann, self.introducer_client)
@@ -323,7 +338,7 @@ class PluginMatchedAnnouncement(SyncTestCase):
             u"storage-options": [{
                 # and this announcement is for a plugin with a matching name
                 u"name": plugin_name,
-                u"storage-server-FURL": SOME_FURL.decode("ascii"),
+                u"storage-server-FURL": SOME_FURL,
             }],
         }
         self.publish(server_id, ann, self.introducer_client)
@@ -374,7 +389,7 @@ class PluginMatchedAnnouncement(SyncTestCase):
             u"storage-options": [{
                 # and this announcement is for a plugin with a matching name
                 u"name": plugin_name,
-                u"storage-server-FURL": SOME_FURL.decode("ascii"),
+                u"storage-server-FURL": SOME_FURL,
             }],
         }
         self.publish(server_id, ann, self.introducer_client)
@@ -505,14 +520,68 @@ class StoragePluginWebPresence(AsyncTestCase):
         )
 
-def make_broker(tub_maker=lambda h: Mock()):
+_aCertPEM = Tub().myCertificate.dumpPEM()
+def new_tub():
+    """
+    Make a new ``Tub`` with a hard-coded private key.
+    """
+    # Use a private key / certificate generated by Tub how it wants.  But just
+    # re-use the same one every time so we don't waste a lot of time
+    # generating them over and over in the tests.
+    return Tub(certData=_aCertPEM)
+
+def make_broker(tub_maker=None):
     """
     Create a ``StorageFarmBroker`` with the given tub maker and an empty
     client configuration.
     """
+    if tub_maker is None:
+        tub_maker = lambda handler_overrides: new_tub()
     return StorageFarmBroker(True, tub_maker, EMPTY_CLIENT_CONFIG)
 
+
+@implementer(IStreamClientEndpoint)
+@attr.s
+class SpyEndpoint(object):
+    """
+    Observe and record connection attempts.
+
+    :ivar list _append: A callable that accepts two-tuples.  For each
+        attempted connection, it will be called with ``Deferred`` that was
+        returned and the ``Factory`` that was passed in.
+    """
+    _append = attr.ib()
+
+    def connect(self, factory):
+        """
+        Record the connection attempt.
+
+        :return: A ``Deferred`` that ``SpyEndpoint`` will not fire.
+        """
+        d = Deferred()
+        self._append((d, factory))
+        return d
+
+
+@implementer(IConnectionHintHandler)
+@attr.s
+class SpyHandler(object):
+    """
+    A Foolscap connection hint handler for the "spy" hint type.  Connections
+    are handled by just observing and recording them.
+
+    :ivar list _connects: A list containing one element for each connection
+        attempted with this handler.  Each element is a two-tuple of the
+        ``Deferred`` that was returned from ``connect`` and the factory that
+        was passed to ``connect``.
+    """
+    _connects = attr.ib(default=attr.Factory(list))
+
+    def hint_to_endpoint(self, hint, reactor, update_status):
+        return (SpyEndpoint(self._connects.append), hint)
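
These two classes let the test observe Foolscap connection attempts without any real networking: registering ``SpyHandler`` for a hint type routes every furl using that hint type to a ``SpyEndpoint`` whose Deferred never fires. The test further down in this same diff wires it up like this:

    tub = new_tub()
    connects = []
    spy = SpyHandler(connects)
    tub.addConnectionHintHandler("spy", spy)
    # A furl such as pb://<tubid>@spy:nowhere/fake now records its
    # connection attempt in `connects` instead of touching the network.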
 class TestStorageFarmBroker(unittest.TestCase):
 
     def test_static_servers(self):
@@ -525,7 +594,7 @@ storage:
       ann:
        anonymous-storage-FURL: {furl}
        permutation-seed-base32: aaaaaaaaaaaaaaaaaaaaaaaa
-""".format(furl=SOME_FURL.decode("utf-8"))
+""".format(furl=SOME_FURL)
         servers = yamlutil.safe_load(servers_yaml)
         permseed = base32.a2b(b"aaaaaaaaaaaaaaaaaaaaaaaa")
         broker.set_static_servers(servers["storage"])
@@ -541,7 +610,7 @@ storage:
         ann2 = {
             "service-name": "storage",
-            "anonymous-storage-FURL": "pb://{}@nowhere/fake2".format(base32.b2a(b"1")),
+            "anonymous-storage-FURL": "pb://{}@nowhere/fake2".format(str(base32.b2a(b"1"), "utf-8")),
             "permutation-seed-base32": "bbbbbbbbbbbbbbbbbbbbbbbb",
         }
         broker._got_announcement(key_s, ann2)
@@ -585,18 +654,38 @@ storage:
 
     @inlineCallbacks
     def test_threshold_reached(self):
-        introducer = Mock()
+        """
+        ``StorageFarmBroker.when_connected_enough`` returns a ``Deferred`` which
+        only fires after the ``StorageFarmBroker`` has established at least as
+        many connections as requested.
+        """
+        introducer = MemoryIntroducerClient(
+            new_tub(),
+            SOME_FURL,
+            b"",
+            None,
+            None,
+            None,
+            None,
+        )
         new_tubs = []
         def make_tub(*args, **kwargs):
             return new_tubs.pop()
         broker = make_broker(make_tub)
+        # Start the broker so that it will start Tubs attached to it so they
+        # will attempt to make connections as necessary so that we can observe
+        # those connections.
+        broker.startService()
+        self.addCleanup(broker.stopService)
         done = broker.when_connected_enough(5)
         broker.use_introducer(introducer)
         # subscribes to "storage" to learn of new storage nodes
-        subscribe = introducer.mock_calls[0]
-        self.assertEqual(subscribe[0], 'subscribe_to')
-        self.assertEqual(subscribe[1][0], 'storage')
-        got_announcement = subscribe[1][1]
+        [subscribe] = introducer.subscribed_to
+        self.assertEqual(
+            subscribe.service_name,
+            "storage",
+        )
+        got_announcement = subscribe.cb
 
         data = {
             "service-name": "storage",
@@ -605,15 +694,25 @@ storage:
         }
 
         def add_one_server(x):
-            data["anonymous-storage-FURL"] = b"pb://%s@nowhere/fake" % (base32.b2a(b"%d" % x),)
-            tub = Mock()
+            data["anonymous-storage-FURL"] = "pb://%s@spy:nowhere/fake" % (str(base32.b2a(b"%d" % x), "ascii"),)
+            tub = new_tub()
+            connects = []
+            spy = SpyHandler(connects)
+            tub.addConnectionHintHandler("spy", spy)
             new_tubs.append(tub)
             got_announcement(b'v0-1234-%d' % x, data)
-            self.assertEqual(tub.mock_calls[-1][0], 'connectTo')
-            got_connection = tub.mock_calls[-1][1][1]
-            rref = Mock()
-            rref.callRemote = Mock(return_value=succeed(1234))
-            got_connection(rref)
+
+            self.assertEqual(
+                1, len(connects),
+                "Expected one connection attempt, got {!r} instead".format(connects),
+            )
+
+            # Skip over all the Foolscap negotiation.  It's complex with lots
+            # of pieces and I don't want to figure out how to fake
+            # it. -exarkun
+            native = broker.servers[b"v0-1234-%d" % (x,)]
+            rref = LocalWrapper(StorageServer(self.mktemp(), b"x" * 20))
+            native._got_connection(rref)
 
         # first 4 shouldn't trigger connected_threashold
         for x in range(4):

View File

@ -23,7 +23,6 @@ from allmydata.util import log, base32
from allmydata.util.encodingutil import quote_output, unicode_to_argv from allmydata.util.encodingutil import quote_output, unicode_to_argv
from allmydata.util.fileutil import abspath_expanduser_unicode from allmydata.util.fileutil import abspath_expanduser_unicode
from allmydata.util.consumer import MemoryConsumer, download_to_data from allmydata.util.consumer import MemoryConsumer, download_to_data
from allmydata.stats import StatsGathererService
from allmydata.interfaces import IDirectoryNode, IFileNode, \ from allmydata.interfaces import IDirectoryNode, IFileNode, \
NoSuchChildError, NoSharesError NoSuchChildError, NoSharesError
from allmydata.monitor import Monitor from allmydata.monitor import Monitor
@@ -667,9 +666,6 @@ class SystemTestMixin(pollmixin.PollMixin, testutil.StallMixin):
         self.sparent = service.MultiService()
         self.sparent.startService()
-        self.stats_gatherer = None
-        self.stats_gatherer_furl = None
     def tearDown(self):
         log.msg("shutting down SystemTest services")
         d = self.sparent.stopService()
@@ -713,7 +709,7 @@ class SystemTestMixin(pollmixin.PollMixin, testutil.StallMixin):
         return f.read().strip()
     @inlineCallbacks
-    def set_up_nodes(self, NUMCLIENTS=5, use_stats_gatherer=False):
+    def set_up_nodes(self, NUMCLIENTS=5):
         """
         Create an introducer and ``NUMCLIENTS`` client nodes pointed at it. All
         of the nodes are running in this process.
@@ -726,9 +722,6 @@ class SystemTestMixin(pollmixin.PollMixin, testutil.StallMixin):
         :param int NUMCLIENTS: The number of client nodes to create.
-        :param bool use_stats_gatherer: If ``True`` then also create a stats
-            gatherer and configure the other nodes to use it.
         :return: A ``Deferred`` that fires when the nodes have connected to
             each other.
         """
@@ -737,33 +730,7 @@ class SystemTestMixin(pollmixin.PollMixin, testutil.StallMixin):
         self.introducer = yield self._create_introducer()
         self.add_service(self.introducer)
         self.introweb_url = self._get_introducer_web()
-        if use_stats_gatherer:
-            yield self._set_up_stats_gatherer()
         yield self._set_up_client_nodes()
-        if use_stats_gatherer:
-            yield self._grab_stats()
-    def _set_up_stats_gatherer(self):
-        statsdir = self.getdir("stats_gatherer")
-        fileutil.make_dirs(statsdir)
-        location_hint, port_endpoint = self.port_assigner.assign(reactor)
-        fileutil.write(os.path.join(statsdir, "location"), location_hint)
-        fileutil.write(os.path.join(statsdir, "port"), port_endpoint)
-        self.stats_gatherer_svc = StatsGathererService(statsdir)
-        self.stats_gatherer = self.stats_gatherer_svc.stats_gatherer
-        self.stats_gatherer_svc.setServiceParent(self.sparent)
-        d = fireEventually()
-        sgf = os.path.join(statsdir, 'stats_gatherer.furl')
-        def check_for_furl():
-            return os.path.exists(sgf)
-        d.addCallback(lambda junk: self.poll(check_for_furl, timeout=30))
-        def get_furl(junk):
-            self.stats_gatherer_furl = file(sgf, 'rb').read().strip()
-        d.addCallback(get_furl)
-        return d
     @inlineCallbacks
     def _set_up_client_nodes(self):
@@ -833,15 +800,11 @@ class SystemTestMixin(pollmixin.PollMixin, testutil.StallMixin):
                 value = value.encode("utf-8")
             config.setdefault(section, {})[feature] = value
-        setclient = partial(setconf, config, which, "client")
         setnode = partial(setconf, config, which, "node")
         sethelper = partial(setconf, config, which, "helper")
         setnode("nickname", u"client %d \N{BLACK SMILING FACE}" % (which,))
-        if self.stats_gatherer_furl:
-            setclient("stats_gatherer.furl", self.stats_gatherer_furl)
         tub_location_hint, tub_port_endpoint = self.port_assigner.assign(reactor)
         setnode("tub.port", tub_port_endpoint)
         setnode("tub.location", tub_location_hint)
@@ -872,10 +835,6 @@ class SystemTestMixin(pollmixin.PollMixin, testutil.StallMixin):
         fileutil.write(os.path.join(basedir, 'tahoe.cfg'), config)
         return basedir
-    def _grab_stats(self):
-        d = self.stats_gatherer.poll()
-        return d
     def bounce_client(self, num):
         c = self.clients[num]
         d = c.disownServiceParent()
@@ -1303,25 +1262,11 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
         d.addCallback(_upload_resumable)
         def _grab_stats(ignored):
-            # the StatsProvider doesn't normally publish a FURL:
-            # instead it passes a live reference to the StatsGatherer
-            # (if and when it connects). To exercise the remote stats
-            # interface, we manually publish client0's StatsProvider
-            # and use client1 to query it.
-            sp = self.clients[0].stats_provider
-            sp_furl = self.clients[0].tub.registerReference(sp)
-            d = self.clients[1].tub.getReference(sp_furl)
-            d.addCallback(lambda sp_rref: sp_rref.callRemote("get_stats"))
-            def _got_stats(stats):
-                #print("STATS")
-                #from pprint import pprint
-                #pprint(stats)
-                s = stats["stats"]
-                self.failUnlessEqual(s["storage_server.accepting_immutable_shares"], 1)
-                c = stats["counters"]
-                self.failUnless("storage_server.allocate" in c)
-            d.addCallback(_got_stats)
-            return d
+            stats = self.clients[0].stats_provider.get_stats()
+            s = stats["stats"]
+            self.failUnlessEqual(s["storage_server.accepting_immutable_shares"], 1)
+            c = stats["counters"]
+            self.failUnless("storage_server.allocate" in c)
         d.addCallback(_grab_stats)
         return d
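Editor's note: the rewritten `_grab_stats` queries the provider in-process instead of publishing a FURL and calling it remotely. A hedged sketch of the return shape, inferred only from the keys the test checks (`client` is a hypothetical running client node; real output carries many more keys):

    stats = client.stats_provider.get_stats()
    gauges = stats["stats"]        # point-in-time values
    counters = stats["counters"]   # cumulative event counts
    print(gauges.get("storage_server.accepting_immutable_shares"))
    print(counters.get("storage_server.allocate", 0))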
@@ -1629,7 +1574,7 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
     def test_filesystem(self):
         self.basedir = "system/SystemTest/test_filesystem"
         self.data = LARGE_DATA
-        d = self.set_up_nodes(use_stats_gatherer=True)
+        d = self.set_up_nodes()
         def _new_happy_semantics(ign):
             for c in self.clients:
                 c.encoding_params['happy'] = 1
@@ -2618,6 +2563,7 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
         def _run_in_subprocess(ignored, verb, *args, **kwargs):
             stdin = kwargs.get("stdin")
+            # XXX https://tahoe-lafs.org/trac/tahoe-lafs/ticket/3548
             env = kwargs.get("env", os.environ)
             # Python warnings from the child process don't matter.
             env["PYTHONWARNINGS"] = "ignore"
View File
@@ -239,7 +239,7 @@ class FakeClient(object):
             node_config=EMPTY_CLIENT_CONFIG,
         )
         for (serverid, rref) in servers:
-            ann = {"anonymous-storage-FURL": b"pb://%s@nowhere/fake" % base32.b2a(serverid),
+            ann = {"anonymous-storage-FURL": "pb://%s@nowhere/fake" % str(base32.b2a(serverid), "ascii"),
                    "permutation-seed-base32": base32.b2a(serverid) }
             self.storage_broker.test_add_rref(serverid, rref, ann)
         self.last_servers = [s[1] for s in servers]
View File
@@ -35,6 +35,7 @@ PORTED_MODULES = [
     "allmydata.crypto.rsa",
     "allmydata.crypto.util",
     "allmydata.hashtree",
+    "allmydata.immutable.checker",
     "allmydata.immutable.downloader",
     "allmydata.immutable.downloader.common",
     "allmydata.immutable.downloader.fetcher",
@@ -49,9 +50,13 @@ PORTED_MODULES = [
     "allmydata.immutable.layout",
     "allmydata.immutable.literal",
     "allmydata.immutable.offloaded",
+    "allmydata.immutable.repairer",
     "allmydata.immutable.upload",
     "allmydata.interfaces",
+    "allmydata.introducer.client",
+    "allmydata.introducer.common",
     "allmydata.introducer.interfaces",
+    "allmydata.introducer.server",
     "allmydata.monitor",
     "allmydata.mutable.checker",
     "allmydata.mutable.common",
@@ -151,6 +156,7 @@ PORTED_TEST_MODULES = [
     "allmydata.test.test_observer",
     "allmydata.test.test_pipeline",
     "allmydata.test.test_python3",
+    "allmydata.test.test_repairer",
     "allmydata.test.test_spans",
     "allmydata.test.test_statistics",
     "allmydata.test.test_storage",
View File
@@ -252,6 +252,16 @@ ESCAPABLE_UNICODE = re.compile(u'([\uD800-\uDBFF][\uDC00-\uDFFF])|' # valid sur
 ESCAPABLE_8BIT = re.compile( br'[^ !#\x25-\x5B\x5D-\x5F\x61-\x7E]', re.DOTALL)
+def quote_output_u(*args, **kwargs):
+    """
+    Like ``quote_output`` but always return ``unicode``.
+    """
+    result = quote_output(*args, **kwargs)
+    if isinstance(result, unicode):
+        return result
+    return result.decode(kwargs.get("encoding", None) or io_encoding)
 def quote_output(s, quotemarks=True, quote_newlines=None, encoding=None):
     """
     Encode either a Unicode string or a UTF-8-encoded bytestring for representation
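Editor's note: a usage sketch for the new helper, assuming Python 2 semantics where `quote_output` may return an encoded bytestring; `names` is a hypothetical list of byte/unicode filenames:

    lines = [quote_output_u(name, encoding="utf-8") for name in names]
    report = u"\n".join(lines)  # safe: every element is unicode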
View File
@@ -12,8 +12,6 @@
 <h2>General</h2>
 <ul>
-  <li>Load Average: <t:transparent t:render="load_average" /></li>
-  <li>Peak Load: <t:transparent t:render="peak_load" /></li>
   <li>Files Uploaded (immutable): <t:transparent t:render="uploads" /></li>
   <li>Files Downloaded (immutable): <t:transparent t:render="downloads" /></li>
   <li>Files Published (mutable): <t:transparent t:render="publishes" /></li>
View File
@@ -1565,14 +1565,6 @@ class StatisticsElement(Element):
         # Note that `counters` can be empty.
         self._stats = provider.get_stats()
-    @renderer
-    def load_average(self, req, tag):
-        return tag(str(self._stats["stats"].get("load_monitor.avg_load")))
-    @renderer
-    def peak_load(self, req, tag):
-        return tag(str(self._stats["stats"].get("load_monitor.max_load")))
     @renderer
     def uploads(self, req, tag):
         files = self._stats["counters"].get("uploader.files_uploaded", 0)