Merge remote-tracking branch 'origin/master' into 3553.nodemaker-python-3
commit 66cd68d325
@@ -75,7 +75,7 @@ The item descriptions below use the following types:
 Node Types
 ==========
 
-A node can be a client/server, an introducer, or a statistics gatherer.
+A node can be a client/server or an introducer.
 
 Client/server nodes provide one or more of the following services:
 
@@ -593,11 +593,6 @@ Client Configuration
    If provided, the node will attempt to connect to and use the given helper
    for uploads. See :doc:`helper` for details.
 
-``stats_gatherer.furl = (FURL string, optional)``
-
-   If provided, the node will connect to the given stats gatherer and
-   provide it with operational statistics.
-
 ``shares.needed = (int, optional) aka "k", default 3``
 
 ``shares.total = (int, optional) aka "N", N >= k, default 10``
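As context for the ``shares.*`` items in the hunk above: they control Tahoe's k-of-N erasure coding. A minimal ``tahoe.cfg`` fragment spelling out the documented defaults might look like this (values are illustrative; ``shares.happy`` is shown for completeness)::

  [client]
  # k-of-N encoding: any 3 of the 10 generated shares reconstruct a file
  shares.needed = 3
  # require shares to land on at least 7 distinct servers before an
  # upload is considered successful
  shares.happy = 7
  shares.total = 10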
@@ -911,11 +906,6 @@ This section describes these other files.
    This file is used to construct an introducer, and is created by the
    "``tahoe create-introducer``" command.
 
-``tahoe-stats-gatherer.tac``
-
-   This file is used to construct a statistics gatherer, and is created by the
-   "``tahoe create-stats-gatherer``" command.
-
 ``private/control.furl``
 
    This file contains a FURL that provides access to a control port on the
@@ -45,9 +45,6 @@ Create a client node (with storage initially disabled).
 .TP
 .B \f[B]create-introducer\f[]
 Create an introducer node.
-.TP
-.B \f[B]create-stats-gatherer\f[]
-Create a stats-gatherer service.
 .SS OPTIONS
 .TP
 .B \f[B]-C,\ --basedir=\f[]
@@ -6,8 +6,7 @@ Tahoe Statistics
 
 1. `Overview`_
 2. `Statistics Categories`_
-3. `Running a Tahoe Stats-Gatherer Service`_
-4. `Using Munin To Graph Stats Values`_
+3. `Using Munin To Graph Stats Values`_
 
 Overview
 ========
@@ -243,92 +242,6 @@ The currently available stats (as of release 1.6.0 or so) are described here:
     the process was started. Ticket #472 indicates that .total may
     sometimes be negative due to wraparound of the kernel's counter.
 
-**stats.load_monitor.\***
-
-  When enabled, the "load monitor" continually schedules a one-second
-  callback, and measures how late the response is. This estimates system load
-  (if the system is idle, the response should be on time). This is only
-  enabled if a stats-gatherer is configured.
-
-  avg_load
-    average "load" value (seconds late) over the last minute
-
-  max_load
-    maximum "load" value over the last minute
-
-
-Running a Tahoe Stats-Gatherer Service
-======================================
-
-The "stats-gatherer" is a simple daemon that periodically collects stats from
-several tahoe nodes. It could be useful, e.g., in a production environment,
-where you want to monitor dozens of storage servers from a central management
-host. It merely gathers statistics from many nodes into a single place: it
-does not do any actual analysis.
-
-The stats gatherer listens on a network port using the same Foolscap_
-connection library that Tahoe clients use to connect to storage servers.
-Tahoe nodes can be configured to connect to the stats gatherer and publish
-their stats on a periodic basis. (In fact, what happens is that nodes connect
-to the gatherer and offer it a second FURL which points back to the node's
-"stats port", which the gatherer then uses to pull stats on a periodic basis.
-The initial connection is flipped to allow the nodes to live behind NAT
-boxes, as long as the stats-gatherer has a reachable IP address.)
-
-.. _Foolscap: https://foolscap.lothar.com/trac
-
-The stats-gatherer is created in the same fashion as regular tahoe client
-nodes and introducer nodes. Choose a base directory for the gatherer to live
-in (but do not create the directory). Choose the hostname that should be
-advertised in the gatherer's FURL. Then run:
-
-::
-
-  tahoe create-stats-gatherer --hostname=HOSTNAME $BASEDIR
-
-and start it with "tahoe start $BASEDIR". Once running, the gatherer will
-write a FURL into $BASEDIR/stats_gatherer.furl .
-
-To configure a Tahoe client/server node to contact the stats gatherer, copy
-this FURL into the node's tahoe.cfg file, in a section named "[client]",
-under a key named "stats_gatherer.furl", like so:
-
-::
-
-  [client]
-  stats_gatherer.furl = pb://qbo4ktl667zmtiuou6lwbjryli2brv6t@HOSTNAME:PORTNUM/wxycb4kaexzskubjnauxeoptympyf45y
-
-or simply copy the stats_gatherer.furl file into the node's base directory
-(next to the tahoe.cfg file): it will be interpreted in the same way.
-
-When the gatherer is created, it will allocate a random unused TCP port, so
-it should not conflict with anything else that you have running on that host
-at that time. To explicitly control which port it uses, run the creation
-command with ``--location=`` and ``--port=`` instead of ``--hostname=``. If
-you use a hostname of ``example.org`` and a port number of ``1234``, then
-run::
-
-  tahoe create-stats-gatherer --location=tcp:example.org:1234 --port=tcp:1234
-
-``--location=`` is a Foolscap FURL hints string (so it can be a
-comma-separated list of connection hints), and ``--port=`` is a Twisted
-"server endpoint specification string", as described in :doc:`configuration`.
-
-Once running, the stats gatherer will create a standard JSON file in
-``$BASEDIR/stats.json``. Once a minute, the gatherer will pull stats
-information from every connected node and write them into the file. The file
-will contain a dictionary, in which node identifiers (known as "tubid"
-strings) are the keys, and the values are a dict with 'timestamp',
-'nickname', and 'stats' keys. d[tubid]['stats'] will contain the stats
-dictionary as made available at http://localhost:3456/statistics?t=json . The
-file will only contain the most recent update from each node.
-
-Other tools can be built to examine these stats and render them into
-something useful. For example, a tool could sum the
-"storage_server.disk_avail" values from all servers to compute a
-total-disk-available number for the entire grid (however, the "disk watcher"
-daemon, in misc/operations_helpers/spacetime/, is better suited for this
-specific task).
-
 Using Munin To Graph Stats Values
 =================================
 
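An aside on the documentation removed above: the described ``stats.json`` layout is simple enough to post-process directly, in the spirit of the "sum the disk_avail values" suggestion. A minimal sketch, assuming the file layout the removed text describes, with each per-node ``stats`` entry mirroring the ``{"counters": ..., "stats": ...}`` structure served at ``/statistics?t=json`` (path and stat names are taken from the text, not verified against a live grid)::

  import json

  # Load the gatherer's most recent snapshot of every connected node.
  with open("stats.json") as f:
      per_node = json.load(f)

  # Sum grid-wide available disk space, skipping nodes (e.g. pure
  # clients) that do not report the storage_server.disk_avail stat.
  total_avail = 0
  for tubid, entry in per_node.items():
      stats = entry["stats"]["stats"]
      avail = stats.get("storage_server.disk_avail")
      if avail is not None:
          total_avail += avail

  print("grid-wide disk_avail: %d bytes" % total_avail)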
newsfragments/3522.minor: new empty file
newsfragments/3549.removed: new file (1 line)
@@ -0,0 +1 @@
+The stats gatherer, broken since at least Tahoe-LAFS 1.13.0, has been removed. The ``[client]stats_gatherer.furl`` configuration item in ``tahoe.cfg`` is no longer allowed. The Tahoe-LAFS project recommends using a third-party metrics aggregation tool instead.
newsfragments/3551.minor: new empty file
@@ -3,7 +3,6 @@ from past.builtins import unicode
 import os, stat, time, weakref
 from base64 import urlsafe_b64encode
 from functools import partial
-
 # On Python 2 this will be the backported package:
 from configparser import NoSectionError
 
@@ -85,7 +84,6 @@ _client_config = configutil.ValidConfiguration(
         "shares.happy",
         "shares.needed",
         "shares.total",
-        "stats_gatherer.furl",
         "storage.plugins",
     ),
     "ftpd": (
@@ -678,11 +676,7 @@ class _Client(node.Node, pollmixin.PollMixin):
             self.init_web(webport) # strports string
 
     def init_stats_provider(self):
-        gatherer_furl = self.config.get_config("client", "stats_gatherer.furl", None)
-        if gatherer_furl:
-            # FURLs should be bytes:
-            gatherer_furl = gatherer_furl.encode("utf-8")
-            self.stats_provider = StatsProvider(self, gatherer_furl)
+        self.stats_provider = StatsProvider(self)
         self.stats_provider.setServiceParent(self)
         self.stats_provider.register_producer(self)
 
@@ -1,3 +1,15 @@
+"""
+Ported to Python 3.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+from __future__ import unicode_literals
+
+from future.utils import PY2
+if PY2:
+    from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401
+
 from zope.interface import implementer
 from twisted.internet import defer
 from foolscap.api import DeadReferenceError, RemoteException
@@ -1,3 +1,15 @@
+"""
+Ported to Python 3.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+from __future__ import unicode_literals
+
+from future.utils import PY2
+if PY2:
+    from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401
+
 from zope.interface import implementer
 from twisted.internet import defer
 from allmydata.storage.server import si_b2a
@@ -2931,38 +2931,6 @@ class RIHelper(RemoteInterface):
         return (UploadResults, ChoiceOf(RICHKUploadHelper, None))
 
 
-class RIStatsProvider(RemoteInterface):
-    __remote_name__ = native_str("RIStatsProvider.tahoe.allmydata.com")
-    """
-    Provides access to statistics and monitoring information.
-    """
-
-    def get_stats():
-        """
-        returns a dictionary containing 'counters' and 'stats', each a
-        dictionary with string counter/stat name keys, and numeric or None values.
-        counters are monotonically increasing measures of work done, and
-        stats are instantaneous measures (potentially time averaged
-        internally)
-        """
-        return DictOf(bytes, DictOf(bytes, ChoiceOf(float, int, long, None)))
-
-
-class RIStatsGatherer(RemoteInterface):
-    __remote_name__ = native_str("RIStatsGatherer.tahoe.allmydata.com")
-    """
-    Provides a monitoring service for centralised collection of stats
-    """
-
-    def provide(provider=RIStatsProvider, nickname=bytes):
-        """
-        @param provider: a stats collector instance that should be polled
-        periodically by the gatherer to collect stats.
-        @param nickname: a name useful to identify the provided client
-        """
-        return None
-
-
 class IStatsProducer(Interface):
     def get_stats():
         """
@@ -318,7 +318,6 @@ def write_client_config(c, config):
 
     c.write("[client]\n")
     c.write("helper.furl =\n")
-    c.write("#stats_gatherer.furl =\n")
     c.write("\n")
     c.write("# Encoding parameters this client will use for newly-uploaded files\n")
     c.write("# This can be changed at any time: the encoding is saved in\n")
@@ -10,7 +10,6 @@ from twisted.application.service import Service
 
 from allmydata.scripts.default_nodedir import _default_nodedir
 from allmydata.util import fileutil
-from allmydata.node import read_config
 from allmydata.util.encodingutil import listdir_unicode, quote_local_unicode_path
 from allmydata.util.configutil import UnknownConfigError
 from allmydata.util.deferredutil import HookMixin
@@ -47,8 +46,8 @@ def get_pid_from_pidfile(pidfile):
 
 def identify_node_type(basedir):
     """
-    :return unicode: None or one of: 'client', 'introducer',
-        'key-generator' or 'stats-gatherer'
+    :return unicode: None or one of: 'client', 'introducer', or
+        'key-generator'
     """
     tac = u''
     try:
@@ -59,7 +58,7 @@ def identify_node_type(basedir):
     except OSError:
         return None
 
-    for t in (u"client", u"introducer", u"key-generator", u"stats-gatherer"):
+    for t in (u"client", u"introducer", u"key-generator"):
         if t in tac:
             return t
     return None
@@ -135,7 +134,6 @@ class DaemonizeTheRealService(Service, HookMixin):
             node_to_instance = {
                 u"client": lambda: maybeDeferred(namedAny("allmydata.client.create_client"), self.basedir),
                 u"introducer": lambda: maybeDeferred(namedAny("allmydata.introducer.server.create_introducer"), self.basedir),
-                u"stats-gatherer": lambda: maybeDeferred(namedAny("allmydata.stats.StatsGathererService"), read_config(self.basedir, None), self.basedir, verbose=True),
                 u"key-generator": key_generator_removed,
             }
 
@@ -9,7 +9,7 @@ from twisted.internet import defer, task, threads
 
 from allmydata.scripts.common import get_default_nodedir
 from allmydata.scripts import debug, create_node, cli, \
-    stats_gatherer, admin, tahoe_daemonize, tahoe_start, \
+    admin, tahoe_daemonize, tahoe_start, \
     tahoe_stop, tahoe_restart, tahoe_run, tahoe_invite
 from allmydata.util.encodingutil import quote_output, quote_local_unicode_path, get_io_encoding
 from allmydata.util.eliotutil import (
@@ -60,7 +60,6 @@ class Options(usage.Options):
     stderr = sys.stderr
 
     subCommands = ( create_node.subCommands
-                    + stats_gatherer.subCommands
                     + admin.subCommands
                     + process_control_commands
                     + debug.subCommands
@@ -107,7 +106,7 @@ class Options(usage.Options):
 
 
 create_dispatch = {}
-for module in (create_node, stats_gatherer):
+for module in (create_node,):
     create_dispatch.update(module.dispatch)
 
 def parse_options(argv, config=None):
@@ -1,103 +0,0 @@
-from __future__ import print_function
-
-import os
-
-# Python 2 compatibility
-from future.utils import PY2
-if PY2:
-    from future.builtins import str  # noqa: F401
-
-from twisted.python import usage
-
-from allmydata.scripts.common import NoDefaultBasedirOptions
-from allmydata.scripts.create_node import write_tac
-from allmydata.util.assertutil import precondition
-from allmydata.util.encodingutil import listdir_unicode, quote_output
-from allmydata.util import fileutil, iputil
-
-
-class CreateStatsGathererOptions(NoDefaultBasedirOptions):
-    subcommand_name = "create-stats-gatherer"
-    optParameters = [
-        ("hostname", None, None, "Hostname of this machine, used to build location"),
-        ("location", None, None, "FURL connection hints, e.g. 'tcp:HOSTNAME:PORT'"),
-        ("port", None, None, "listening endpoint, e.g. 'tcp:PORT'"),
-    ]
-    def postOptions(self):
-        if self["hostname"] and (not self["location"]) and (not self["port"]):
-            pass
-        elif (not self["hostname"]) and self["location"] and self["port"]:
-            pass
-        else:
-            raise usage.UsageError("You must provide --hostname, or --location and --port.")
-
-    description = """
-    Create a "stats-gatherer" service, which is a standalone process that
-    collects and stores runtime statistics from many server nodes. This is a
-    tool for operations personnel to keep track of free disk space, server
-    load, and protocol activity, across a fleet of Tahoe storage servers.
-
-    The "stats-gatherer" listens on a TCP port and publishes a Foolscap FURL
-    by writing it into a file named "stats_gatherer.furl". You must copy this
-    FURL into the servers' tahoe.cfg, as the [client] stats_gatherer.furl=
-    entry. Those servers will then establish a connection to the
-    stats-gatherer and publish their statistics on a periodic basis. The
-    gatherer writes a summary JSON file out to disk after each update.
-
-    The stats-gatherer listens on a configurable port, and writes a
-    configurable hostname+port pair into the FURL that it publishes. There
-    are two configuration modes you can use.
-
-    * In the first, you provide --hostname=, and the service chooses its own
-      TCP port number. If the host is named "example.org" and you provide
-      --hostname=example.org, the node will pick a port number (e.g. 12345)
-      and use location="tcp:example.org:12345" and port="tcp:12345".
-
-    * In the second, you provide both --location= and --port=, and the
-      service will refrain from doing any allocation of its own. --location=
-      must be a Foolscap "FURL connection hint sequence", which is a
-      comma-separated list of "tcp:HOSTNAME:PORTNUM" strings. --port= must be
-      a Twisted server endpoint specification, which is generally
-      "tcp:PORTNUM". So, if your host is named "example.org" and you want to
-      use port 6789, you should provide --location=tcp:example.org:6789 and
-      --port=tcp:6789. You are responsible for making sure --location= and
-      --port= match each other.
-    """
-
-
-def create_stats_gatherer(config):
-    err = config.stderr
-    basedir = config['basedir']
-    # This should always be called with an absolute Unicode basedir.
-    precondition(isinstance(basedir, str), basedir)
-
-    if os.path.exists(basedir):
-        if listdir_unicode(basedir):
-            print("The base directory %s is not empty." % quote_output(basedir), file=err)
-            print("To avoid clobbering anything, I am going to quit now.", file=err)
-            print("Please use a different directory, or empty this one.", file=err)
-            return -1
-        # we're willing to use an empty directory
-    else:
-        os.mkdir(basedir)
-    write_tac(basedir, "stats-gatherer")
-    if config["hostname"]:
-        portnum = iputil.allocate_tcp_port()
-        location = "tcp:%s:%d" % (config["hostname"], portnum)
-        port = "tcp:%d" % portnum
-    else:
-        location = config["location"]
-        port = config["port"]
-    fileutil.write(os.path.join(basedir, "location"), location+"\n")
-    fileutil.write(os.path.join(basedir, "port"), port+"\n")
-    return 0
-
-subCommands = [
-    ["create-stats-gatherer", None, CreateStatsGathererOptions, "Create a stats-gatherer service."],
-]
-
-dispatch = {
-    "create-stats-gatherer": create_stats_gatherer,
-}
@@ -1,4 +1,5 @@
 from __future__ import print_function
+from __future__ import unicode_literals
 
 import os.path
 import codecs
@@ -10,7 +11,7 @@ from allmydata import uri
 from allmydata.scripts.common_http import do_http, check_http_error
 from allmydata.scripts.common import get_aliases
 from allmydata.util.fileutil import move_into_place
-from allmydata.util.encodingutil import unicode_to_output, quote_output
+from allmydata.util.encodingutil import quote_output, quote_output_u
 
 
 def add_line_to_aliasfile(aliasfile, alias, cap):
@@ -48,14 +49,13 @@ def add_alias(options):
 
     old_aliases = get_aliases(nodedir)
     if alias in old_aliases:
-        print("Alias %s already exists!" % quote_output(alias), file=stderr)
+        show_output(stderr, "Alias {alias} already exists!", alias=alias)
         return 1
     aliasfile = os.path.join(nodedir, "private", "aliases")
     cap = uri.from_string_dirnode(cap).to_string()
 
     add_line_to_aliasfile(aliasfile, alias, cap)
 
-    print("Alias %s added" % quote_output(alias), file=stdout)
+    show_output(stdout, "Alias {alias} added", alias=alias)
     return 0
 
 def create_alias(options):
@@ -75,7 +75,7 @@ def create_alias(options):
 
     old_aliases = get_aliases(nodedir)
     if alias in old_aliases:
-        print("Alias %s already exists!" % quote_output(alias), file=stderr)
+        show_output(stderr, "Alias {alias} already exists!", alias=alias)
         return 1
 
     aliasfile = os.path.join(nodedir, "private", "aliases")
@@ -93,11 +93,51 @@ def create_alias(options):
         # probably check for others..
 
     add_line_to_aliasfile(aliasfile, alias, new_uri)
 
-    print("Alias %s created" % (quote_output(alias),), file=stdout)
+    show_output(stdout, "Alias {alias} created", alias=alias)
     return 0
 
 
+def show_output(fp, template, **kwargs):
+    """
+    Print to just about anything.
+
+    :param fp: A file-like object to which to print.  This handles the case
+        where ``fp`` declares a supported encoding with the ``encoding``
+        attribute (eg sys.stdout on Python 3).  It handles the case where
+        ``fp`` declares no supported encoding via ``None`` for its
+        ``encoding`` attribute (eg sys.stdout on Python 2 when stdout is not a
+        tty).  It handles the case where ``fp`` declares an encoding that does
+        not support all of the characters in the output by forcing the
+        "namereplace" error handler.  It handles the case where there is no
+        ``encoding`` attribute at all (eg StringIO.StringIO) by writing
+        utf-8-encoded bytes.
+    """
+    assert isinstance(template, unicode)
+
+    # On Python 3 fp has an encoding attribute under all real usage.  On
+    # Python 2, the encoding attribute is None if stdio is not a tty.  The
+    # test suite often passes StringIO which has no such attribute.  Make
+    # allowances for this until the test suite is fixed and Python 2 is no
+    # more.
+    try:
+        encoding = fp.encoding or "utf-8"
+    except AttributeError:
+        has_encoding = False
+        encoding = "utf-8"
+    else:
+        has_encoding = True
+
+    output = template.format(**{
+        k: quote_output_u(v, encoding=encoding)
+        for (k, v)
+        in kwargs.items()
+    })
+    safe_output = output.encode(encoding, "namereplace")
+    if has_encoding:
+        safe_output = safe_output.decode(encoding)
+    print(safe_output, file=fp)
+
+
 def _get_alias_details(nodedir):
     aliases = get_aliases(nodedir)
     alias_names = sorted(aliases.keys())
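For illustration only, not part of the change: the new ``show_output`` helper above can be exercised like this, assuming a Python 3 interpreter whose ``sys.stdout`` declares an encoding (the alias value is hypothetical):

    import sys
    # quote_output_u (imported above from allmydata.util.encodingutil)
    # quotes each substitution for the declared encoding, so a non-ASCII
    # alias degrades gracefully instead of raising UnicodeEncodeError.
    show_output(sys.stdout, "Alias {alias} created", alias=u"tahoe\N{SNOWMAN}")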
@@ -111,34 +151,45 @@ def _get_alias_details(nodedir):
     return data
 
 
+def _escape_format(t):
+    """
+    _escape_format(t).format() == t
+
+    :param unicode t: The text to escape.
+    """
+    return t.replace("{", "{{").replace("}", "}}")
+
+
 def list_aliases(options):
-    nodedir = options['node-directory']
-    stdout = options.stdout
-    stderr = options.stderr
-
-    data = _get_alias_details(nodedir)
-
-    max_width = max([len(quote_output(name)) for name in data.keys()] + [0])
-    fmt = "%" + str(max_width) + "s: %s"
-    rc = 0
+    """
+    Show aliases that exist.
+    """
+    data = _get_alias_details(options['node-directory'])
 
     if options['json']:
-        try:
-            # XXX why are we presuming utf-8 output?
-            print(json.dumps(data, indent=4).decode('utf-8'), file=stdout)
-        except (UnicodeEncodeError, UnicodeDecodeError):
-            print(json.dumps(data, indent=4), file=stderr)
-            rc = 1
+        output = _escape_format(json.dumps(data, indent=4).decode("ascii"))
     else:
-        for name, details in data.items():
-            dircap = details['readonly'] if options['readonly-uri'] else details['readwrite']
-            try:
-                print(fmt % (unicode_to_output(name), unicode_to_output(dircap.decode('utf-8'))), file=stdout)
-            except (UnicodeEncodeError, UnicodeDecodeError):
-                print(fmt % (quote_output(name), quote_output(dircap)), file=stderr)
-                rc = 1
+        def dircap(details):
+            return (
+                details['readonly']
+                if options['readonly-uri']
+                else details['readwrite']
+            ).decode("utf-8")
 
-    if rc == 1:
-        print("\nThis listing included aliases or caps that could not be converted to the terminal" \
-              "\noutput encoding. These are shown using backslash escapes and in quotes.", file=stderr)
-    return rc
+        def format_dircap(name, details):
+            return fmt % (name, dircap(details))
+
+        max_width = max([len(quote_output(name)) for name in data.keys()] + [0])
+        fmt = "%" + str(max_width) + "s: %s"
+        output = "\n".join(list(
+            format_dircap(name, details)
+            for name, details
+            in data.items()
+        ))
+
+    if output:
+        # Show whatever we computed.  Skip this if there is no output to avoid
+        # a spurious blank line.
+        show_output(options.stdout, output)
+
+    return 0
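A quick aside on ``_escape_format`` above: its contract, stated in the docstring, is that formatting the escaped text is the identity. A minimal sketch of that property (illustrative values):

    t = u"URI:DIR2:{looks}-like-a-{field}"
    escaped = _escape_format(t)    # u"URI:DIR2:{{looks}}-like-a-{{field}}"
    assert escaped.format() == t   # round-trips with no format fields left

This matters because ``list_aliases`` hands its computed text to ``show_output``, which runs the template through ``unicode.format``; the JSON branch escapes its output first so literal braces in caps or JSON syntax cannot be misread as format fields.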
@@ -1,79 +1,19 @@
 from __future__ import print_function
 
-import json
-import os
-import pprint
 import time
-from collections import deque
 
 # Python 2 compatibility
 from future.utils import PY2
 if PY2:
     from future.builtins import str  # noqa: F401
 
-from twisted.internet import reactor
 from twisted.application import service
-from twisted.application.internet import TimerService
 from zope.interface import implementer
-from foolscap.api import eventually, DeadReferenceError, Referenceable, Tub
+from foolscap.api import eventually
 
 from allmydata.util import log
-from allmydata.util.encodingutil import quote_local_unicode_path
-from allmydata.interfaces import RIStatsProvider, RIStatsGatherer, IStatsProducer
-
-@implementer(IStatsProducer)
-class LoadMonitor(service.MultiService):
-
-    loop_interval = 1
-    num_samples = 60
-
-    def __init__(self, provider, warn_if_delay_exceeds=1):
-        service.MultiService.__init__(self)
-        self.provider = provider
-        self.warn_if_delay_exceeds = warn_if_delay_exceeds
-        self.started = False
-        self.last = None
-        self.stats = deque()
-        self.timer = None
-
-    def startService(self):
-        if not self.started:
-            self.started = True
-            self.timer = reactor.callLater(self.loop_interval, self.loop)
-        service.MultiService.startService(self)
-
-    def stopService(self):
-        self.started = False
-        if self.timer:
-            self.timer.cancel()
-            self.timer = None
-        return service.MultiService.stopService(self)
-
-    def loop(self):
-        self.timer = None
-        if not self.started:
-            return
-        now = time.time()
-        if self.last is not None:
-            delay = now - self.last - self.loop_interval
-            if delay > self.warn_if_delay_exceeds:
-                log.msg(format='excessive reactor delay (%ss)', args=(delay,),
-                        level=log.UNUSUAL)
-            self.stats.append(delay)
-            while len(self.stats) > self.num_samples:
-                self.stats.popleft()
-
-        self.last = now
-        self.timer = reactor.callLater(self.loop_interval, self.loop)
-
-    def get_stats(self):
-        if self.stats:
-            avg = sum(self.stats) / len(self.stats)
-            m_x = max(self.stats)
-        else:
-            avg = m_x = 0
-        return { 'load_monitor.avg_load': avg,
-                 'load_monitor.max_load': m_x, }
+from allmydata.interfaces import IStatsProducer
 
 @implementer(IStatsProducer)
 class CPUUsageMonitor(service.MultiService):
@@ -128,37 +68,18 @@ class CPUUsageMonitor(service.MultiService):
         return s
 
 
-@implementer(RIStatsProvider)
-class StatsProvider(Referenceable, service.MultiService):
+class StatsProvider(service.MultiService):
 
-    def __init__(self, node, gatherer_furl):
+    def __init__(self, node):
         service.MultiService.__init__(self)
         self.node = node
-        self.gatherer_furl = gatherer_furl # might be None
 
         self.counters = {}
         self.stats_producers = []
 
-        # only run the LoadMonitor (which submits a timer every second) if
-        # there is a gatherer who is going to be paying attention. Our stats
-        # are visible through HTTP even without a gatherer, so run the rest
-        # of the stats (including the once-per-minute CPUUsageMonitor)
-        if gatherer_furl:
-            self.load_monitor = LoadMonitor(self)
-            self.load_monitor.setServiceParent(self)
-            self.register_producer(self.load_monitor)
-
         self.cpu_monitor = CPUUsageMonitor()
         self.cpu_monitor.setServiceParent(self)
         self.register_producer(self.cpu_monitor)
 
     def startService(self):
-        if self.node and self.gatherer_furl:
-            nickname_utf8 = self.node.nickname.encode("utf-8")
-            self.node.tub.connectTo(self.gatherer_furl,
-                                    self._connected, nickname_utf8)
         service.MultiService.startService(self)
 
     def count(self, name, delta=1):
         if isinstance(name, str):
             name = name.encode("utf-8")
@@ -175,155 +96,3 @@ class StatsProvider(Referenceable, service.MultiService):
         ret = { 'counters': self.counters, 'stats': stats }
         log.msg(format='get_stats() -> %(stats)s', stats=ret, level=log.NOISY)
         return ret
-
-    def remote_get_stats(self):
-        # The remote API expects keys to be bytes:
-        def to_bytes(d):
-            result = {}
-            for (k, v) in d.items():
-                if isinstance(k, str):
-                    k = k.encode("utf-8")
-                result[k] = v
-            return result
-
-        stats = self.get_stats()
-        return {b"counters": to_bytes(stats["counters"]),
-                b"stats": to_bytes(stats["stats"])}
-
-    def _connected(self, gatherer, nickname):
-        gatherer.callRemoteOnly('provide', self, nickname or '')
-
-
-@implementer(RIStatsGatherer)
-class StatsGatherer(Referenceable, service.MultiService):
-
-    poll_interval = 60
-
-    def __init__(self, basedir):
-        service.MultiService.__init__(self)
-        self.basedir = basedir
-
-        self.clients = {}
-        self.nicknames = {}
-
-        self.timer = TimerService(self.poll_interval, self.poll)
-        self.timer.setServiceParent(self)
-
-    def get_tubid(self, rref):
-        return rref.getRemoteTubID()
-
-    def remote_provide(self, provider, nickname):
-        tubid = self.get_tubid(provider)
-        if tubid == '<unauth>':
-            print("WARNING: failed to get tubid for %s (%s)" % (provider, nickname))
-            # don't add to clients to poll (polluting data) don't care about disconnect
-            return
-        self.clients[tubid] = provider
-        self.nicknames[tubid] = nickname
-
-    def poll(self):
-        for tubid,client in self.clients.items():
-            nickname = self.nicknames.get(tubid)
-            d = client.callRemote('get_stats')
-            d.addCallbacks(self.got_stats, self.lost_client,
-                           callbackArgs=(tubid, nickname),
-                           errbackArgs=(tubid,))
-            d.addErrback(self.log_client_error, tubid)
-
-    def lost_client(self, f, tubid):
-        # this is called lazily, when a get_stats request fails
-        del self.clients[tubid]
-        del self.nicknames[tubid]
-        f.trap(DeadReferenceError)
-
-    def log_client_error(self, f, tubid):
-        log.msg("StatsGatherer: error in get_stats(), peerid=%s" % tubid,
-                level=log.UNUSUAL, failure=f)
-
-    def got_stats(self, stats, tubid, nickname):
-        raise NotImplementedError()
-
-
-class StdOutStatsGatherer(StatsGatherer):
-    verbose = True
-
-    def remote_provide(self, provider, nickname):
-        tubid = self.get_tubid(provider)
-        if self.verbose:
-            print('connect "%s" [%s]' % (nickname, tubid))
-            provider.notifyOnDisconnect(self.announce_lost_client, tubid)
-        StatsGatherer.remote_provide(self, provider, nickname)
-
-    def announce_lost_client(self, tubid):
-        print('disconnect "%s" [%s]' % (self.nicknames[tubid], tubid))
-
-    def got_stats(self, stats, tubid, nickname):
-        print('"%s" [%s]:' % (nickname, tubid))
-        pprint.pprint(stats)
-
-
-class JSONStatsGatherer(StdOutStatsGatherer):
-    # inherit from StdOutStatsGatherer for connect/disconnect notifications
-
-    def __init__(self, basedir=u".", verbose=True):
-        self.verbose = verbose
-        StatsGatherer.__init__(self, basedir)
-        self.jsonfile = os.path.join(basedir, "stats.json")
-
-        if os.path.exists(self.jsonfile):
-            try:
-                with open(self.jsonfile, 'rb') as f:
-                    self.gathered_stats = json.load(f)
-            except Exception:
-                print("Error while attempting to load stats file %s.\n"
-                      "You may need to restore this file from a backup,"
-                      " or delete it if no backup is available.\n" %
-                      quote_local_unicode_path(self.jsonfile))
-                raise
-        else:
-            self.gathered_stats = {}
-
-    def got_stats(self, stats, tubid, nickname):
-        s = self.gathered_stats.setdefault(tubid, {})
-        s['timestamp'] = time.time()
-        s['nickname'] = nickname
-        s['stats'] = stats
-        self.dump_json()
-
-    def dump_json(self):
-        tmp = "%s.tmp" % (self.jsonfile,)
-        with open(tmp, 'wb') as f:
-            json.dump(self.gathered_stats, f)
-        if os.path.exists(self.jsonfile):
-            os.unlink(self.jsonfile)
-        os.rename(tmp, self.jsonfile)
-
-
-class StatsGathererService(service.MultiService):
-    furl_file = "stats_gatherer.furl"
-
-    def __init__(self, basedir=".", verbose=False):
-        service.MultiService.__init__(self)
-        self.basedir = basedir
-        self.tub = Tub(certFile=os.path.join(self.basedir,
-                                             "stats_gatherer.pem"))
-        self.tub.setServiceParent(self)
-        self.tub.setOption("logLocalFailures", True)
-        self.tub.setOption("logRemoteFailures", True)
-        self.tub.setOption("expose-remote-exception-types", False)
-
-        self.stats_gatherer = JSONStatsGatherer(self.basedir, verbose)
-        self.stats_gatherer.setServiceParent(self)
-
-        try:
-            with open(os.path.join(self.basedir, "location")) as f:
-                location = f.read().strip()
-        except EnvironmentError:
-            raise ValueError("Unable to find 'location' in BASEDIR, please rebuild your stats-gatherer")
-        try:
-            with open(os.path.join(self.basedir, "port")) as f:
-                port = f.read().strip()
-        except EnvironmentError:
-            raise ValueError("Unable to find 'port' in BASEDIR, please rebuild your stats-gatherer")
-
-        self.tub.listenOn(port)
-        self.tub.setLocation(location)
-        ff = os.path.join(self.basedir, self.furl_file)
-        self.gatherer_furl = self.tub.registerReference(self.stats_gatherer,
-                                                        furlFile=ff)
@@ -1,6 +1,6 @@
 from ...util.encodingutil import unicode_to_argv
 from ...scripts import runner
-from ..common_util import ReallyEqualMixin, run_cli
+from ..common_util import ReallyEqualMixin, run_cli, run_cli_unicode
 
 def parse_options(basedir, command, args):
     o = runner.Options()
@@ -10,10 +10,41 @@ def parse_options(basedir, command, args):
     return o
 
 class CLITestMixin(ReallyEqualMixin):
-    def do_cli(self, verb, *args, **kwargs):
+    """
+    A mixin for use with ``GridTestMixin`` to execute CLI commands against
+    nodes created by methods of that mixin.
+    """
+    def do_cli_unicode(self, verb, argv, client_num=0, **kwargs):
+        """
+        Run a Tahoe-LAFS CLI command.
+
+        :param verb: See ``run_cli_unicode``.
+
+        :param argv: See ``run_cli_unicode``.
+
+        :param int client_num: The number of the ``GridTestMixin``-created
+            node against which to execute the command.
+
+        :param kwargs: Additional keyword arguments to pass to
+            ``run_cli_unicode``.
+        """
+        client_dir = self.get_clientdir(i=client_num)
+        nodeargs = [ u"--node-directory", client_dir ]
+        return run_cli_unicode(verb, argv, nodeargs=nodeargs, **kwargs)
+
+
+    def do_cli(self, verb, *args, **kwargs):
+        """
+        Like ``do_cli_unicode`` but work with ``bytes`` everywhere instead of
+        ``unicode``.
+
+        Where possible, prefer ``do_cli_unicode``.
+        """
         # client_num is used to execute client CLI commands on a specific
         # client.
-        client_num = kwargs.get("client_num", 0)
+        client_num = kwargs.pop("client_num", 0)
         client_dir = unicode_to_argv(self.get_clientdir(i=client_num))
-        nodeargs = [ "--node-directory", client_dir ]
-        return run_cli(verb, nodeargs=nodeargs, *args, **kwargs)
+        nodeargs = [ b"--node-directory", client_dir ]
+        return run_cli(verb, *args, nodeargs=nodeargs, **kwargs)
@@ -1,105 +1,126 @@
 import json
-from mock import patch
 
 from twisted.trial import unittest
 from twisted.internet.defer import inlineCallbacks
 
-from allmydata.util.encodingutil import unicode_to_argv
 from allmydata.scripts.common import get_aliases
 from allmydata.test.no_network import GridTestMixin
 from .common import CLITestMixin
-from ..common_util import skip_if_cannot_represent_argv
+from allmydata.util import encodingutil
 
 # see also test_create_alias
 
 class ListAlias(GridTestMixin, CLITestMixin, unittest.TestCase):
 
     @inlineCallbacks
-    def test_list(self):
-        self.basedir = "cli/ListAlias/test_list"
+    def _check_create_alias(self, alias, encoding):
+        """
+        Verify that ``tahoe create-alias`` can be used to create an alias named
+        ``alias`` when argv is encoded using ``encoding``.
+
+        :param unicode alias: The alias to try to create.
+
+        :param NoneType|str encoding: The name of an encoding to force the
+            ``create-alias`` implementation to use.  This simulates the
+            effects of setting LANG and doing other locale-foolishness without
+            actually having to mess with this process's global locale state.
+            If this is ``None`` then the encoding used will be ascii but the
+            stdio objects given to the code under test will not declare any
+            encoding (this is like Python 2 when stdio is not a tty).
+
+        :return Deferred: A Deferred that fires with success if the alias can
+            be created and that creation is reported on stdout appropriately
+            encoded or with failure if something goes wrong.
+        """
+        self.basedir = self.mktemp()
         self.set_up_grid(oneshare=True)
 
-        rc, stdout, stderr = yield self.do_cli(
-            "create-alias",
-            unicode_to_argv(u"tahoe"),
+        # We can pass an encoding into the test utilities to invoke the code
+        # under test but we can't pass such a parameter directly to the code
+        # under test.  Instead, that code looks at io_encoding.  So,
+        # monkey-patch that value to our desired value here.  This is the code
+        # that most directly takes the place of messing with LANG or the
+        # locale module.
+        self.patch(encodingutil, "io_encoding", encoding or "ascii")
+
+        rc, stdout, stderr = yield self.do_cli_unicode(
+            u"create-alias",
+            [alias],
+            encoding=encoding,
         )
 
-        self.failUnless(unicode_to_argv(u"Alias 'tahoe' created") in stdout)
-        self.failIf(stderr)
-        aliases = get_aliases(self.get_clientdir())
-        self.failUnless(u"tahoe" in aliases)
-        self.failUnless(aliases[u"tahoe"].startswith("URI:DIR2:"))
+        # Make sure the result of the create-alias command is as we want it to
+        # be.
+        self.assertEqual(u"Alias '{}' created\n".format(alias), stdout)
+        self.assertEqual("", stderr)
+        self.assertEqual(0, rc)
 
-        rc, stdout, stderr = yield self.do_cli("list-aliases", "--json")
+        # Make sure it had the intended side-effect, too - an alias created in
+        # the node filesystem state.
+        aliases = get_aliases(self.get_clientdir())
+        self.assertIn(alias, aliases)
+        self.assertTrue(aliases[alias].startswith(u"URI:DIR2:"))
+
+        # And inspect the state via the user interface list-aliases command
+        # too.
+        rc, stdout, stderr = yield self.do_cli_unicode(
+            u"list-aliases",
+            [u"--json"],
+            encoding=encoding,
+        )
 
         self.assertEqual(0, rc)
         data = json.loads(stdout)
-        self.assertIn(u"tahoe", data)
-        data = data[u"tahoe"]
-        self.assertIn("readwrite", data)
-        self.assertIn("readonly", data)
+        self.assertIn(alias, data)
+        data = data[alias]
+        self.assertIn(u"readwrite", data)
+        self.assertIn(u"readonly", data)
 
-    @inlineCallbacks
-    def test_list_unicode_mismatch_json(self):
-        """
-        pretty hack-y test, but we want to cover the 'except' on Unicode
-        errors paths and I can't come up with a nicer way to trigger
-        this
-        """
-        self.basedir = "cli/ListAlias/test_list_unicode_mismatch_json"
-        skip_if_cannot_represent_argv(u"tahoe\u263A")
-        self.set_up_grid(oneshare=True)
-
-        rc, stdout, stderr = yield self.do_cli(
-            "create-alias",
-            unicode_to_argv(u"tahoe\u263A"),
-        )
-
-        self.failUnless(unicode_to_argv(u"Alias 'tahoe\u263A' created") in stdout)
-        self.failIf(stderr)
-
-        booms = []
-
-        def boom(out, indent=4):
-            if not len(booms):
-                booms.append(out)
-                raise UnicodeEncodeError("foo", u"foo", 3, 5, "foo")
-            return str(out)
-
-        with patch("allmydata.scripts.tahoe_add_alias.json.dumps", boom):
-            aliases = get_aliases(self.get_clientdir())
-            self.failUnless(u"tahoe\u263A" in aliases)
-            self.failUnless(aliases[u"tahoe\u263A"].startswith("URI:DIR2:"))
-
-            rc, stdout, stderr = yield self.do_cli("list-aliases", "--json")
-
-            self.assertEqual(1, rc)
-            self.assertIn("could not be converted", stderr)
-
-    @inlineCallbacks
-    def test_list_unicode_mismatch(self):
-        self.basedir = "cli/ListAlias/test_list_unicode_mismatch"
-        skip_if_cannot_represent_argv(u"tahoe\u263A")
-        self.set_up_grid(oneshare=True)
-
-        rc, stdout, stderr = yield self.do_cli(
-            "create-alias",
-            unicode_to_argv(u"tahoe\u263A"),
-        )
-
-        def boom(out):
-            print("boom {}".format(out))
-            return out
-            raise UnicodeEncodeError("foo", u"foo", 3, 5, "foo")
-
-        with patch("allmydata.scripts.tahoe_add_alias.unicode_to_output", boom):
-            self.failUnless(unicode_to_argv(u"Alias 'tahoe\u263A' created") in stdout)
-            self.failIf(stderr)
-            aliases = get_aliases(self.get_clientdir())
-            self.failUnless(u"tahoe\u263A" in aliases)
-            self.failUnless(aliases[u"tahoe\u263A"].startswith("URI:DIR2:"))
-
-            rc, stdout, stderr = yield self.do_cli("list-aliases")
-
-            self.assertEqual(1, rc)
-            self.assertIn("could not be converted", stderr)
+    def test_list_none(self):
+        """
+        An alias composed of all ASCII-encodeable code points can be created when
+        stdio aren't clearly marked with an encoding.
+        """
+        return self._check_create_alias(
+            u"tahoe",
+            encoding=None,
+        )
+
+    def test_list_ascii(self):
+        """
+        An alias composed of all ASCII-encodeable code points can be created when
+        the active encoding is ASCII.
+        """
+        return self._check_create_alias(
+            u"tahoe",
+            encoding="ascii",
+        )
+
+    def test_list_latin_1(self):
+        """
+        An alias composed of all Latin-1-encodeable code points can be created
+        when the active encoding is Latin-1.
+
+        This is very similar to ``test_list_utf_8`` but the assumption of
+        UTF-8 is nearly ubiquitous and explicitly exercising the codepaths
+        with a UTF-8-incompatible encoding helps flush out unintentional UTF-8
+        assumptions.
+        """
+        return self._check_create_alias(
+            u"taho\N{LATIN SMALL LETTER E WITH ACUTE}",
+            encoding="latin-1",
+        )
+
+    def test_list_utf_8(self):
+        """
+        An alias composed of all UTF-8-encodeable code points can be created when
+        the active encoding is UTF-8.
+        """
+        return self._check_create_alias(
+            u"tahoe\N{SNOWMAN}",
+            encoding="utf-8",
+        )
@@ -661,7 +661,7 @@ starting copy, 2 files, 1 directories
         # This test ensures that tahoe will copy a file from the grid to
         # a local directory without a specified file name.
         # https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2027
-        self.basedir = "cli/Cp/cp_verbose"
+        self.basedir = "cli/Cp/ticket_2027"
        self.set_up_grid(oneshare=True)
 
         # Write a test file, which we'll copy to the grid.
@@ -11,6 +11,8 @@ __all__ = [
     "skipIf",
 ]
 
+from past.builtins import chr as byteschr
+
 import os, random, struct
 import six
 import tempfile
@@ -1057,7 +1059,7 @@ def _corrupt_share_data_last_byte(data, debug=False):
     sharedatasize = struct.unpack(">Q", data[0x0c+0x08:0x0c+0x0c+8])[0]
     offset = 0x0c+0x44+sharedatasize-1
 
-    newdata = data[:offset] + chr(ord(data[offset])^0xFF) + data[offset+1:]
+    newdata = data[:offset] + byteschr(ord(data[offset:offset+1])^0xFF) + data[offset+1:]
     if debug:
         log.msg("testing: flipping all bits of byte at offset %d: %r, newdata: %r" % (offset, data[offset], newdata[offset]))
     return newdata
@@ -1085,7 +1087,7 @@ def _corrupt_crypttext_hash_tree_byte_x221(data, debug=False):
     assert sharevernum in (1, 2), "This test is designed to corrupt immutable shares of v1 or v2 in specific ways."
     if debug:
         log.msg("original data: %r" % (data,))
-    return data[:0x0c+0x221] + chr(ord(data[0x0c+0x221])^0x02) + data[0x0c+0x2210+1:]
+    return data[:0x0c+0x221] + byteschr(ord(data[0x0c+0x221:0x0c+0x221+1])^0x02) + data[0x0c+0x2210+1:]
 
 def _corrupt_block_hashes(data, debug=False):
     """Scramble the file data -- the field containing the block hash tree
@@ -5,6 +5,10 @@ import time
 import signal
 from random import randrange
 from six.moves import StringIO
+from io import (
+    TextIOWrapper,
+    BytesIO,
+)
 
 from twisted.internet import reactor, defer
 from twisted.python import failure
@@ -35,27 +39,131 @@ def skip_if_cannot_represent_argv(u):
     except UnicodeEncodeError:
         raise unittest.SkipTest("A non-ASCII argv could not be encoded on this platform.")
 
-def run_cli(verb, *args, **kwargs):
-    precondition(not [True for arg in args if not isinstance(arg, str)],
-                 "arguments to do_cli must be strs -- convert using unicode_to_argv", args=args)
-    nodeargs = kwargs.get("nodeargs", [])
+
+def _getvalue(io):
+    """
+    Read out the complete contents of a file-like object.
+    """
+    io.seek(0)
+    return io.read()
+
+
+def run_cli_bytes(verb, *args, **kwargs):
+    """
+    Run a Tahoe-LAFS CLI command specified as bytes.
+
+    Most code should prefer ``run_cli_unicode`` which deals with all the
+    necessary encoding considerations.  This helper still exists so that novel
+    misconfigurations can be explicitly tested (for example, receiving UTF-8
+    bytes when the system encoding claims to be ASCII).
+
+    :param bytes verb: The command to run.  For example, ``b"create-node"``.
+
+    :param [bytes] args: The arguments to pass to the command.  For example,
+        ``(b"--hostname=localhost",)``.
+
+    :param [bytes] nodeargs: Extra arguments to pass to the Tahoe executable
+        before ``verb``.
+
+    :param bytes stdin: Text to pass to the command via stdin.
+
+    :param NoneType|str encoding: The name of an encoding which stdout and
+        stderr will be configured to use.  ``None`` means stdout and stderr
+        will accept bytes and unicode and use the default system encoding for
+        translating between them.
+    """
+    nodeargs = kwargs.pop("nodeargs", [])
+    encoding = kwargs.pop("encoding", None)
+    precondition(
+        all(isinstance(arg, bytes) for arg in [verb] + nodeargs + list(args)),
+        "arguments to run_cli must be bytes -- convert using unicode_to_argv",
+        verb=verb,
+        args=args,
+        nodeargs=nodeargs,
+    )
     argv = nodeargs + [verb] + list(args)
     stdin = kwargs.get("stdin", "")
-    stdout = StringIO()
-    stderr = StringIO()
+    if encoding is None:
+        # The original behavior, the Python 2 behavior, is to accept either
+        # bytes or unicode and try to automatically encode or decode as
+        # necessary.  This works okay for ASCII and if LANG is set
+        # appropriately.  These aren't great constraints so we should move
+        # away from this behavior.
+        stdout = StringIO()
+        stderr = StringIO()
+    else:
+        # The new behavior, the Python 3 behavior, is to accept unicode and
+        # encode it using a specific encoding.  For older versions of Python
+        # 3, the encoding is determined from LANG (bad) but for newer Python
+        # 3, the encoding is always utf-8 (good).  Tests can pass in different
+        # encodings to exercise different behaviors.
+        stdout = TextIOWrapper(BytesIO(), encoding)
+        stderr = TextIOWrapper(BytesIO(), encoding)
     d = defer.succeed(argv)
     d.addCallback(runner.parse_or_exit_with_explanation, stdout=stdout)
     d.addCallback(runner.dispatch,
                   stdin=StringIO(stdin),
                   stdout=stdout, stderr=stderr)
     def _done(rc):
-        return 0, stdout.getvalue(), stderr.getvalue()
+        return 0, _getvalue(stdout), _getvalue(stderr)
     def _err(f):
         f.trap(SystemExit)
-        return f.value.code, stdout.getvalue(), stderr.getvalue()
+        return f.value.code, _getvalue(stdout), _getvalue(stderr)
     d.addCallbacks(_done, _err)
     return d
 
+
+def run_cli_unicode(verb, argv, nodeargs=None, stdin=None, encoding=None):
+    """
+    Run a Tahoe-LAFS CLI command.
+
+    :param unicode verb: The command to run.  For example, ``u"create-node"``.
+
+    :param [unicode] argv: The arguments to pass to the command.  For example,
+        ``[u"--hostname=localhost"]``.
+
+    :param [unicode] nodeargs: Extra arguments to pass to the Tahoe executable
+        before ``verb``.
+
+    :param unicode stdin: Text to pass to the command via stdin.
+
+    :param NoneType|str encoding: The name of an encoding to use for all
+        bytes/unicode conversions necessary *and* the encoding to cause stdio
+        to declare with its ``encoding`` attribute.  ``None`` means ASCII will
+        be used and no declaration will be made at all.
+    """
+    if nodeargs is None:
+        nodeargs = []
+    precondition(
+        all(isinstance(arg, unicode) for arg in [verb] + nodeargs + argv),
+        "arguments to run_cli_unicode must be unicode",
+        verb=verb,
+        nodeargs=nodeargs,
+        argv=argv,
+    )
+    codec = encoding or "ascii"
+    encode = lambda t: None if t is None else t.encode(codec)
+    d = run_cli_bytes(
+        encode(verb),
+        nodeargs=list(encode(arg) for arg in nodeargs),
+        stdin=encode(stdin),
+        encoding=encoding,
+        *list(encode(arg) for arg in argv)
+    )
+    def maybe_decode(result):
+        code, stdout, stderr = result
+        if isinstance(stdout, bytes):
+            stdout = stdout.decode(codec)
+        if isinstance(stderr, bytes):
+            stderr = stderr.decode(codec)
+        return code, stdout, stderr
+    d.addCallback(maybe_decode)
+    return d
+
+
+run_cli = run_cli_bytes
+
+
 def parse_cli(*argv):
     # This parses the CLI options (synchronously), and returns the Options
     # argument, or throws usage.UsageError if something went wrong.
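For orientation (illustrative, not part of the diff): inside a trial test class, the new helper is typically driven like this; ``inlineCallbacks`` comes from ``twisted.internet.defer``, and the command shown is hypothetical:

    from twisted.internet.defer import inlineCallbacks

    @inlineCallbacks
    def test_example(self):
        # run_cli_unicode returns a Deferred firing with (rc, stdout, stderr);
        # passing encoding="utf-8" makes the fake stdio declare utf-8.
        rc, stdout, stderr = yield run_cli_unicode(
            u"list-aliases", [u"--json"], encoding="utf-8")
        self.assertEqual(rc, 0)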
@@ -1,5 +1,15 @@
 # -*- coding: utf-8 -*-
+"""
+Ported to Python 3.
+"""
 from __future__ import print_function
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import unicode_literals
+
+from future.utils import PY2
+if PY2:
+    from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401
 
 from allmydata.test import common
 from allmydata.monitor import Monitor
@@ -62,7 +72,7 @@ class RepairTestMixin(object):
         c0 = self.g.clients[0]
         c1 = self.g.clients[1]
         c0.encoding_params['max_segment_size'] = 12
-        d = c0.upload(upload.Data(common.TEST_DATA, convergence=""))
+        d = c0.upload(upload.Data(common.TEST_DATA, convergence=b""))
         def _stash_uri(ur):
             self.uri = ur.get_uri()
             self.c0_filenode = c0.create_node_from_uri(ur.get_uri())
@@ -464,7 +474,7 @@ class Repairer(GridTestMixin, unittest.TestCase, RepairTestMixin,
         # previously-deleted share #2.
 
         d.addCallback(lambda ignored:
-                      self.delete_shares_numbered(self.uri, range(3, 10+1)))
+                      self.delete_shares_numbered(self.uri, list(range(3, 10+1))))
         d.addCallback(lambda ignored: download_to_data(self.c1_filenode))
         d.addCallback(lambda newdata:
                       self.failUnlessEqual(newdata, common.TEST_DATA))
@@ -476,7 +486,7 @@ class Repairer(GridTestMixin, unittest.TestCase, RepairTestMixin,
         self.set_up_grid(num_clients=2)
         d = self.upload_and_stash()
         d.addCallback(lambda ignored:
-                      self.delete_shares_numbered(self.uri, range(7)))
+                      self.delete_shares_numbered(self.uri, list(range(7))))
         d.addCallback(lambda ignored: self._stash_counts())
         d.addCallback(lambda ignored:
                       self.c0_filenode.check_and_repair(Monitor(),
@@ -509,7 +519,7 @@ class Repairer(GridTestMixin, unittest.TestCase, RepairTestMixin,
         # previously-deleted share #2.
 
         d.addCallback(lambda ignored:
-                      self.delete_shares_numbered(self.uri, range(3, 10+1)))
+                      self.delete_shares_numbered(self.uri, list(range(3, 10+1))))
         d.addCallback(lambda ignored: download_to_data(self.c1_filenode))
         d.addCallback(lambda newdata:
                       self.failUnlessEqual(newdata, common.TEST_DATA))
@@ -527,7 +537,7 @@ class Repairer(GridTestMixin, unittest.TestCase, RepairTestMixin,
         # distributing the shares widely enough to satisfy the default
         # happiness setting.
         def _delete_some_servers(ignored):
-            for i in xrange(7):
+            for i in range(7):
                 self.g.remove_server(self.g.servers_by_number[i].my_nodeid)
 
             assert len(self.g.servers_by_number) == 3
@@ -640,7 +650,7 @@ class Repairer(GridTestMixin, unittest.TestCase, RepairTestMixin,
         # downloading and has the right contents. This can't work
         # unless it has already repaired the previously-corrupted share.
         def _then_delete_7_and_try_a_download(unused=None):
-            shnums = range(10)
+            shnums = list(range(10))
             shnums.remove(shnum)
             random.shuffle(shnums)
             for sharenum in shnums[:7]:
@@ -679,10 +689,10 @@ class Repairer(GridTestMixin, unittest.TestCase, RepairTestMixin,
         self.basedir = "repairer/Repairer/test_tiny_reads"
         self.set_up_grid()
         c0 = self.g.clients[0]
-        DATA = "a"*135
+        DATA = b"a"*135
         c0.encoding_params['k'] = 22
         c0.encoding_params['n'] = 66
-        d = c0.upload(upload.Data(DATA, convergence=""))
+        d = c0.upload(upload.Data(DATA, convergence=b""))
         def _then(ur):
             self.uri = ur.get_uri()
             self.delete_shares_numbered(self.uri, [0])
@@ -142,9 +142,8 @@ class BinTahoe(common_util.SignalMixin, unittest.TestCase, RunBinTahoeMixin):
 
 
 class CreateNode(unittest.TestCase):
-    # exercise "tahoe create-node", create-introducer,
-    # create-key-generator, and create-stats-gatherer, by calling the
-    # corresponding code as a subroutine.
+    # exercise "tahoe create-node", create-introducer, and
+    # create-key-generator by calling the corresponding code as a subroutine.
 
     def workdir(self, name):
         basedir = os.path.join("test_runner", "CreateNode", name)
@@ -243,48 +242,11 @@ class CreateNode(unittest.TestCase):
     def test_introducer(self):
         self.do_create("introducer", "--hostname=127.0.0.1")

-    def test_stats_gatherer(self):
-        self.do_create("stats-gatherer", "--hostname=127.0.0.1")
-
     def test_subcommands(self):
         # no arguments should trigger a command listing, via UsageError
         self.failUnlessRaises(usage.UsageError, parse_cli,
                               )

-    @inlineCallbacks
-    def test_stats_gatherer_good_args(self):
-        rc,out,err = yield run_cli("create-stats-gatherer", "--hostname=foo",
-                                   self.mktemp())
-        self.assertEqual(rc, 0)
-        rc,out,err = yield run_cli("create-stats-gatherer",
-                                   "--location=tcp:foo:1234",
-                                   "--port=tcp:1234", self.mktemp())
-        self.assertEqual(rc, 0)
-
-
-    def test_stats_gatherer_bad_args(self):
-        def _test(args):
-            argv = args.split()
-            self.assertRaises(usage.UsageError, parse_cli, *argv)
-
-        # missing hostname/location/port
-        _test("create-stats-gatherer D")
-
-        # missing port
-        _test("create-stats-gatherer --location=foo D")
-
-        # missing location
-        _test("create-stats-gatherer --port=foo D")
-
-        # can't provide both
-        _test("create-stats-gatherer --hostname=foo --port=foo D")
-
-        # can't provide both
-        _test("create-stats-gatherer --hostname=foo --location=foo D")
-
-        # can't provide all three
-        _test("create-stats-gatherer --hostname=foo --location=foo --port=foo D")
-

 class RunNode(common_util.SignalMixin, unittest.TestCase, pollmixin.PollMixin,
               RunBinTahoeMixin):
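The deleted tests above encode the CLI contract for the removed subcommand: --hostname alone is valid, --location and --port are valid only as a pair, and any other combination is an error. A hypothetical minimal analogue of that validation, using twisted.python.usage (class and option names here are illustrative, not tahoe's actual code)::

    from twisted.python import usage

    class CreateServiceOptions(usage.Options):
        optParameters = [
            ("hostname", None, None, "hostname used to auto-assign a listening port"),
            ("location", None, None, "external connection hint to advertise"),
            ("port", None, None, "server endpoint to listen on"),
        ]

        def postOptions(self):
            # --hostname is shorthand and may not be mixed with the explicit pair
            if self["hostname"] and (self["location"] or self["port"]):
                raise usage.UsageError("--hostname may not be combined with --location/--port")
            # --location and --port only make sense together
            if bool(self["location"]) != bool(self["port"]):
                raise usage.UsageError("--location and --port must be given together")
            # at least one form must be supplied
            if not (self["hostname"] or self["location"]):
                raise usage.UsageError("provide --hostname, or --location plus --port")

    opts = CreateServiceOptions()
    opts.parseOptions(["--hostname", "foo"])    # valid
    # opts.parseOptions(["--location", "foo"])  # raises UsageError: missing --port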
@@ -23,7 +23,6 @@ from allmydata.util import log, base32
 from allmydata.util.encodingutil import quote_output, unicode_to_argv
 from allmydata.util.fileutil import abspath_expanduser_unicode
 from allmydata.util.consumer import MemoryConsumer, download_to_data
-from allmydata.stats import StatsGathererService
 from allmydata.interfaces import IDirectoryNode, IFileNode, \
      NoSuchChildError, NoSharesError
 from allmydata.monitor import Monitor
@@ -667,9 +666,6 @@ class SystemTestMixin(pollmixin.PollMixin, testutil.StallMixin):
         self.sparent = service.MultiService()
         self.sparent.startService()

-        self.stats_gatherer = None
-        self.stats_gatherer_furl = None
-
     def tearDown(self):
         log.msg("shutting down SystemTest services")
         d = self.sparent.stopService()
@@ -713,7 +709,7 @@ class SystemTestMixin(pollmixin.PollMixin, testutil.StallMixin):
             return f.read().strip()

     @inlineCallbacks
-    def set_up_nodes(self, NUMCLIENTS=5, use_stats_gatherer=False):
+    def set_up_nodes(self, NUMCLIENTS=5):
         """
         Create an introducer and ``NUMCLIENTS`` client nodes pointed at it. All
         of the nodes are running in this process.
@@ -726,9 +722,6 @@ class SystemTestMixin(pollmixin.PollMixin, testutil.StallMixin):

         :param int NUMCLIENTS: The number of client nodes to create.

-        :param bool use_stats_gatherer: If ``True`` then also create a stats
-            gatherer and configure the other nodes to use it.
-
         :return: A ``Deferred`` that fires when the nodes have connected to
             each other.
         """
@@ -737,33 +730,7 @@ class SystemTestMixin(pollmixin.PollMixin, testutil.StallMixin):
         self.introducer = yield self._create_introducer()
         self.add_service(self.introducer)
         self.introweb_url = self._get_introducer_web()

-        if use_stats_gatherer:
-            yield self._set_up_stats_gatherer()
         yield self._set_up_client_nodes()
-        if use_stats_gatherer:
-            yield self._grab_stats()
-
-    def _set_up_stats_gatherer(self):
-        statsdir = self.getdir("stats_gatherer")
-        fileutil.make_dirs(statsdir)
-
-        location_hint, port_endpoint = self.port_assigner.assign(reactor)
-        fileutil.write(os.path.join(statsdir, "location"), location_hint)
-        fileutil.write(os.path.join(statsdir, "port"), port_endpoint)
-        self.stats_gatherer_svc = StatsGathererService(statsdir)
-        self.stats_gatherer = self.stats_gatherer_svc.stats_gatherer
-        self.stats_gatherer_svc.setServiceParent(self.sparent)
-
-        d = fireEventually()
-        sgf = os.path.join(statsdir, 'stats_gatherer.furl')
-        def check_for_furl():
-            return os.path.exists(sgf)
-        d.addCallback(lambda junk: self.poll(check_for_furl, timeout=30))
-        def get_furl(junk):
-            self.stats_gatherer_furl = file(sgf, 'rb').read().strip()
-        d.addCallback(get_furl)
-        return d
-
     @inlineCallbacks
     def _set_up_client_nodes(self):
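Worth noting: the helper deleted above could not have survived the Python 3 port as written, because it reads the FURL with the Python-2-only file() builtin. The Python 3 spelling of that read would be (path illustrative)::

    with open("stats_gatherer.furl", "rb") as f:
        stats_gatherer_furl = f.read().strip()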
@@ -833,15 +800,11 @@ class SystemTestMixin(pollmixin.PollMixin, testutil.StallMixin):
             value = value.encode("utf-8")
         config.setdefault(section, {})[feature] = value

         setclient = partial(setconf, config, which, "client")
         setnode = partial(setconf, config, which, "node")
         sethelper = partial(setconf, config, which, "helper")

         setnode("nickname", u"client %d \N{BLACK SMILING FACE}" % (which,))

-        if self.stats_gatherer_furl:
-            setclient("stats_gatherer.furl", self.stats_gatherer_furl)
-
         tub_location_hint, tub_port_endpoint = self.port_assigner.assign(reactor)
         setnode("tub.port", tub_port_endpoint)
         setnode("tub.location", tub_location_hint)
@@ -872,10 +835,6 @@ class SystemTestMixin(pollmixin.PollMixin, testutil.StallMixin):
         fileutil.write(os.path.join(basedir, 'tahoe.cfg'), config)
         return basedir

-    def _grab_stats(self):
-        d = self.stats_gatherer.poll()
-        return d
-
     def bounce_client(self, num):
         c = self.clients[num]
         d = c.disownServiceParent()
@@ -1303,25 +1262,11 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
         d.addCallback(_upload_resumable)

         def _grab_stats(ignored):
-            # the StatsProvider doesn't normally publish a FURL:
-            # instead it passes a live reference to the StatsGatherer
-            # (if and when it connects). To exercise the remote stats
-            # interface, we manually publish client0's StatsProvider
-            # and use client1 to query it.
-            sp = self.clients[0].stats_provider
-            sp_furl = self.clients[0].tub.registerReference(sp)
-            d = self.clients[1].tub.getReference(sp_furl)
-            d.addCallback(lambda sp_rref: sp_rref.callRemote("get_stats"))
-            def _got_stats(stats):
-                #print("STATS")
-                #from pprint import pprint
-                #pprint(stats)
-                s = stats["stats"]
-                self.failUnlessEqual(s["storage_server.accepting_immutable_shares"], 1)
-                c = stats["counters"]
-                self.failUnless("storage_server.allocate" in c)
-            d.addCallback(_got_stats)
-            return d
+            stats = self.clients[0].stats_provider.get_stats()
+            s = stats["stats"]
+            self.failUnlessEqual(s["storage_server.accepting_immutable_shares"], 1)
+            c = stats["counters"]
+            self.failUnless("storage_server.allocate" in c)
         d.addCallback(_grab_stats)

         return d
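The simplified test reads the stats dictionary directly from the local StatsProvider instead of round-tripping it over Foolscap. The shape it relies on, inferred from the assertions above (values illustrative, not a full schema)::

    stats = {
        "stats":    {"storage_server.accepting_immutable_shares": 1},
        "counters": {"storage_server.allocate": 3},
    }
    assert stats["stats"]["storage_server.accepting_immutable_shares"] == 1
    assert "storage_server.allocate" in stats["counters"]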
@@ -1629,7 +1574,7 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
     def test_filesystem(self):
         self.basedir = "system/SystemTest/test_filesystem"
         self.data = LARGE_DATA
-        d = self.set_up_nodes(use_stats_gatherer=True)
+        d = self.set_up_nodes()
         def _new_happy_semantics(ign):
             for c in self.clients:
                 c.encoding_params['happy'] = 1
@@ -2618,6 +2563,7 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):

 def _run_in_subprocess(ignored, verb, *args, **kwargs):
     stdin = kwargs.get("stdin")
+    # XXX https://tahoe-lafs.org/trac/tahoe-lafs/ticket/3548
     env = kwargs.get("env", os.environ)
     # Python warnings from the child process don't matter.
     env["PYTHONWARNINGS"] = "ignore"
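For reference, the PYTHONWARNINGS mechanism used above works in any Python: setting it in a child's environment suppresses warnings in that child. A self-contained sketch (copying os.environ is a hygiene choice; the code above mutates the shared mapping, which the XXX comment appears to flag)::

    import os
    import subprocess
    import sys

    env = dict(os.environ)                 # copy, to avoid mutating the parent's env
    env["PYTHONWARNINGS"] = "ignore"
    subprocess.check_call(
        [sys.executable, "-c", "import warnings; warnings.warn('noisy')"],
        env=env,                           # the child emits no warning text
    )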
@@ -36,6 +36,7 @@ PORTED_MODULES = [
     "allmydata.crypto.util",
     "allmydata.dirnode",
     "allmydata.hashtree",
+    "allmydata.immutable.checker",
     "allmydata.immutable.downloader",
     "allmydata.immutable.downloader.common",
     "allmydata.immutable.downloader.fetcher",
@@ -50,6 +51,7 @@ PORTED_MODULES = [
     "allmydata.immutable.layout",
     "allmydata.immutable.literal",
     "allmydata.immutable.offloaded",
+    "allmydata.immutable.repairer",
     "allmydata.immutable.upload",
     "allmydata.interfaces",
     "allmydata.introducer.client",
@@ -157,6 +159,7 @@ PORTED_TEST_MODULES = [
     "allmydata.test.test_observer",
     "allmydata.test.test_pipeline",
     "allmydata.test.test_python3",
+    "allmydata.test.test_repairer",
     "allmydata.test.test_spans",
     "allmydata.test.test_statistics",
     "allmydata.test.test_storage",
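These registries track which modules already run under Python 3. One plausible way such a list is consumed (assumed mechanism for illustration, not tahoe's actual test_python3 code; running it requires tahoe-lafs installed)::

    import importlib

    PORTED_MODULES = [
        "allmydata.immutable.checker",
        "allmydata.immutable.repairer",
    ]

    for name in PORTED_MODULES:
        importlib.import_module(name)   # a port regression fails loudly here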
@@ -252,6 +252,16 @@ ESCAPABLE_UNICODE = re.compile(u'([\uD800-\uDBFF][\uDC00-\uDFFF])|' # valid sur

 ESCAPABLE_8BIT = re.compile( br'[^ !#\x25-\x5B\x5D-\x5F\x61-\x7E]', re.DOTALL)

+def quote_output_u(*args, **kwargs):
+    """
+    Like ``quote_output`` but always return ``unicode``.
+    """
+    result = quote_output(*args, **kwargs)
+    if isinstance(result, unicode):
+        return result
+    return result.decode(kwargs.get("encoding", None) or io_encoding)
+
+
 def quote_output(s, quotemarks=True, quote_newlines=None, encoding=None):
     """
     Encode either a Unicode string or a UTF-8-encoded bytestring for representation
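The new wrapper exists because quote_output() may return either text or bytes depending on Python version and input; callers that need text unconditionally get it from quote_output_u(). A self-contained sketch of that contract, with a stand-in for quote_output (io_encoding and the stub are assumptions for illustration)::

    io_encoding = "utf-8"

    def quote_output_stub(s, **kwargs):
        # stands in for quote_output(), which may return bytes on Python 2
        return s if isinstance(s, str) else b"'example'"

    def quote_output_u_sketch(*args, **kwargs):
        result = quote_output_stub(*args, **kwargs)
        if isinstance(result, str):          # spelled `unicode` on Python 2
            return result
        return result.decode(kwargs.get("encoding", None) or io_encoding)

    assert isinstance(quote_output_u_sketch(b"name"), str)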
@@ -12,8 +12,6 @@
 <h2>General</h2>

 <ul>
-  <li>Load Average: <t:transparent t:render="load_average" /></li>
-  <li>Peak Load: <t:transparent t:render="peak_load" /></li>
   <li>Files Uploaded (immutable): <t:transparent t:render="uploads" /></li>
   <li>Files Downloaded (immutable): <t:transparent t:render="downloads" /></li>
   <li>Files Published (mutable): <t:transparent t:render="publishes" /></li>
@@ -1565,14 +1565,6 @@ class StatisticsElement(Element):
         # Note that `counters` can be empty.
         self._stats = provider.get_stats()

-    @renderer
-    def load_average(self, req, tag):
-        return tag(str(self._stats["stats"].get("load_monitor.avg_load")))
-
-    @renderer
-    def peak_load(self, req, tag):
-        return tag(str(self._stats["stats"].get("load_monitor.max_load")))
-
     @renderer
     def uploads(self, req, tag):
         files = self._stats["counters"].get("uploader.files_uploaded", 0)
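The two deleted renderers matched the two <li> entries removed from the statistics template above: each t:render attribute is resolved against a @renderer method on the Element. A self-contained sketch of that pattern (class name and stats values illustrative)::

    from twisted.web.template import Element, XMLString, renderer

    class StatsElementSketch(Element):
        loader = XMLString(
            '<div xmlns:t="http://twistedmatrix.com/ns/twisted.web.template/0.1">'
            'Files Uploaded: <span t:render="uploads" /></div>')

        def __init__(self, stats):
            Element.__init__(self)
            self._stats = stats

        @renderer
        def uploads(self, request, tag):
            # mirrors the surviving renderer: look up a counter, default to 0
            files = self._stats["counters"].get("uploader.files_uploaded", 0)
            return tag(str(files))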