Merge remote-tracking branch 'origin/master' into 3532.test_node-no-mock

Jean-Paul Calderone 2020-12-14 15:35:42 -05:00
commit 035cd8b4ac
35 changed files with 593 additions and 1436 deletions


@ -32,3 +32,11 @@ coverage:
patch:
default:
threshold: 1%
codecov:
# This is a public repository so supposedly we don't "need" to use an upload
# token. However, using one makes sure that CI jobs running against forked
# repositories have coverage uploaded to the right place in codecov so
# their reports aren't incomplete.
token: "abf679b6-e2e6-4b33-b7b5-6cfbd41ee691"


@ -273,7 +273,7 @@ Then, do the following:
[connections]
tcp = tor
* Launch the Tahoe server with ``tahoe start $NODEDIR``
* Launch the Tahoe server with ``tahoe run $NODEDIR``
The ``tub.port`` section will cause the Tahoe server to listen on PORT, but
bind the listening socket to the loopback interface, which is not reachable
@ -435,4 +435,3 @@ It is therefore important that your I2P router is sharing bandwidth with other
routers, so that you can give back as you use I2P. This will never impair the
performance of your Tahoe-LAFS node, because your I2P router will always
prioritize your own traffic.


@ -365,7 +365,7 @@ set the ``tub.location`` option described below.
also generally reduced when operating in private mode.
When False, any of the following configuration problems will cause
``tahoe start`` to throw a PrivacyError instead of starting the node:
``tahoe run`` to throw a PrivacyError instead of starting the node:
* ``[node] tub.location`` contains any ``tcp:`` hints


@ -85,7 +85,7 @@ Node Management
"``tahoe create-node [NODEDIR]``" is the basic make-a-new-node
command. It creates a new directory and populates it with files that
will allow the "``tahoe start``" and related commands to use it later
will allow the "``tahoe run``" and related commands to use it later
on. ``tahoe create-node`` creates nodes that have client functionality
(upload/download files), web API services (controlled by the
'[node]web.port' configuration), and storage services (unless
@ -94,8 +94,7 @@ on. ``tahoe create-node`` creates nodes that have client functionality
NODEDIR defaults to ``~/.tahoe/`` , and newly-created nodes default to
publishing a web server on port 3456 (limited to the loopback interface, at
127.0.0.1, to restrict access to other programs on the same host). All of the
other "``tahoe``" subcommands use corresponding defaults (with the exception
that "``tahoe run``" defaults to running a node in the current directory).
other "``tahoe``" subcommands use corresponding defaults.
"``tahoe create-client [NODEDIR]``" creates a node with no storage service.
That is, it behaves like "``tahoe create-node --no-storage [NODEDIR]``".
@ -117,25 +116,6 @@ the same way on all platforms and logs to stdout. If you want to run
the process as a daemon, it is recommended that you use your favourite
daemonization tool.
The now-deprecated "``tahoe start [NODEDIR]``" command will launch a
previously-created node. It will launch the node into the background
using ``tahoe daemonize`` (an internal-only command, not for user
use). On some platforms (including Windows) this command is unable to
run a daemon in the background; in that case it behaves in the same
way as "``tahoe run``". ``tahoe start`` also monitors the logs for up
to 5 seconds looking for either a successful startup message or for
early failure messages and produces an appropriate exit code. You are
encouraged to use ``tahoe run`` along with your favourite
daemonization tool instead of this. ``tahoe start`` is maintained for
backwards compatibility of users already using it; new scripts should
depend on ``tahoe run``.
"``tahoe stop [NODEDIR]``" will shut down a running node. "``tahoe
restart [NODEDIR]``" will stop and then restart a running
node. Similar to above, you should use ``tahoe run`` instead alongside
your favourite daemonization tool.
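
As a purely illustrative sketch (not part of the Tahoe-LAFS documentation), a
small Python wrapper can keep "``tahoe run``" in the foreground and report
readiness by watching for the same "client running" text that this change's
integration tests wait for; the ``tahoe`` executable being on ``PATH`` and the
``./example-node`` directory are assumptions::

    import subprocess
    import sys

    node_dir = "./example-node"  # hypothetical node directory from `tahoe create-node`
    proc = subprocess.Popen(
        ["tahoe", "run", node_dir],
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        universal_newlines=True,
    )
    # Stream the node's output; flag readiness when the magic text appears.
    for line in proc.stdout:
        sys.stdout.write(line)
        if "client running" in line:
            sys.stdout.write("-- node is up --\n")
    sys.exit(proc.wait())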
File Store Manipulation
=======================


@ -2145,7 +2145,7 @@ you could do the following::
tahoe debug dump-cap URI:CHK:n7r3m6wmomelk4sep3kw5cvduq:os7ijw5c3maek7pg65e5254k2fzjflavtpejjyhshpsxuqzhcwwq:3:20:14861
-> storage index: whpepioyrnff7orecjolvbudeu
echo "whpepioyrnff7orecjolvbudeu my puppy told me to" >>$NODEDIR/access.blacklist
tahoe restart $NODEDIR
# ... restart the node to re-read configuration ...
tahoe get URI:CHK:n7r3m6wmomelk4sep3kw5cvduq:os7ijw5c3maek7pg65e5254k2fzjflavtpejjyhshpsxuqzhcwwq:3:20:14861
-> error, 403 Access Prohibited: my puppy told me to


@ -23,7 +23,7 @@ Config setting File Comment
``BASEDIR/introducer.furl`` ``BASEDIR/private/introducers.yaml``
``[client]helper.furl`` ``BASEDIR/helper.furl``
``[client]key_generator.furl`` ``BASEDIR/key_generator.furl``
``[client]stats_gatherer.furl`` ``BASEDIR/stats_gatherer.furl``
``BASEDIR/stats_gatherer.furl`` Stats gatherer has been removed.
``[storage]enabled`` ``BASEDIR/no_storage`` (``False`` if ``no_storage`` exists)
``[storage]readonly`` ``BASEDIR/readonly_storage`` (``True`` if ``readonly_storage`` exists)
``[storage]sizelimit`` ``BASEDIR/sizelimit``
@ -47,3 +47,10 @@ the now (since Tahoe-LAFS v1.3.0) unsupported
addresses specified in ``advertised_ip_addresses`` were used in
addition to any that were automatically discovered), whereas the new
``tahoe.cfg`` directive is not (``tub.location`` is used verbatim).
The stats gatherer has been broken at least since Tahoe-LAFS v1.13.0.
The (broken) functionality of ``[client]stats_gatherer.furl`` (which
was previously in ``BASEDIR/stats_gatherer.furl``) is scheduled to be
completely removed after Tahoe-LAFS v1.15.0. After that point, if
your configuration contains a ``[client]stats_gatherer.furl``, your
node will refuse to start.
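
As a rough illustration of that refusal (this is not the node's actual startup
check), the test amounts to looking for the removed option in ``tahoe.cfg``;
the node directory below is a placeholder::

    import configparser
    import sys

    cfg = configparser.ConfigParser()
    cfg.read("example-node/tahoe.cfg")  # hypothetical BASEDIR
    if cfg.has_option("client", "stats_gatherer.furl"):
        sys.exit("[client]stats_gatherer.furl is no longer supported; "
                 "remove it before starting the node")
    print("no removed stats-gatherer option present")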


@ -128,10 +128,9 @@ provided in ``misc/incident-gatherer/support_classifiers.py`` . There is
roughly one category for each ``log.WEIRD``-or-higher level event in the
Tahoe source code.
The incident gatherer is created with the "``flogtool
create-incident-gatherer WORKDIR``" command, and started with "``tahoe
start``". The generated "``gatherer.tac``" file should be modified to add
classifier functions.
The incident gatherer is created with the "``flogtool create-incident-gatherer
WORKDIR``" command, and started with "``tahoe run``". The generated
"``gatherer.tac``" file should be modified to add classifier functions.
The incident gatherer writes incident names (which are simply the relative
pathname of the ``incident-\*.flog.bz2`` file) into ``classified/CATEGORY``.
@ -175,7 +174,7 @@ things that happened on multiple machines (such as comparing a client node
making a request with the storage servers that respond to that request).
Create the Log Gatherer with the "``flogtool create-gatherer WORKDIR``"
command, and start it with "``tahoe start``". Then copy the contents of the
command, and start it with "``twistd -ny gatherer.tac``". Then copy the contents of the
``log_gatherer.furl`` file it creates into the ``BASEDIR/tahoe.cfg`` file
(under the key ``log_gatherer.furl`` of the section ``[node]``) of all nodes
that should be sending it log events. (See :doc:`configuration`)
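
A small standard-library sketch of that copy step (the gatherer WORKDIR and the
node BASEDIR below are placeholder paths, and note that rewriting ``tahoe.cfg``
with ``configparser`` discards any comments in it)::

    import configparser

    # Read the fURL the log gatherer wrote into its working directory.
    with open("gatherer-workdir/log_gatherer.furl") as f:
        furl = f.read().strip()

    # Add it under [node]log_gatherer.furl in the node's tahoe.cfg.
    cfg = configparser.ConfigParser()
    cfg.read("example-node/tahoe.cfg")
    if not cfg.has_section("node"):
        cfg.add_section("node")
    cfg.set("node", "log_gatherer.furl", furl)
    with open("example-node/tahoe.cfg", "w") as f:
        cfg.write(f)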


@ -81,9 +81,7 @@ does not offer its disk space to other nodes. To configure other behavior,
use “``tahoe create-node``” or see :doc:`configuration`.
The “``tahoe run``” command above will run the node in the foreground.
On Unix, you can run it in the background instead by using the
``tahoe start``” command. To stop a node started in this way, use
``tahoe stop``”. ``tahoe --help`` gives a summary of all commands.
``tahoe --help`` gives a summary of all commands.
Running a Server or Introducer
@ -99,12 +97,10 @@ and ``--location`` arguments.
To construct an introducer, create a new base directory for it (the name
of the directory is up to you), ``cd`` into it, and run “``tahoe
create-introducer --hostname=example.net .``” (but using the hostname of
your VPS). Now run the introducer using “``tahoe start .``”. After it
your VPS). Now run the introducer using “``tahoe run .``”. After it
starts, it will write a file named ``introducer.furl`` into the
``private/`` subdirectory of that base directory. This file contains the
URL the other nodes must use in order to connect to this introducer.
(Note that “``tahoe run .``” doesn't work for introducers, this is a
known issue: `#937`_.)
You can distribute your Introducer fURL securely to new clients by using
the ``tahoe invite`` command. This will prepare some JSON to send to the


@ -201,9 +201,8 @@ log_gatherer.furl = {log_furl}
with open(join(intro_dir, 'tahoe.cfg'), 'w') as f:
f.write(config)
# on windows, "tahoe start" means: run forever in the foreground,
# but on linux it means daemonize. "tahoe run" is consistent
# between platforms.
# "tahoe run" is consistent across Linux/macOS/Windows, unlike the old
# "start" command.
protocol = _MagicTextProtocol('introducer running')
transport = _tahoe_runner_optional_coverage(
protocol,
@ -278,9 +277,8 @@ log_gatherer.furl = {log_furl}
with open(join(intro_dir, 'tahoe.cfg'), 'w') as f:
f.write(config)
# on windows, "tahoe start" means: run forever in the foreground,
# but on linux it means daemonize. "tahoe run" is consistent
# between platforms.
# "tahoe run" is consistent across Linux/macOS/Windows, unlike the old
# "start" command.
protocol = _MagicTextProtocol('introducer running')
transport = _tahoe_runner_optional_coverage(
protocol,


@ -189,10 +189,8 @@ def _run_node(reactor, node_dir, request, magic_text):
magic_text = "client running"
protocol = _MagicTextProtocol(magic_text)
# on windows, "tahoe start" means: run forever in the foreground,
# but on linux it means daemonize. "tahoe run" is consistent
# between platforms.
# "tahoe run" is consistent across Linux/macOS/Windows, unlike the old
# "start" command.
transport = _tahoe_runner_optional_coverage(
protocol,
reactor,

newsfragments/3523.minor (0 lines changed)
newsfragments/3524.minor (0 lines changed)


@ -0,0 +1 @@
The deprecated ``tahoe`` start, restart, stop, and daemonize sub-commands have been removed.

newsfragments/3553.minor (0 lines changed)
newsfragments/3555.minor (0 lines changed)
newsfragments/3558.minor (0 lines changed)


@ -1,4 +1,16 @@
"""Directory Node implementation."""
"""Directory Node implementation.
Ported to Python 3.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
from future.utils import PY2
if PY2:
# Skip dict so it doesn't break things.
from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, list, object, range, str, max, min # noqa: F401
from past.builtins import unicode
import time
@ -37,6 +49,8 @@ from eliot.twisted import (
NAME = Field.for_types(
u"name",
# Make sure this works on Python 2; with str, it gets Future str which
# breaks Eliot.
[unicode],
u"The name linking the parent to this node.",
)
@ -179,7 +193,7 @@ class Adder(object):
def modify(self, old_contents, servermap, first_time):
children = self.node._unpack_contents(old_contents)
now = time.time()
for (namex, (child, new_metadata)) in self.entries.iteritems():
for (namex, (child, new_metadata)) in list(self.entries.items()):
name = normalize(namex)
precondition(IFilesystemNode.providedBy(child), child)
@ -205,8 +219,8 @@ class Adder(object):
return new_contents
def _encrypt_rw_uri(writekey, rw_uri):
precondition(isinstance(rw_uri, str), rw_uri)
precondition(isinstance(writekey, str), writekey)
precondition(isinstance(rw_uri, bytes), rw_uri)
precondition(isinstance(writekey, bytes), writekey)
salt = hashutil.mutable_rwcap_salt_hash(rw_uri)
key = hashutil.mutable_rwcap_key_hash(salt, writekey)
@ -221,7 +235,7 @@ def _encrypt_rw_uri(writekey, rw_uri):
def pack_children(childrenx, writekey, deep_immutable=False):
# initial_children must have metadata (i.e. {} instead of None)
children = {}
for (namex, (node, metadata)) in childrenx.iteritems():
for (namex, (node, metadata)) in list(childrenx.items()):
precondition(isinstance(metadata, dict),
"directory creation requires metadata to be a dict, not None", metadata)
children[normalize(namex)] = (node, metadata)
@ -245,18 +259,19 @@ def _pack_normalized_children(children, writekey, deep_immutable=False):
If deep_immutable is True, I will require that all my children are deeply
immutable, and will raise a MustBeDeepImmutableError if not.
"""
precondition((writekey is None) or isinstance(writekey, str), writekey)
precondition((writekey is None) or isinstance(writekey, bytes), writekey)
has_aux = isinstance(children, AuxValueDict)
entries = []
for name in sorted(children.keys()):
assert isinstance(name, unicode)
assert isinstance(name, str)
entry = None
(child, metadata) = children[name]
child.raise_error()
if deep_immutable and not child.is_allowed_in_immutable_directory():
raise MustBeDeepImmutableError("child %s is not allowed in an immutable directory" %
quote_output(name, encoding='utf-8'), name)
raise MustBeDeepImmutableError(
"child %r is not allowed in an immutable directory" % (name,),
name)
if has_aux:
entry = children.get_aux(name)
if not entry:
@ -264,26 +279,26 @@ def _pack_normalized_children(children, writekey, deep_immutable=False):
assert isinstance(metadata, dict)
rw_uri = child.get_write_uri()
if rw_uri is None:
rw_uri = ""
assert isinstance(rw_uri, str), rw_uri
rw_uri = b""
assert isinstance(rw_uri, bytes), rw_uri
# should be prevented by MustBeDeepImmutableError check above
assert not (rw_uri and deep_immutable)
ro_uri = child.get_readonly_uri()
if ro_uri is None:
ro_uri = ""
assert isinstance(ro_uri, str), ro_uri
ro_uri = b""
assert isinstance(ro_uri, bytes), ro_uri
if writekey is not None:
writecap = netstring(_encrypt_rw_uri(writekey, rw_uri))
else:
writecap = ZERO_LEN_NETSTR
entry = "".join([netstring(name.encode("utf-8")),
entry = b"".join([netstring(name.encode("utf-8")),
netstring(strip_prefix_for_ro(ro_uri, deep_immutable)),
writecap,
netstring(json.dumps(metadata))])
netstring(json.dumps(metadata).encode("utf-8"))])
entries.append(netstring(entry))
return "".join(entries)
return b"".join(entries)
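# Aside (not part of dirnode.py): each entry above is framed with netstring(),
# which is why every field must now be bytes on Python 3.  A standalone sketch
# of that framing, assuming the conventional "length:payload," netstring layout
# rather than Tahoe's exact helper:
def _netstring_sketch(payload):
    assert isinstance(payload, bytes), payload
    return b"%d:%s," % (len(payload), payload)

# Mirrors the fields joined per child: UTF-8 name, read-only URI, the
# zero-length writecap slot, and JSON metadata, all as bytes.
example_entry = b"".join([
    _netstring_sketch(u"example-child".encode("utf-8")),
    _netstring_sketch(b"URI:read-only-cap-placeholder"),
    _netstring_sketch(b""),
    _netstring_sketch(b'{"metadata": {}}'),
])
# b'13:example-child,29:URI:read-only-cap-placeholder,0:,16:{"metadata": {}},'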
@implementer(IDirectoryNode, ICheckable, IDeepCheckable)
class DirectoryNode(object):
@ -352,9 +367,9 @@ class DirectoryNode(object):
# cleartext. The 'name' is UTF-8 encoded, and should be normalized to NFC.
# The rwcapdata is formatted as:
# pack("16ss32s", iv, AES(H(writekey+iv), plaintext_rw_uri), mac)
assert isinstance(data, str), (repr(data), type(data))
assert isinstance(data, bytes), (repr(data), type(data))
# an empty directory is serialized as an empty string
if data == "":
if data == b"":
return AuxValueDict()
writeable = not self.is_readonly()
mutable = self.is_mutable()
@ -373,7 +388,7 @@ class DirectoryNode(object):
# Therefore we normalize names going both in and out of directories.
name = normalize(namex_utf8.decode("utf-8"))
rw_uri = ""
rw_uri = b""
if writeable:
rw_uri = self._decrypt_rwcapdata(rwcapdata)
@ -384,8 +399,8 @@ class DirectoryNode(object):
# ro_uri is treated in the same way for consistency.
# rw_uri and ro_uri will be either None or a non-empty string.
rw_uri = rw_uri.rstrip(' ') or None
ro_uri = ro_uri.rstrip(' ') or None
rw_uri = rw_uri.rstrip(b' ') or None
ro_uri = ro_uri.rstrip(b' ') or None
try:
child = self._create_and_validate_node(rw_uri, ro_uri, name)
@ -468,7 +483,7 @@ class DirectoryNode(object):
exists a child of the given name, False if not."""
name = normalize(namex)
d = self._read()
d.addCallback(lambda children: children.has_key(name))
d.addCallback(lambda children: name in children)
return d
def _get(self, children, name):
@ -543,7 +558,7 @@ class DirectoryNode(object):
else:
pathx = pathx.split("/")
for p in pathx:
assert isinstance(p, unicode), p
assert isinstance(p, str), p
childnamex = pathx[0]
remaining_pathx = pathx[1:]
if remaining_pathx:
@ -555,8 +570,8 @@ class DirectoryNode(object):
return d
def set_uri(self, namex, writecap, readcap, metadata=None, overwrite=True):
precondition(isinstance(writecap, (str,type(None))), writecap)
precondition(isinstance(readcap, (str,type(None))), readcap)
precondition(isinstance(writecap, (bytes, type(None))), writecap)
precondition(isinstance(readcap, (bytes, type(None))), readcap)
# We now allow packing unknown nodes, provided they are valid
# for this type of directory.
@ -569,16 +584,16 @@ class DirectoryNode(object):
# this takes URIs
a = Adder(self, overwrite=overwrite,
create_readonly_node=self._create_readonly_node)
for (namex, e) in entries.iteritems():
assert isinstance(namex, unicode), namex
for (namex, e) in entries.items():
assert isinstance(namex, str), namex
if len(e) == 2:
writecap, readcap = e
metadata = None
else:
assert len(e) == 3
writecap, readcap, metadata = e
precondition(isinstance(writecap, (str,type(None))), writecap)
precondition(isinstance(readcap, (str,type(None))), readcap)
precondition(isinstance(writecap, (bytes,type(None))), writecap)
precondition(isinstance(readcap, (bytes,type(None))), readcap)
# We now allow packing unknown nodes, provided they are valid
# for this type of directory.
@ -779,7 +794,7 @@ class DirectoryNode(object):
# in the nodecache) seem to consume about 2000 bytes.
dirkids = []
filekids = []
for name, (child, metadata) in sorted(children.iteritems()):
for name, (child, metadata) in sorted(children.items()):
childpath = path + [name]
if isinstance(child, UnknownNode):
walker.add_node(child, childpath)


@ -1,3 +1,15 @@
"""
Ported to Python 3.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
from future.utils import PY2
if PY2:
from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min # noqa: F401
import weakref
from zope.interface import implementer
from allmydata.util.assertutil import precondition
@ -126,7 +138,7 @@ class NodeMaker(object):
def create_new_mutable_directory(self, initial_children={}, version=None):
# initial_children must have metadata (i.e. {} instead of None)
for (name, (node, metadata)) in initial_children.iteritems():
for (name, (node, metadata)) in initial_children.items():
precondition(isinstance(metadata, dict),
"create_new_mutable_directory requires metadata to be a dict, not None", metadata)
node.raise_error()


@ -37,7 +37,7 @@ class BaseOptions(usage.Options):
super(BaseOptions, self).__init__()
self.command_name = os.path.basename(sys.argv[0])
# Only allow "tahoe --version", not e.g. "tahoe start --version"
# Only allow "tahoe --version", not e.g. "tahoe <cmd> --version"
def opt_version(self):
raise usage.UsageError("--version not allowed on subcommands")


@ -1,269 +0,0 @@
from __future__ import print_function
import os, sys
from allmydata.scripts.common import BasedirOptions
from twisted.scripts import twistd
from twisted.python import usage
from twisted.python.reflect import namedAny
from twisted.internet.defer import maybeDeferred, fail
from twisted.application.service import Service
from allmydata.scripts.default_nodedir import _default_nodedir
from allmydata.util import fileutil
from allmydata.node import (
PortAssignmentRequired,
PrivacyError,
)
from allmydata.util.encodingutil import listdir_unicode, quote_local_unicode_path
from allmydata.util.configutil import UnknownConfigError
from allmydata.util.deferredutil import HookMixin
def get_pidfile(basedir):
"""
Returns the path to the PID file.
:param basedir: the node's base directory
:returns: the path to the PID file
"""
return os.path.join(basedir, u"twistd.pid")
def get_pid_from_pidfile(pidfile):
"""
Tries to read and return the PID stored in the node's PID file
(twistd.pid).
:param pidfile: try to read this PID file
:returns: A numeric PID on success, ``None`` if PID file absent or
inaccessible, ``-1`` if PID file invalid.
"""
try:
with open(pidfile, "r") as f:
pid = f.read()
except EnvironmentError:
return None
try:
pid = int(pid)
except ValueError:
return -1
return pid
def identify_node_type(basedir):
"""
:return unicode: None or one of: 'client', 'introducer', or
'key-generator'
"""
tac = u''
try:
for fn in listdir_unicode(basedir):
if fn.endswith(u".tac"):
tac = fn
break
except OSError:
return None
for t in (u"client", u"introducer", u"key-generator"):
if t in tac:
return t
return None
class RunOptions(BasedirOptions):
optParameters = [
("basedir", "C", None,
"Specify which Tahoe base directory should be used."
" This has the same effect as the global --node-directory option."
" [default: %s]" % quote_local_unicode_path(_default_nodedir)),
]
def parseArgs(self, basedir=None, *twistd_args):
# This can't handle e.g. 'tahoe start --nodaemon', since '--nodaemon'
# looks like an option to the tahoe subcommand, not to twistd. So you
# can either use 'tahoe start' or 'tahoe start NODEDIR
# --TWISTD-OPTIONS'. Note that 'tahoe --node-directory=NODEDIR start
# --TWISTD-OPTIONS' also isn't allowed, unfortunately.
BasedirOptions.parseArgs(self, basedir)
self.twistd_args = twistd_args
def getSynopsis(self):
return ("Usage: %s [global-options] %s [options]"
" [NODEDIR [twistd-options]]"
% (self.command_name, self.subcommand_name))
def getUsage(self, width=None):
t = BasedirOptions.getUsage(self, width) + "\n"
twistd_options = str(MyTwistdConfig()).partition("\n")[2].partition("\n\n")[0]
t += twistd_options.replace("Options:", "twistd-options:", 1)
t += """
Note that if any twistd-options are used, NODEDIR must be specified explicitly
(not by default or using -C/--basedir or -d/--node-directory), and followed by
the twistd-options.
"""
return t
class MyTwistdConfig(twistd.ServerOptions):
subCommands = [("DaemonizeTahoeNode", None, usage.Options, "node")]
stderr = sys.stderr
class DaemonizeTheRealService(Service, HookMixin):
"""
this HookMixin should really be a helper; our hooks:
- 'running': triggered when startup has completed; it triggers
with None if successful or a Failure otherwise.
"""
stderr = sys.stderr
def __init__(self, nodetype, basedir, options):
super(DaemonizeTheRealService, self).__init__()
self.nodetype = nodetype
self.basedir = basedir
# setup for HookMixin
self._hooks = {
"running": None,
}
self.stderr = options.parent.stderr
def startService(self):
def key_generator_removed():
return fail(ValueError("key-generator support removed, see #2783"))
def start():
node_to_instance = {
u"client": lambda: maybeDeferred(namedAny("allmydata.client.create_client"), self.basedir),
u"introducer": lambda: maybeDeferred(namedAny("allmydata.introducer.server.create_introducer"), self.basedir),
u"key-generator": key_generator_removed,
}
try:
service_factory = node_to_instance[self.nodetype]
except KeyError:
raise ValueError("unknown nodetype %s" % self.nodetype)
def handle_config_error(fail):
if fail.check(UnknownConfigError):
self.stderr.write("\nConfiguration error:\n{}\n\n".format(fail.value))
elif fail.check(PortAssignmentRequired):
self.stderr.write("\ntub.port cannot be 0: you must choose.\n\n")
elif fail.check(PrivacyError):
self.stderr.write("\n{}\n\n".format(fail.value))
else:
self.stderr.write("\nUnknown error\n")
fail.printTraceback(self.stderr)
reactor.stop()
d = service_factory()
def created(srv):
srv.setServiceParent(self.parent)
d.addCallback(created)
d.addErrback(handle_config_error)
d.addBoth(self._call_hook, 'running')
return d
from twisted.internet import reactor
reactor.callWhenRunning(start)
class DaemonizeTahoeNodePlugin(object):
tapname = "tahoenode"
def __init__(self, nodetype, basedir):
self.nodetype = nodetype
self.basedir = basedir
def makeService(self, so):
return DaemonizeTheRealService(self.nodetype, self.basedir, so)
def run(config):
"""
Runs a Tahoe-LAFS node in the foreground.
Sets up the IService instance corresponding to the type of node
that's starting and uses Twisted's twistd runner to disconnect our
process from the terminal.
"""
out = config.stdout
err = config.stderr
basedir = config['basedir']
quoted_basedir = quote_local_unicode_path(basedir)
print("'tahoe {}' in {}".format(config.subcommand_name, quoted_basedir), file=out)
if not os.path.isdir(basedir):
print("%s does not look like a directory at all" % quoted_basedir, file=err)
return 1
nodetype = identify_node_type(basedir)
if not nodetype:
print("%s is not a recognizable node directory" % quoted_basedir, file=err)
return 1
# Now prepare to turn into a twistd process. This os.chdir is the point
# of no return.
os.chdir(basedir)
twistd_args = []
if (nodetype in (u"client", u"introducer")
and "--nodaemon" not in config.twistd_args
and "--syslog" not in config.twistd_args
and "--logfile" not in config.twistd_args):
fileutil.make_dirs(os.path.join(basedir, u"logs"))
twistd_args.extend(["--logfile", os.path.join("logs", "twistd.log")])
twistd_args.extend(config.twistd_args)
twistd_args.append("DaemonizeTahoeNode") # point at our DaemonizeTahoeNodePlugin
twistd_config = MyTwistdConfig()
twistd_config.stdout = out
twistd_config.stderr = err
try:
twistd_config.parseOptions(twistd_args)
except usage.error as ue:
# these arguments were unsuitable for 'twistd'
print(config, file=err)
print("tahoe %s: usage error from twistd: %s\n" % (config.subcommand_name, ue), file=err)
return 1
twistd_config.loadedPlugins = {"DaemonizeTahoeNode": DaemonizeTahoeNodePlugin(nodetype, basedir)}
# handle invalid PID file (twistd might not start otherwise)
pidfile = get_pidfile(basedir)
if get_pid_from_pidfile(pidfile) == -1:
print("found invalid PID file in %s - deleting it" % basedir, file=err)
os.remove(pidfile)
# On Unix-like platforms:
# Unless --nodaemon was provided, the twistd.runApp() below spawns off a
# child process, and the parent calls os._exit(0), so there's no way for
# us to get control afterwards, even with 'except SystemExit'. If
# application setup fails (e.g. ImportError), runApp() will raise an
# exception.
#
# So if we wanted to do anything with the running child, we'd have two
# options:
#
# * fork first, and have our child wait for the runApp() child to get
# running. (note: just fork(). This is easier than fork+exec, since we
# don't have to get PATH and PYTHONPATH set up, since we're not
# starting a *different* process, just cloning a new instance of the
# current process)
# * or have the user run a separate command some time after this one
# exits.
#
# For Tahoe, we don't need to do anything with the child, so we can just
# let it exit.
#
# On Windows:
# twistd does not fork; it just runs in the current process whether or not
# --nodaemon is specified. (As on Unix, --nodaemon does have the side effect
# of causing us to log to stdout/stderr.)
if "--nodaemon" in twistd_args or sys.platform == "win32":
verb = "running"
else:
verb = "starting"
print("%s node in %s" % (verb, quoted_basedir), file=out)
twistd.runApp(twistd_config)
# we should only reach here if --nodaemon or equivalent was used
return 0


@ -9,8 +9,7 @@ from twisted.internet import defer, task, threads
from allmydata.scripts.common import get_default_nodedir
from allmydata.scripts import debug, create_node, cli, \
admin, tahoe_daemonize, tahoe_start, \
tahoe_stop, tahoe_restart, tahoe_run, tahoe_invite
admin, tahoe_run, tahoe_invite
from allmydata.util.encodingutil import quote_output, quote_local_unicode_path, get_io_encoding
from allmydata.util.eliotutil import (
opt_eliot_destination,
@ -37,19 +36,11 @@ if _default_nodedir:
# XXX all this 'dispatch' stuff needs to be unified + fixed up
_control_node_dispatch = {
"daemonize": tahoe_daemonize.daemonize,
"start": tahoe_start.start,
"run": tahoe_run.run,
"stop": tahoe_stop.stop,
"restart": tahoe_restart.restart,
}
process_control_commands = [
["run", None, tahoe_run.RunOptions, "run a node without daemonizing"],
["daemonize", None, tahoe_daemonize.DaemonizeOptions, "(deprecated) run a node in the background"],
["start", None, tahoe_start.StartOptions, "(deprecated) start a node in the background and confirm it started"],
["stop", None, tahoe_stop.StopOptions, "(deprecated) stop a node"],
["restart", None, tahoe_restart.RestartOptions, "(deprecated) restart a node"],
]


@ -1,16 +0,0 @@
from .run_common import (
RunOptions as _RunOptions,
run,
)
__all__ = [
"DaemonizeOptions",
"daemonize",
]
class DaemonizeOptions(_RunOptions):
subcommand_name = "daemonize"
def daemonize(config):
print("'tahoe daemonize' is deprecated; see 'tahoe run'")
return run(config)


@ -1,21 +0,0 @@
from __future__ import print_function
from .tahoe_start import StartOptions, start
from .tahoe_stop import stop, COULD_NOT_STOP
class RestartOptions(StartOptions):
subcommand_name = "restart"
def restart(config):
print("'tahoe restart' is deprecated; see 'tahoe run'")
stderr = config.stderr
rc = stop(config)
if rc == COULD_NOT_STOP:
print("ignoring couldn't-stop", file=stderr)
rc = 0
if rc:
print("not restarting", file=stderr)
return rc
return start(config)


@ -1,15 +1,233 @@
from .run_common import (
RunOptions as _RunOptions,
run,
)
from __future__ import print_function
__all__ = [
"RunOptions",
"run",
]
class RunOptions(_RunOptions):
import os, sys
from allmydata.scripts.common import BasedirOptions
from twisted.scripts import twistd
from twisted.python import usage
from twisted.python.reflect import namedAny
from twisted.internet.defer import maybeDeferred
from twisted.application.service import Service
from allmydata.scripts.default_nodedir import _default_nodedir
from allmydata.util.encodingutil import listdir_unicode, quote_local_unicode_path
from allmydata.util.configutil import UnknownConfigError
from allmydata.util.deferredutil import HookMixin
from allmydata.node import (
PortAssignmentRequired,
PrivacyError,
)
def get_pidfile(basedir):
"""
Returns the path to the PID file.
:param basedir: the node's base directory
:returns: the path to the PID file
"""
return os.path.join(basedir, u"twistd.pid")
def get_pid_from_pidfile(pidfile):
"""
Tries to read and return the PID stored in the node's PID file
(twistd.pid).
:param pidfile: try to read this PID file
:returns: A numeric PID on success, ``None`` if PID file absent or
inaccessible, ``-1`` if PID file invalid.
"""
try:
with open(pidfile, "r") as f:
pid = f.read()
except EnvironmentError:
return None
try:
pid = int(pid)
except ValueError:
return -1
return pid
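# Usage sketch (not part of tahoe_run.py): the contract documented above,
# exercised with throwaway PID files in a temporary directory.  Assumes
# allmydata is importable.
import tempfile
from allmydata.scripts.tahoe_run import get_pidfile, get_pid_from_pidfile

demo_basedir = tempfile.mkdtemp()
demo_pidfile = get_pidfile(demo_basedir)      # <basedir>/twistd.pid
print(get_pid_from_pidfile(demo_pidfile))     # None: no PID file yet
with open(demo_pidfile, "w") as f:
    f.write("not-a-number")
print(get_pid_from_pidfile(demo_pidfile))     # -1: PID file present but invalid
with open(demo_pidfile, "w") as f:
    f.write("4321\n")
print(get_pid_from_pidfile(demo_pidfile))     # 4321: the stored PID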
def identify_node_type(basedir):
"""
:return unicode: None or one of: 'client' or 'introducer'.
"""
tac = u''
try:
for fn in listdir_unicode(basedir):
if fn.endswith(u".tac"):
tac = fn
break
except OSError:
return None
for t in (u"client", u"introducer"):
if t in tac:
return t
return None
class RunOptions(BasedirOptions):
subcommand_name = "run"
def postOptions(self):
self.twistd_args += ("--nodaemon",)
optParameters = [
("basedir", "C", None,
"Specify which Tahoe base directory should be used."
" This has the same effect as the global --node-directory option."
" [default: %s]" % quote_local_unicode_path(_default_nodedir)),
]
def parseArgs(self, basedir=None, *twistd_args):
# This can't handle e.g. 'tahoe run --reactor=foo', since
# '--reactor=foo' looks like an option to the tahoe subcommand, not to
# twistd. So you can either use 'tahoe run' or 'tahoe run NODEDIR
# --TWISTD-OPTIONS'. Note that 'tahoe --node-directory=NODEDIR run
# --TWISTD-OPTIONS' also isn't allowed, unfortunately.
BasedirOptions.parseArgs(self, basedir)
self.twistd_args = twistd_args
def getSynopsis(self):
return ("Usage: %s [global-options] %s [options]"
" [NODEDIR [twistd-options]]"
% (self.command_name, self.subcommand_name))
def getUsage(self, width=None):
t = BasedirOptions.getUsage(self, width) + "\n"
twistd_options = str(MyTwistdConfig()).partition("\n")[2].partition("\n\n")[0]
t += twistd_options.replace("Options:", "twistd-options:", 1)
t += """
Note that if any twistd-options are used, NODEDIR must be specified explicitly
(not by default or using -C/--basedir or -d/--node-directory), and followed by
the twistd-options.
"""
return t
class MyTwistdConfig(twistd.ServerOptions):
subCommands = [("DaemonizeTahoeNode", None, usage.Options, "node")]
stderr = sys.stderr
class DaemonizeTheRealService(Service, HookMixin):
"""
this HookMixin should really be a helper; our hooks:
- 'running': triggered when startup has completed; it triggers
with None if successful or a Failure otherwise.
"""
stderr = sys.stderr
def __init__(self, nodetype, basedir, options):
super(DaemonizeTheRealService, self).__init__()
self.nodetype = nodetype
self.basedir = basedir
# setup for HookMixin
self._hooks = {
"running": None,
}
self.stderr = options.parent.stderr
def startService(self):
def start():
node_to_instance = {
u"client": lambda: maybeDeferred(namedAny("allmydata.client.create_client"), self.basedir),
u"introducer": lambda: maybeDeferred(namedAny("allmydata.introducer.server.create_introducer"), self.basedir),
}
try:
service_factory = node_to_instance[self.nodetype]
except KeyError:
raise ValueError("unknown nodetype %s" % self.nodetype)
def handle_config_error(reason):
if reason.check(UnknownConfigError):
self.stderr.write("\nConfiguration error:\n{}\n\n".format(reason.value))
elif reason.check(PortAssignmentRequired):
self.stderr.write("\ntub.port cannot be 0: you must choose.\n\n")
elif reason.check(PrivacyError):
self.stderr.write("\n{}\n\n".format(reason.value))
else:
self.stderr.write("\nUnknown error\n")
reason.printTraceback(self.stderr)
reactor.stop()
d = service_factory()
def created(srv):
srv.setServiceParent(self.parent)
d.addCallback(created)
d.addErrback(handle_config_error)
d.addBoth(self._call_hook, 'running')
return d
from twisted.internet import reactor
reactor.callWhenRunning(start)
class DaemonizeTahoeNodePlugin(object):
tapname = "tahoenode"
def __init__(self, nodetype, basedir):
self.nodetype = nodetype
self.basedir = basedir
def makeService(self, so):
return DaemonizeTheRealService(self.nodetype, self.basedir, so)
def run(config):
"""
Runs a Tahoe-LAFS node in the foreground.
Sets up the IService instance corresponding to the type of node
that's starting and uses Twisted's twistd runner to disconnect our
process from the terminal.
"""
out = config.stdout
err = config.stderr
basedir = config['basedir']
quoted_basedir = quote_local_unicode_path(basedir)
print("'tahoe {}' in {}".format(config.subcommand_name, quoted_basedir), file=out)
if not os.path.isdir(basedir):
print("%s does not look like a directory at all" % quoted_basedir, file=err)
return 1
nodetype = identify_node_type(basedir)
if not nodetype:
print("%s is not a recognizable node directory" % quoted_basedir, file=err)
return 1
# Now prepare to turn into a twistd process. This os.chdir is the point
# of no return.
os.chdir(basedir)
twistd_args = ["--nodaemon"]
twistd_args.extend(config.twistd_args)
twistd_args.append("DaemonizeTahoeNode") # point at our DaemonizeTahoeNodePlugin
twistd_config = MyTwistdConfig()
twistd_config.stdout = out
twistd_config.stderr = err
try:
twistd_config.parseOptions(twistd_args)
except usage.error as ue:
# these arguments were unsuitable for 'twistd'
print(config, file=err)
print("tahoe %s: usage error from twistd: %s\n" % (config.subcommand_name, ue), file=err)
return 1
twistd_config.loadedPlugins = {"DaemonizeTahoeNode": DaemonizeTahoeNodePlugin(nodetype, basedir)}
# handle invalid PID file (twistd might not start otherwise)
pidfile = get_pidfile(basedir)
if get_pid_from_pidfile(pidfile) == -1:
print("found invalid PID file in %s - deleting it" % basedir, file=err)
os.remove(pidfile)
# We always pass --nodaemon so twistd.runApp does not daemonize.
print("running node in %s" % (quoted_basedir,), file=out)
twistd.runApp(twistd_config)
return 0
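# Usage sketch (not part of tahoe_run.py), modelled on the updated CLI tests
# further below: drive run() directly against a throwaway client node
# directory, patching out twistd and os.chdir so nothing actually starts.
# Assumes allmydata is importable.
import tempfile
from io import StringIO
from unittest.mock import patch
from twisted.python.filepath import FilePath
from allmydata.scripts import tahoe_run

demo_basedir = FilePath(tempfile.mkdtemp())
demo_basedir.child(u"tahoe-client.tac").setContent(b"")  # marks it as a client node

demo_config = tahoe_run.RunOptions()
demo_config.stdout, demo_config.stderr = StringIO(), StringIO()
demo_config['basedir'] = demo_basedir.path
demo_config.twistd_args = []

with patch('allmydata.scripts.tahoe_run.twistd'), \
     patch('allmydata.scripts.tahoe_run.os.chdir'):
    exit_code = tahoe_run.run(demo_config)
print(exit_code)                          # 0 on success
print(demo_config.stdout.getvalue())      # "'tahoe run' in <basedir>" etc.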


@ -1,152 +0,0 @@
from __future__ import print_function
import os
import io
import sys
import time
import subprocess
from os.path import join, exists
from allmydata.scripts.common import BasedirOptions
from allmydata.scripts.default_nodedir import _default_nodedir
from allmydata.util.encodingutil import quote_local_unicode_path
from .run_common import MyTwistdConfig, identify_node_type
class StartOptions(BasedirOptions):
subcommand_name = "start"
optParameters = [
("basedir", "C", None,
"Specify which Tahoe base directory should be used."
" This has the same effect as the global --node-directory option."
" [default: %s]" % quote_local_unicode_path(_default_nodedir)),
]
def parseArgs(self, basedir=None, *twistd_args):
# This can't handle e.g. 'tahoe start --nodaemon', since '--nodaemon'
# looks like an option to the tahoe subcommand, not to twistd. So you
# can either use 'tahoe start' or 'tahoe start NODEDIR
# --TWISTD-OPTIONS'. Note that 'tahoe --node-directory=NODEDIR start
# --TWISTD-OPTIONS' also isn't allowed, unfortunately.
BasedirOptions.parseArgs(self, basedir)
self.twistd_args = twistd_args
def getSynopsis(self):
return ("Usage: %s [global-options] %s [options]"
" [NODEDIR [twistd-options]]"
% (self.command_name, self.subcommand_name))
def getUsage(self, width=None):
t = BasedirOptions.getUsage(self, width) + "\n"
twistd_options = str(MyTwistdConfig()).partition("\n")[2].partition("\n\n")[0]
t += twistd_options.replace("Options:", "twistd-options:", 1)
t += """
Note that if any twistd-options are used, NODEDIR must be specified explicitly
(not by default or using -C/--basedir or -d/--node-directory), and followed by
the twistd-options.
"""
return t
def start(config):
"""
Start a tahoe node (daemonize it and confirm startup)
We run 'tahoe daemonize' with all the options given to 'tahoe
start' and then watch the log files for the correct text to appear
(e.g. "introducer started"). If that doesn't happen within a few
seconds, an error is printed along with all collected logs.
"""
print("'tahoe start' is deprecated; see 'tahoe run'")
out = config.stdout
err = config.stderr
basedir = config['basedir']
quoted_basedir = quote_local_unicode_path(basedir)
print("STARTING", quoted_basedir, file=out)
if not os.path.isdir(basedir):
print("%s does not look like a directory at all" % quoted_basedir, file=err)
return 1
nodetype = identify_node_type(basedir)
if not nodetype:
print("%s is not a recognizable node directory" % quoted_basedir, file=err)
return 1
# "tahoe start" attempts to monitor the logs for successful
# startup -- but we can't always do that.
can_monitor_logs = False
if (nodetype in (u"client", u"introducer")
and "--nodaemon" not in config.twistd_args
and "--syslog" not in config.twistd_args
and "--logfile" not in config.twistd_args):
can_monitor_logs = True
if "--help" in config.twistd_args:
return 0
if not can_monitor_logs:
print("Custom logging options; can't monitor logs for proper startup messages", file=out)
return 1
# before we spawn tahoe, we check if "the log file" exists or not,
# and if so remember how big it is -- essentially, we're doing
# "tail -f" to see what "this" incarnation of "tahoe daemonize"
# spews forth.
starting_offset = 0
log_fname = join(basedir, 'logs', 'twistd.log')
if exists(log_fname):
with open(log_fname, 'r') as f:
f.seek(0, 2)
starting_offset = f.tell()
# spawn tahoe. Note that since this daemonizes, it should return
# "pretty fast" and with a zero return-code, or else something
# Very Bad has happened.
try:
args = [sys.executable] if not getattr(sys, 'frozen', False) else []
for i, arg in enumerate(sys.argv):
if arg in ['start', 'restart']:
args.append('daemonize')
else:
args.append(arg)
subprocess.check_call(args)
except subprocess.CalledProcessError as e:
return e.returncode
# now, we have to determine if tahoe has actually started up
# successfully or not. so, we start sucking up log files and
# looking for "the magic string", which depends on the node type.
magic_string = u'{} running'.format(nodetype)
with io.open(log_fname, 'r') as f:
f.seek(starting_offset)
collected = u''
overall_start = time.time()
while time.time() - overall_start < 60:
this_start = time.time()
while time.time() - this_start < 5:
collected += f.read()
if magic_string in collected:
if not config.parent['quiet']:
print("Node has started successfully", file=out)
return 0
if 'Traceback ' in collected:
print("Error starting node; see '{}' for more:\n\n{}".format(
log_fname,
collected,
), file=err)
return 1
time.sleep(0.1)
print("Still waiting up to {}s for node startup".format(
60 - int(time.time() - overall_start)
), file=out)
print("Something has gone wrong starting the node.", file=out)
print("Logs are available in '{}'".format(log_fname), file=out)
print("Collected for this run:", file=out)
print(collected, file=out)
return 1


@ -1,85 +0,0 @@
from __future__ import print_function
import os
import time
import signal
from allmydata.scripts.common import BasedirOptions
from allmydata.util.encodingutil import quote_local_unicode_path
from .run_common import get_pidfile, get_pid_from_pidfile
COULD_NOT_STOP = 2
class StopOptions(BasedirOptions):
def parseArgs(self, basedir=None):
BasedirOptions.parseArgs(self, basedir)
def getSynopsis(self):
return ("Usage: %s [global-options] stop [options] [NODEDIR]"
% (self.command_name,))
def stop(config):
print("'tahoe stop' is deprecated; see 'tahoe run'")
out = config.stdout
err = config.stderr
basedir = config['basedir']
quoted_basedir = quote_local_unicode_path(basedir)
print("STOPPING", quoted_basedir, file=out)
pidfile = get_pidfile(basedir)
pid = get_pid_from_pidfile(pidfile)
if pid is None:
print("%s does not look like a running node directory (no twistd.pid)" % quoted_basedir, file=err)
# we define rc=2 to mean "nothing is running, but it wasn't me who
# stopped it"
return COULD_NOT_STOP
elif pid == -1:
print("%s contains an invalid PID file" % basedir, file=err)
# we define rc=2 to mean "nothing is running, but it wasn't me who
# stopped it"
return COULD_NOT_STOP
# kill it hard (SIGKILL), delete the twistd.pid file, then wait for the
# process itself to go away. If it hasn't gone away after 20 seconds, warn
# the user but keep waiting until they give up.
try:
os.kill(pid, signal.SIGKILL)
except OSError as oserr:
if oserr.errno == 3:
print(oserr.strerror)
# the process didn't exist, so wipe the pid file
os.remove(pidfile)
return COULD_NOT_STOP
else:
raise
try:
os.remove(pidfile)
except EnvironmentError:
pass
start = time.time()
time.sleep(0.1)
wait = 40
first_time = True
while True:
# poll once per second until we see the process is no longer running
try:
os.kill(pid, 0)
except OSError:
print("process %d is dead" % pid, file=out)
return
wait -= 1
if wait < 0:
if first_time:
print("It looks like pid %d is still running "
"after %d seconds" % (pid,
(time.time() - start)), file=err)
print("I will keep watching it until you interrupt me.", file=err)
wait = 10
first_time = False
else:
print("pid %d still running after %d seconds" % \
(pid, (time.time() - start)), file=err)
wait = 10
time.sleep(1)
# control never reaches here: no timeout


@ -16,22 +16,19 @@ that this script does not import anything from tahoe directly, so it doesn't
matter what its PYTHONPATH is, as long as the bin/tahoe that it uses is
functional.
This script expects that the client node will not be running when the script
starts, but it will forcibly shut down the node just to be sure. It will shut
down the node after the test finishes.
This script expects the client node to be running already.
To set up the client node, do the following:
tahoe create-client DIR
populate DIR/introducer.furl
tahoe start DIR
tahoe add-alias -d DIR testgrid `tahoe mkdir -d DIR`
pick a 10kB-ish test file, compute its md5sum
tahoe put -d DIR FILE testgrid:old.MD5SUM
tahoe put -d DIR FILE testgrid:recent.MD5SUM
tahoe put -d DIR FILE testgrid:recentdir/recent.MD5SUM
echo "" | tahoe put -d DIR --mutable testgrid:log
echo "" | tahoe put -d DIR --mutable testgrid:recentlog
tahoe create-client --introducer=INTRODUCER_FURL DIR
tahoe run DIR
tahoe -d DIR create-alias testgrid
# pick a 10kB-ish test file, compute its md5sum
tahoe -d DIR put FILE testgrid:old.MD5SUM
tahoe -d DIR put FILE testgrid:recent.MD5SUM
tahoe -d DIR put FILE testgrid:recentdir/recent.MD5SUM
echo "" | tahoe -d DIR put --mutable - testgrid:log
echo "" | tahoe -d DIR put --mutable - testgrid:recentlog
This script will perform the following steps (the kind of compatibility that
is being tested is in [brackets]):
@ -52,7 +49,6 @@ is being tested is in [brackets]):
This script will also keep track of speeds and latencies and will write them
in a machine-readable logfile.
"""
import time, subprocess, md5, os.path, random
@ -104,26 +100,13 @@ class GridTester(object):
def cli(self, cmd, *args, **kwargs):
print("tahoe", cmd, " ".join(args))
stdout, stderr = self.command(self.tahoe, cmd, "-d", self.nodedir,
stdout, stderr = self.command(self.tahoe, "-d", self.nodedir, cmd,
*args, **kwargs)
if not kwargs.get("ignore_stderr", False) and stderr != "":
raise CommandFailed("command '%s' had stderr: %s" % (" ".join(args),
stderr))
return stdout
def stop_old_node(self):
print("tahoe stop", self.nodedir, "(force)")
self.command(self.tahoe, "stop", self.nodedir, expected_rc=None)
def start_node(self):
print("tahoe start", self.nodedir)
self.command(self.tahoe, "start", self.nodedir)
time.sleep(5)
def stop_node(self):
print("tahoe stop", self.nodedir)
self.command(self.tahoe, "stop", self.nodedir)
def read_and_check(self, f):
expected_md5_s = f[f.find(".")+1:]
out = self.cli("get", "testgrid:" + f)
@ -204,19 +187,11 @@ class GridTester(object):
fn = prefix + "." + md5sum
return fn, data
def run(self):
self.stop_old_node()
self.start_node()
try:
self.do_test()
finally:
self.stop_node()
def main():
config = GridTesterOptions()
config.parseOptions()
gt = GridTester(config)
gt.run()
gt.do_test()
if __name__ == "__main__":
main()


@ -20,14 +20,14 @@ from allmydata.scripts.common_http import socket_error
import allmydata.scripts.common_http
# Test that the scripts can be imported.
from allmydata.scripts import create_node, debug, tahoe_start, tahoe_restart, \
from allmydata.scripts import create_node, debug, \
tahoe_add_alias, tahoe_backup, tahoe_check, tahoe_cp, tahoe_get, tahoe_ls, \
tahoe_manifest, tahoe_mkdir, tahoe_mv, tahoe_put, tahoe_unlink, tahoe_webopen, \
tahoe_stop, tahoe_daemonize, tahoe_run
_hush_pyflakes = [create_node, debug, tahoe_start, tahoe_restart, tahoe_stop,
tahoe_run
_hush_pyflakes = [create_node, debug,
tahoe_add_alias, tahoe_backup, tahoe_check, tahoe_cp, tahoe_get, tahoe_ls,
tahoe_manifest, tahoe_mkdir, tahoe_mv, tahoe_put, tahoe_unlink, tahoe_webopen,
tahoe_daemonize, tahoe_run]
tahoe_run]
from allmydata.scripts import common
from allmydata.scripts.common import DEFAULT_ALIAS, get_aliases, get_alias, \
@ -626,18 +626,6 @@ class Help(unittest.TestCase):
help = str(cli.ListAliasesOptions())
self.failUnlessIn("[options]", help)
def test_start(self):
help = str(tahoe_start.StartOptions())
self.failUnlessIn("[options] [NODEDIR [twistd-options]]", help)
def test_stop(self):
help = str(tahoe_stop.StopOptions())
self.failUnlessIn("[options] [NODEDIR]", help)
def test_restart(self):
help = str(tahoe_restart.RestartOptions())
self.failUnlessIn("[options] [NODEDIR [twistd-options]]", help)
def test_run(self):
help = str(tahoe_run.RunOptions())
self.failUnlessIn("[options] [NODEDIR [twistd-options]]", help)
@ -1269,82 +1257,69 @@ class Options(ReallyEqualMixin, unittest.TestCase):
self.failUnlessIn(allmydata.__full_version__, stdout.getvalue())
# but "tahoe SUBCOMMAND --version" should be rejected
self.failUnlessRaises(usage.UsageError, self.parse,
["start", "--version"])
["run", "--version"])
self.failUnlessRaises(usage.UsageError, self.parse,
["start", "--version-and-path"])
["run", "--version-and-path"])
def test_quiet(self):
# accepted as an overall option, but not on subcommands
o = self.parse(["--quiet", "start"])
o = self.parse(["--quiet", "run"])
self.failUnless(o.parent["quiet"])
self.failUnlessRaises(usage.UsageError, self.parse,
["start", "--quiet"])
["run", "--quiet"])
def test_basedir(self):
# accept a --node-directory option before the verb, or a --basedir
# option after, or a basedir argument after, but none in the wrong
# place, and not more than one of the three.
o = self.parse(["start"])
# Here is an option twistd recognizes but we don't. Depending on
# where it appears, it should be passed through to twistd. It doesn't
# really matter which option it is (it doesn't even have to be a valid
# option). This test does not actually run any of the twistd argument
# parsing.
some_twistd_option = "--spew"
o = self.parse(["run"])
self.failUnlessReallyEqual(o["basedir"], os.path.join(fileutil.abspath_expanduser_unicode(u"~"),
u".tahoe"))
o = self.parse(["start", "here"])
o = self.parse(["run", "here"])
self.failUnlessReallyEqual(o["basedir"], fileutil.abspath_expanduser_unicode(u"here"))
o = self.parse(["start", "--basedir", "there"])
o = self.parse(["run", "--basedir", "there"])
self.failUnlessReallyEqual(o["basedir"], fileutil.abspath_expanduser_unicode(u"there"))
o = self.parse(["--node-directory", "there", "start"])
o = self.parse(["--node-directory", "there", "run"])
self.failUnlessReallyEqual(o["basedir"], fileutil.abspath_expanduser_unicode(u"there"))
o = self.parse(["start", "here", "--nodaemon"])
o = self.parse(["run", "here", some_twistd_option])
self.failUnlessReallyEqual(o["basedir"], fileutil.abspath_expanduser_unicode(u"here"))
self.failUnlessRaises(usage.UsageError, self.parse,
["--basedir", "there", "start"])
["--basedir", "there", "run"])
self.failUnlessRaises(usage.UsageError, self.parse,
["start", "--node-directory", "there"])
["run", "--node-directory", "there"])
self.failUnlessRaises(usage.UsageError, self.parse,
["--node-directory=there",
"start", "--basedir=here"])
"run", "--basedir=here"])
self.failUnlessRaises(usage.UsageError, self.parse,
["start", "--basedir=here", "anywhere"])
["run", "--basedir=here", "anywhere"])
self.failUnlessRaises(usage.UsageError, self.parse,
["--node-directory=there",
"start", "anywhere"])
"run", "anywhere"])
self.failUnlessRaises(usage.UsageError, self.parse,
["--node-directory=there",
"start", "--basedir=here", "anywhere"])
"run", "--basedir=here", "anywhere"])
self.failUnlessRaises(usage.UsageError, self.parse,
["--node-directory=there", "start", "--nodaemon"])
["--node-directory=there", "run", some_twistd_option])
self.failUnlessRaises(usage.UsageError, self.parse,
["start", "--basedir=here", "--nodaemon"])
["run", "--basedir=here", some_twistd_option])
class Stop(unittest.TestCase):
def test_non_numeric_pid(self):
"""
If the pidfile exists but does not contain a numeric value, a complaint to
this effect is written to stderr and the non-success result is
returned.
"""
basedir = FilePath(self.mktemp().decode("ascii"))
basedir.makedirs()
basedir.child(u"twistd.pid").setContent(b"foo")
class Run(unittest.TestCase):
config = tahoe_stop.StopOptions()
config.stdout = StringIO()
config.stderr = StringIO()
config['basedir'] = basedir.path
result_code = tahoe_stop.stop(config)
self.assertEqual(2, result_code)
self.assertIn("invalid PID file", config.stderr.getvalue())
class Start(unittest.TestCase):
@patch('allmydata.scripts.run_common.os.chdir')
@patch('allmydata.scripts.run_common.twistd')
@patch('allmydata.scripts.tahoe_run.os.chdir')
@patch('allmydata.scripts.tahoe_run.twistd')
def test_non_numeric_pid(self, mock_twistd, chdir):
"""
If the pidfile exists but does not contain a numeric value, a complaint to
@ -1355,13 +1330,13 @@ class Start(unittest.TestCase):
basedir.child(u"twistd.pid").setContent(b"foo")
basedir.child(u"tahoe-client.tac").setContent(b"")
config = tahoe_daemonize.DaemonizeOptions()
config = tahoe_run.RunOptions()
config.stdout = StringIO()
config.stderr = StringIO()
config['basedir'] = basedir.path
config.twistd_args = []
result_code = tahoe_daemonize.daemonize(config)
result_code = tahoe_run.run(config)
self.assertIn("invalid PID file", config.stderr.getvalue())
self.assertTrue(len(mock_twistd.mock_calls), 1)
self.assertEqual(mock_twistd.mock_calls[0][0], 'runApp')


@ -1,202 +0,0 @@
import os
from io import (
BytesIO,
)
from os.path import dirname, join
from mock import patch, Mock
from six.moves import StringIO
from sys import getfilesystemencoding
from twisted.trial import unittest
from allmydata.scripts import runner
from allmydata.scripts.run_common import (
identify_node_type,
DaemonizeTahoeNodePlugin,
MyTwistdConfig,
)
from allmydata.scripts.tahoe_daemonize import (
DaemonizeOptions,
)
class Util(unittest.TestCase):
def setUp(self):
self.twistd_options = MyTwistdConfig()
self.twistd_options.parseOptions(["DaemonizeTahoeNode"])
self.options = self.twistd_options.subOptions
def test_node_type_nothing(self):
tmpdir = self.mktemp()
base = dirname(tmpdir).decode(getfilesystemencoding())
t = identify_node_type(base)
self.assertIs(None, t)
def test_node_type_introducer(self):
tmpdir = self.mktemp()
base = dirname(tmpdir).decode(getfilesystemencoding())
with open(join(dirname(tmpdir), 'introducer.tac'), 'w') as f:
f.write("test placeholder")
t = identify_node_type(base)
self.assertEqual(u"introducer", t)
def test_daemonize(self):
tmpdir = self.mktemp()
plug = DaemonizeTahoeNodePlugin('client', tmpdir)
with patch('twisted.internet.reactor') as r:
def call(fn, *args, **kw):
fn()
r.stop = lambda: None
r.callWhenRunning = call
service = plug.makeService(self.options)
service.parent = Mock()
service.startService()
self.assertTrue(service is not None)
def test_daemonize_no_keygen(self):
tmpdir = self.mktemp()
stderr = BytesIO()
plug = DaemonizeTahoeNodePlugin('key-generator', tmpdir)
with patch('twisted.internet.reactor') as r:
def call(fn, *args, **kw):
d = fn()
d.addErrback(lambda _: None) # ignore the error we'll trigger
r.callWhenRunning = call
service = plug.makeService(self.options)
service.stderr = stderr
service.parent = Mock()
# we'll raise ValueError because there's no key-generator
# .. BUT we do this in an async function called via
# "callWhenRunning" .. hence using a hook
d = service.set_hook('running')
service.startService()
def done(f):
self.assertIn(
"key-generator support removed",
stderr.getvalue(),
)
return None
d.addBoth(done)
return d
def test_daemonize_unknown_nodetype(self):
tmpdir = self.mktemp()
plug = DaemonizeTahoeNodePlugin('an-unknown-service', tmpdir)
with patch('twisted.internet.reactor') as r:
def call(fn, *args, **kw):
fn()
r.stop = lambda: None
r.callWhenRunning = call
service = plug.makeService(self.options)
service.parent = Mock()
with self.assertRaises(ValueError) as ctx:
service.startService()
self.assertIn(
"unknown nodetype",
str(ctx.exception)
)
def test_daemonize_options(self):
parent = runner.Options()
opts = DaemonizeOptions()
opts.parent = parent
opts.parseArgs()
# just gratuitous coverage, ensuring we don't blow up on
# these methods.
opts.getSynopsis()
opts.getUsage()
class RunDaemonizeTests(unittest.TestCase):
def setUp(self):
# no test should change our working directory
self._working = os.path.abspath('.')
d = super(RunDaemonizeTests, self).setUp()
self._reactor = patch('twisted.internet.reactor')
self._reactor.stop = lambda: None
self._twistd = patch('allmydata.scripts.run_common.twistd')
self.node_dir = self.mktemp()
os.mkdir(self.node_dir)
for cm in [self._reactor, self._twistd]:
cm.__enter__()
return d
def tearDown(self):
d = super(RunDaemonizeTests, self).tearDown()
for cm in [self._reactor, self._twistd]:
cm.__exit__(None, None, None)
# Note: if you raise an exception (e.g. via self.assertEqual
# or raise RuntimeError) it is apparently just ignored and the
# test passes anyway...
if self._working != os.path.abspath('.'):
print("WARNING: a test just changed the working dir; putting it back")
os.chdir(self._working)
return d
def _placeholder_nodetype(self, nodetype):
fname = join(self.node_dir, '{}.tac'.format(nodetype))
with open(fname, 'w') as f:
f.write("test placeholder")
def test_daemonize_defaults(self):
self._placeholder_nodetype('introducer')
config = runner.parse_or_exit_with_explanation([
# have to do this so the tests don't muck around in
# ~/.tahoe (the default)
'--node-directory', self.node_dir,
'daemonize',
])
i, o, e = StringIO(), StringIO(), StringIO()
with patch('allmydata.scripts.runner.sys') as s:
exit_code = [None]
def _exit(code):
exit_code[0] = code
s.exit = _exit
runner.dispatch(config, i, o, e)
self.assertEqual(0, exit_code[0])
def test_daemonize_wrong_nodetype(self):
self._placeholder_nodetype('invalid')
config = runner.parse_or_exit_with_explanation([
# have to do this so the tests don't muck around in
# ~/.tahoe (the default)
'--node-directory', self.node_dir,
'daemonize',
])
i, o, e = StringIO(), StringIO(), StringIO()
with patch('allmydata.scripts.runner.sys') as s:
exit_code = [None]
def _exit(code):
exit_code[0] = code
s.exit = _exit
runner.dispatch(config, i, o, e)
self.assertEqual(0, exit_code[0])
def test_daemonize_run(self):
self._placeholder_nodetype('client')
config = runner.parse_or_exit_with_explanation([
# have to do this so the tests don't muck around in
# ~/.tahoe (the default)
'--node-directory', self.node_dir,
'daemonize',
])
with patch('allmydata.scripts.runner.sys') as s:
exit_code = [None]
def _exit(code):
exit_code[0] = code
s.exit = _exit
from allmydata.scripts.tahoe_daemonize import daemonize
daemonize(config)


@ -1,273 +0,0 @@
import os
import shutil
import subprocess
from os.path import join
from mock import patch
from six.moves import StringIO
from functools import partial
from twisted.trial import unittest
from allmydata.scripts import runner
#@patch('twisted.internet.reactor')
@patch('allmydata.scripts.tahoe_start.subprocess')
class RunStartTests(unittest.TestCase):
def setUp(self):
d = super(RunStartTests, self).setUp()
self.node_dir = self.mktemp()
os.mkdir(self.node_dir)
return d
def _placeholder_nodetype(self, nodetype):
fname = join(self.node_dir, '{}.tac'.format(nodetype))
with open(fname, 'w') as f:
f.write("test placeholder")
def _pid_file(self, pid):
fname = join(self.node_dir, 'twistd.pid')
with open(fname, 'w') as f:
f.write(u"{}\n".format(pid))
def _logs(self, logs):
os.mkdir(join(self.node_dir, 'logs'))
fname = join(self.node_dir, 'logs', 'twistd.log')
with open(fname, 'w') as f:
f.write(logs)
def test_start_defaults(self, _subprocess):
self._placeholder_nodetype('client')
self._pid_file(1234)
self._logs('one log\ntwo log\nred log\nblue log\n')
config = runner.parse_or_exit_with_explanation([
# have to do this so the tests don't muck around in
# ~/.tahoe (the default)
'--node-directory', self.node_dir,
'start',
])
i, o, e = StringIO(), StringIO(), StringIO()
try:
with patch('allmydata.scripts.tahoe_start.os'):
with patch('allmydata.scripts.runner.sys') as s:
exit_code = [None]
def _exit(code):
exit_code[0] = code
s.exit = _exit
def launch(*args, **kw):
with open(join(self.node_dir, 'logs', 'twistd.log'), 'a') as f:
f.write('client running\n') # the "magic" line that signals successful startup
_subprocess.check_call = launch
runner.dispatch(config, i, o, e)
except Exception:
pass
self.assertEqual([0], exit_code)
self.assertTrue('Node has started' in o.getvalue())
def test_start_fails(self, _subprocess):
self._placeholder_nodetype('client')
self._logs('existing log line\n')
config = runner.parse_or_exit_with_explanation([
# have to do this so the tests don't muck around in
# ~/.tahoe (the default)
'--node-directory', self.node_dir,
'start',
])
i, o, e = StringIO(), StringIO(), StringIO()
with patch('allmydata.scripts.tahoe_start.time') as t:
with patch('allmydata.scripts.runner.sys') as s:
exit_code = [None]
def _exit(code):
exit_code[0] = code
s.exit = _exit
thetime = [0]
def _time():
thetime[0] += 0.1
return thetime[0]
t.time = _time
def launch(*args, **kw):
with open(join(self.node_dir, 'logs', 'twistd.log'), 'a') as f:
f.write('a new log line\n')
_subprocess.check_call = launch
runner.dispatch(config, i, o, e)
# should print out the collected logs and an error-code
self.assertTrue("a new log line" in o.getvalue())
self.assertEqual([1], exit_code)
def test_start_subprocess_fails(self, _subprocess):
self._placeholder_nodetype('client')
self._logs('existing log line\n')
config = runner.parse_or_exit_with_explanation([
# have to do this so the tests don't muck around in
# ~/.tahoe (the default)
'--node-directory', self.node_dir,
'start',
])
i, o, e = StringIO(), StringIO(), StringIO()
with patch('allmydata.scripts.tahoe_start.time'):
with patch('allmydata.scripts.runner.sys') as s:
# undo patch for the exception-class
_subprocess.CalledProcessError = subprocess.CalledProcessError
exit_code = [None]
def _exit(code):
exit_code[0] = code
s.exit = _exit
def launch(*args, **kw):
raise subprocess.CalledProcessError(42, "tahoe")
_subprocess.check_call = launch
runner.dispatch(config, i, o, e)
# should get our "odd" error-code
self.assertEqual([42], exit_code)
def test_start_help(self, _subprocess):
self._placeholder_nodetype('client')
std = StringIO()
with patch('sys.stdout') as stdo:
stdo.write = std.write
try:
runner.parse_or_exit_with_explanation([
# have to do this so the tests don't muck around in
# ~/.tahoe (the default)
'--node-directory', self.node_dir,
'start',
'--help',
], stdout=std)
self.fail("Should get exit")
except SystemExit as e:
print(e)
self.assertIn(
"Usage:",
std.getvalue()
)
def test_start_unknown_node_type(self, _subprocess):
self._placeholder_nodetype('bogus')
config = runner.parse_or_exit_with_explanation([
# have to do this so the tests don't muck around in
# ~/.tahoe (the default)
'--node-directory', self.node_dir,
'start',
])
i, o, e = StringIO(), StringIO(), StringIO()
with patch('allmydata.scripts.runner.sys') as s:
exit_code = [None]
def _exit(code):
exit_code[0] = code
s.exit = _exit
runner.dispatch(config, i, o, e)
# should print out the collected logs and an error-code
self.assertIn(
"is not a recognizable node directory",
e.getvalue()
)
self.assertEqual([1], exit_code)
def test_start_nodedir_not_dir(self, _subprocess):
shutil.rmtree(self.node_dir)
assert not os.path.isdir(self.node_dir)
config = runner.parse_or_exit_with_explanation([
# have to do this so the tests don't muck around in
# ~/.tahoe (the default)
'--node-directory', self.node_dir,
'start',
])
i, o, e = StringIO(), StringIO(), StringIO()
with patch('allmydata.scripts.runner.sys') as s:
exit_code = [None]
def _exit(code):
exit_code[0] = code
s.exit = _exit
runner.dispatch(config, i, o, e)
# should print out the collected logs and an error-code
self.assertIn(
"does not look like a directory at all",
e.getvalue()
)
self.assertEqual([1], exit_code)
class RunTests(unittest.TestCase):
"""
Tests confirming end-user behavior of CLI commands
"""
def setUp(self):
d = super(RunTests, self).setUp()
self.addCleanup(partial(os.chdir, os.getcwd()))
self.node_dir = self.mktemp()
os.mkdir(self.node_dir)
return d
@patch('twisted.internet.reactor')
def test_run_invalid_config(self, reactor):
"""
Configuration that's invalid should be obvious to the user
"""
def cwr(fn, *args, **kw):
fn()
def stop(*args, **kw):
stopped.append(None)
stopped = []
reactor.callWhenRunning = cwr
reactor.stop = stop
with open(os.path.join(self.node_dir, "client.tac"), "w") as f:
f.write('test')
with open(os.path.join(self.node_dir, "tahoe.cfg"), "w") as f:
f.write(
"[invalid section]\n"
"foo = bar\n"
)
config = runner.parse_or_exit_with_explanation([
# have to do this so the tests don't muck around in
# ~/.tahoe (the default)
'--node-directory', self.node_dir,
'run',
])
i, o, e = StringIO(), StringIO(), StringIO()
d = runner.dispatch(config, i, o, e)
self.assertFailure(d, SystemExit)
output = e.getvalue()
# should print out the collected logs and an error-code
self.assertIn(
"invalid section",
output,
)
self.assertIn(
"Configuration error:",
output,
)
# ensure reactor.stop was actually called
self.assertEqual([None], stopped)
return d


@ -5,7 +5,6 @@ __all__ = [
"on_stdout",
"on_stdout_and_stderr",
"on_different",
"wait_for_exit",
]
import os
@ -14,8 +13,11 @@ from errno import ENOENT
import attr
from eliot import (
log_call,
)
from twisted.internet.error import (
ProcessDone,
ProcessTerminated,
ProcessExitedAlready,
)
@ -25,9 +27,6 @@ from twisted.internet.interfaces import (
from twisted.python.filepath import (
FilePath,
)
from twisted.python.runtime import (
platform,
)
from twisted.internet.protocol import (
Protocol,
ProcessProtocol,
@ -42,11 +41,9 @@ from twisted.internet.task import (
from ..client import (
_Client,
)
from ..scripts.tahoe_stop import (
COULD_NOT_STOP,
)
from ..util.eliotutil import (
inline_callbacks,
log_call_deferred,
)
class Expect(Protocol, object):
@ -156,6 +153,7 @@ class CLINodeAPI(object):
env=os.environ,
)
@log_call(action_type="test:cli-api:run", include_args=["extra_tahoe_args"])
def run(self, protocol, extra_tahoe_args=()):
"""
Start the node running.
@ -176,28 +174,21 @@ class CLINodeAPI(object):
if ENOENT != e.errno:
raise
def stop(self, protocol):
self._execute(
protocol,
[u"stop", self.basedir.asTextMode().path],
)
@log_call_deferred(action_type="test:cli-api:stop")
def stop(self):
return self.stop_and_wait()
@log_call_deferred(action_type="test:cli-api:stop-and-wait")
@inline_callbacks
def stop_and_wait(self):
if platform.isWindows():
# On Windows there is no PID file and no "tahoe stop".
if self.process is not None:
while True:
try:
self.process.signalProcess("TERM")
except ProcessExitedAlready:
break
else:
yield deferLater(self.reactor, 0.1, lambda: None)
else:
protocol, ended = wait_for_exit()
self.stop(protocol)
yield ended
if self.process is not None:
while True:
try:
self.process.signalProcess("TERM")
except ProcessExitedAlready:
break
else:
yield deferLater(self.reactor, 0.1, lambda: None)
def active(self):
# By writing this file, we get two minutes before the client will
@ -208,28 +199,9 @@ class CLINodeAPI(object):
def _check_cleanup_reason(self, reason):
# Let it fail because the process has already exited.
reason.trap(ProcessTerminated)
if reason.value.exitCode != COULD_NOT_STOP:
return reason
return None
def cleanup(self):
stopping = self.stop_and_wait()
stopping.addErrback(self._check_cleanup_reason)
return stopping
class _WaitForEnd(ProcessProtocol, object):
def __init__(self, ended):
self._ended = ended
def processEnded(self, reason):
if reason.check(ProcessDone):
self._ended.callback(None)
else:
self._ended.errback(reason)
def wait_for_exit():
ended = Deferred()
protocol = _WaitForEnd(ended)
return protocol, ended


@ -1,5 +1,19 @@
"""Tests for the dirnode module."""
import six
"""Tests for the dirnode module.
Ported to Python 3.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
from past.builtins import long
from future.utils import PY2
if PY2:
# Skip list() since it results in spurious test failures
from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, object, range, str, max, min # noqa: F401
import time
import unicodedata
from zope.interface import implementer
@ -31,9 +45,6 @@ import allmydata.test.common_util as testutil
from hypothesis import given
from hypothesis.strategies import text
if six.PY3:
long = int
@implementer(IConsumer)
class MemAccum(object):
@ -48,16 +59,16 @@ class MemAccum(object):
self.data = data
self.producer.resumeProducing()
setup_py_uri = "URI:CHK:n7r3m6wmomelk4sep3kw5cvduq:os7ijw5c3maek7pg65e5254k2fzjflavtpejjyhshpsxuqzhcwwq:3:20:14861"
one_uri = "URI:LIT:n5xgk" # LIT for "one"
mut_write_uri = "URI:SSK:vfvcbdfbszyrsaxchgevhmmlii:euw4iw7bbnkrrwpzuburbhppuxhc3gwxv26f6imekhz7zyw2ojnq"
mdmf_write_uri = "URI:MDMF:x533rhbm6kiehzl5kj3s44n5ie:4gif5rhneyd763ouo5qjrgnsoa3bg43xycy4robj2rf3tvmhdl3a"
empty_litdir_uri = "URI:DIR2-LIT:"
tiny_litdir_uri = "URI:DIR2-LIT:gqytunj2onug64tufqzdcosvkjetutcjkq5gw4tvm5vwszdgnz5hgyzufqydulbshj5x2lbm" # contains one child which is itself also LIT
mut_read_uri = "URI:SSK-RO:jf6wkflosyvntwxqcdo7a54jvm:euw4iw7bbnkrrwpzuburbhppuxhc3gwxv26f6imekhz7zyw2ojnq"
mdmf_read_uri = "URI:MDMF-RO:d4cydxselputycfzkw6qgz4zv4:4gif5rhneyd763ouo5qjrgnsoa3bg43xycy4robj2rf3tvmhdl3a"
future_write_uri = "x-tahoe-crazy://I_am_from_the_future."
future_read_uri = "x-tahoe-crazy-readonly://I_am_from_the_future."
setup_py_uri = b"URI:CHK:n7r3m6wmomelk4sep3kw5cvduq:os7ijw5c3maek7pg65e5254k2fzjflavtpejjyhshpsxuqzhcwwq:3:20:14861"
one_uri = b"URI:LIT:n5xgk" # LIT for "one"
mut_write_uri = b"URI:SSK:vfvcbdfbszyrsaxchgevhmmlii:euw4iw7bbnkrrwpzuburbhppuxhc3gwxv26f6imekhz7zyw2ojnq"
mdmf_write_uri = b"URI:MDMF:x533rhbm6kiehzl5kj3s44n5ie:4gif5rhneyd763ouo5qjrgnsoa3bg43xycy4robj2rf3tvmhdl3a"
empty_litdir_uri = b"URI:DIR2-LIT:"
tiny_litdir_uri = b"URI:DIR2-LIT:gqytunj2onug64tufqzdcosvkjetutcjkq5gw4tvm5vwszdgnz5hgyzufqydulbshj5x2lbm" # contains one child which is itself also LIT
mut_read_uri = b"URI:SSK-RO:jf6wkflosyvntwxqcdo7a54jvm:euw4iw7bbnkrrwpzuburbhppuxhc3gwxv26f6imekhz7zyw2ojnq"
mdmf_read_uri = b"URI:MDMF-RO:d4cydxselputycfzkw6qgz4zv4:4gif5rhneyd763ouo5qjrgnsoa3bg43xycy4robj2rf3tvmhdl3a"
future_write_uri = b"x-tahoe-crazy://I_am_from_the_future."
future_read_uri = b"x-tahoe-crazy-readonly://I_am_from_the_future."
future_nonascii_write_uri = u"x-tahoe-even-more-crazy://I_am_from_the_future_rw_\u263A".encode('utf-8')
future_nonascii_read_uri = u"x-tahoe-even-more-crazy-readonly://I_am_from_the_future_ro_\u263A".encode('utf-8')
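# (Added note) In the Python 3 port these capability strings are handled as
# bytes throughout the tests, matching the isinstance(..., bytes) assertions
# added to UnknownNode later in this diff.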
@ -95,13 +106,13 @@ class Dirnode(GridTestMixin, unittest.TestCase,
self.failUnless(u)
cap_formats = []
if mdmf:
cap_formats = ["URI:DIR2-MDMF:",
"URI:DIR2-MDMF-RO:",
"URI:DIR2-MDMF-Verifier:"]
cap_formats = [b"URI:DIR2-MDMF:",
b"URI:DIR2-MDMF-RO:",
b"URI:DIR2-MDMF-Verifier:"]
else:
cap_formats = ["URI:DIR2:",
"URI:DIR2-RO",
"URI:DIR2-Verifier:"]
cap_formats = [b"URI:DIR2:",
b"URI:DIR2-RO",
b"URI:DIR2-Verifier:"]
rw, ro, v = cap_formats
self.failUnless(u.startswith(rw), u)
u_ro = n.get_readonly_uri()
@ -149,7 +160,7 @@ class Dirnode(GridTestMixin, unittest.TestCase,
self.failUnless(isinstance(subdir, dirnode.DirectoryNode))
self.subdir = subdir
new_v = subdir.get_verify_cap().to_string()
assert isinstance(new_v, str)
assert isinstance(new_v, bytes)
self.expected_manifest.append( ((u"subdir",), subdir.get_uri()) )
self.expected_verifycaps.add(new_v)
si = subdir.get_storage_index()
@ -182,7 +193,7 @@ class Dirnode(GridTestMixin, unittest.TestCase,
"largest-directory-children": 2,
"largest-immutable-file": 0,
}
for k,v in expected.iteritems():
for k,v in expected.items():
self.failUnlessReallyEqual(stats[k], v,
"stats[%s] was %s, not %s" %
(k, stats[k], v))
@ -272,8 +283,8 @@ class Dirnode(GridTestMixin, unittest.TestCase,
{ 'tahoe': {'linkcrtime': "bogus"}}))
d.addCallback(lambda res: n.get_metadata_for(u"c2"))
def _has_good_linkcrtime(metadata):
self.failUnless(metadata.has_key('tahoe'))
self.failUnless(metadata['tahoe'].has_key('linkcrtime'))
self.failUnless('tahoe' in metadata)
self.failUnless('linkcrtime' in metadata['tahoe'])
self.failIfEqual(metadata['tahoe']['linkcrtime'], 'bogus')
d.addCallback(_has_good_linkcrtime)
@ -423,7 +434,7 @@ class Dirnode(GridTestMixin, unittest.TestCase,
# moved on to stdlib "json" which doesn't have it either.
d.addCallback(self.stall, 0.1)
d.addCallback(lambda res: n.add_file(u"timestamps",
upload.Data("stamp me", convergence="some convergence string")))
upload.Data(b"stamp me", convergence=b"some convergence string")))
d.addCallback(self.stall, 0.1)
def _stop(res):
self._stop_timestamp = time.time()
@ -472,11 +483,11 @@ class Dirnode(GridTestMixin, unittest.TestCase,
self.failUnlessReallyEqual(set(children.keys()),
set([u"child"])))
uploadable1 = upload.Data("some data", convergence="converge")
uploadable1 = upload.Data(b"some data", convergence=b"converge")
d.addCallback(lambda res: n.add_file(u"newfile", uploadable1))
d.addCallback(lambda newnode:
self.failUnless(IImmutableFileNode.providedBy(newnode)))
uploadable2 = upload.Data("some data", convergence="stuff")
uploadable2 = upload.Data(b"some data", convergence=b"stuff")
d.addCallback(lambda res:
self.shouldFail(ExistingChildError, "add_file-no",
"child 'newfile' already exists",
@ -491,7 +502,7 @@ class Dirnode(GridTestMixin, unittest.TestCase,
d.addCallback(lambda metadata:
self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
uploadable3 = upload.Data("some data", convergence="converge")
uploadable3 = upload.Data(b"some data", convergence=b"converge")
d.addCallback(lambda res: n.add_file(u"newfile-metadata",
uploadable3,
{"key": "value"}))
@ -507,8 +518,8 @@ class Dirnode(GridTestMixin, unittest.TestCase,
def _created2(subdir2):
self.subdir2 = subdir2
# put something in the way, to make sure it gets overwritten
return subdir2.add_file(u"child", upload.Data("overwrite me",
"converge"))
return subdir2.add_file(u"child", upload.Data(b"overwrite me",
b"converge"))
d.addCallback(_created2)
d.addCallback(lambda res:
@ -666,22 +677,22 @@ class Dirnode(GridTestMixin, unittest.TestCase,
self.failUnless(fut_node.is_unknown())
self.failUnlessReallyEqual(fut_node.get_uri(), future_write_uri)
self.failUnlessReallyEqual(fut_node.get_readonly_uri(), "ro." + future_read_uri)
self.failUnlessReallyEqual(fut_node.get_readonly_uri(), b"ro." + future_read_uri)
self.failUnless(isinstance(fut_metadata, dict), fut_metadata)
self.failUnless(futna_node.is_unknown())
self.failUnlessReallyEqual(futna_node.get_uri(), future_nonascii_write_uri)
self.failUnlessReallyEqual(futna_node.get_readonly_uri(), "ro." + future_nonascii_read_uri)
self.failUnlessReallyEqual(futna_node.get_readonly_uri(), b"ro." + future_nonascii_read_uri)
self.failUnless(isinstance(futna_metadata, dict), futna_metadata)
self.failUnless(fro_node.is_unknown())
self.failUnlessReallyEqual(fro_node.get_uri(), "ro." + future_read_uri)
self.failUnlessReallyEqual(fut_node.get_readonly_uri(), "ro." + future_read_uri)
self.failUnlessReallyEqual(fro_node.get_uri(), b"ro." + future_read_uri)
self.failUnlessReallyEqual(fut_node.get_readonly_uri(), b"ro." + future_read_uri)
self.failUnless(isinstance(fro_metadata, dict), fro_metadata)
self.failUnless(frona_node.is_unknown())
self.failUnlessReallyEqual(frona_node.get_uri(), "ro." + future_nonascii_read_uri)
self.failUnlessReallyEqual(futna_node.get_readonly_uri(), "ro." + future_nonascii_read_uri)
self.failUnlessReallyEqual(frona_node.get_uri(), b"ro." + future_nonascii_read_uri)
self.failUnlessReallyEqual(futna_node.get_readonly_uri(), b"ro." + future_nonascii_read_uri)
self.failUnless(isinstance(frona_metadata, dict), frona_metadata)
self.failIf(emptylit_node.is_unknown())
@ -697,7 +708,7 @@ class Dirnode(GridTestMixin, unittest.TestCase,
set([u"short"])))
d2.addCallback(lambda ignored: tinylit_node.list())
d2.addCallback(lambda children: children[u"short"][0].read(MemAccum()))
d2.addCallback(lambda accum: self.failUnlessReallyEqual(accum.data, "The end."))
d2.addCallback(lambda accum: self.failUnlessReallyEqual(accum.data, b"The end."))
return d2
d.addCallback(_check_kids)
@ -782,7 +793,7 @@ class Dirnode(GridTestMixin, unittest.TestCase,
rep = str(dn)
self.failUnless("RO-IMM" in rep)
cap = dn.get_cap()
self.failUnlessIn("CHK", cap.to_string())
self.failUnlessIn(b"CHK", cap.to_string())
self.cap = cap
return dn.list()
d.addCallback(_created)
@ -808,13 +819,13 @@ class Dirnode(GridTestMixin, unittest.TestCase,
self.failUnlessEqual(two_metadata["metakey"], "metavalue")
self.failUnless(fut_node.is_unknown())
self.failUnlessReallyEqual(fut_node.get_uri(), "imm." + future_read_uri)
self.failUnlessReallyEqual(fut_node.get_readonly_uri(), "imm." + future_read_uri)
self.failUnlessReallyEqual(fut_node.get_uri(), b"imm." + future_read_uri)
self.failUnlessReallyEqual(fut_node.get_readonly_uri(), b"imm." + future_read_uri)
self.failUnless(isinstance(fut_metadata, dict), fut_metadata)
self.failUnless(futna_node.is_unknown())
self.failUnlessReallyEqual(futna_node.get_uri(), "imm." + future_nonascii_read_uri)
self.failUnlessReallyEqual(futna_node.get_readonly_uri(), "imm." + future_nonascii_read_uri)
self.failUnlessReallyEqual(futna_node.get_uri(), b"imm." + future_nonascii_read_uri)
self.failUnlessReallyEqual(futna_node.get_readonly_uri(), b"imm." + future_nonascii_read_uri)
self.failUnless(isinstance(futna_metadata, dict), futna_metadata)
self.failIf(emptylit_node.is_unknown())
@ -830,7 +841,7 @@ class Dirnode(GridTestMixin, unittest.TestCase,
set([u"short"])))
d2.addCallback(lambda ignored: tinylit_node.list())
d2.addCallback(lambda children: children[u"short"][0].read(MemAccum()))
d2.addCallback(lambda accum: self.failUnlessReallyEqual(accum.data, "The end."))
d2.addCallback(lambda accum: self.failUnlessReallyEqual(accum.data, b"The end."))
return d2
d.addCallback(_check_kids)
@ -894,8 +905,8 @@ class Dirnode(GridTestMixin, unittest.TestCase,
rep = str(dn)
self.failUnless("RO-IMM" in rep)
cap = dn.get_cap()
self.failUnlessIn("LIT", cap.to_string())
self.failUnlessReallyEqual(cap.to_string(), "URI:DIR2-LIT:")
self.failUnlessIn(b"LIT", cap.to_string())
self.failUnlessReallyEqual(cap.to_string(), b"URI:DIR2-LIT:")
self.cap = cap
return dn.list()
d.addCallback(_created_empty)
@ -912,13 +923,13 @@ class Dirnode(GridTestMixin, unittest.TestCase,
rep = str(dn)
self.failUnless("RO-IMM" in rep)
cap = dn.get_cap()
self.failUnlessIn("LIT", cap.to_string())
self.failUnlessIn(b"LIT", cap.to_string())
self.failUnlessReallyEqual(cap.to_string(),
"URI:DIR2-LIT:gi4tumj2n4wdcmz2kvjesosmjfkdu3rvpbtwwlbqhiwdeot3puwcy")
b"URI:DIR2-LIT:gi4tumj2n4wdcmz2kvjesosmjfkdu3rvpbtwwlbqhiwdeot3puwcy")
self.cap = cap
return dn.list()
d.addCallback(_created_small)
d.addCallback(lambda kids: self.failUnlessReallyEqual(kids.keys(), [u"o"]))
d.addCallback(lambda kids: self.failUnlessReallyEqual(list(kids.keys()), [u"o"]))
# now test n.create_subdirectory(mutable=False)
d.addCallback(lambda ign: c.create_dirnode())
@ -928,7 +939,7 @@ class Dirnode(GridTestMixin, unittest.TestCase,
d.addCallback(_check_kids)
d.addCallback(lambda ign: n.list())
d.addCallback(lambda children:
self.failUnlessReallyEqual(children.keys(), [u"subdir"]))
self.failUnlessReallyEqual(list(children.keys()), [u"subdir"]))
d.addCallback(lambda ign: n.get(u"subdir"))
d.addCallback(lambda sd: sd.list())
d.addCallback(_check_kids)
@ -962,14 +973,14 @@ class Dirnode(GridTestMixin, unittest.TestCase,
# It also tests that we store child names as UTF-8 NFC, and normalize
# them again when retrieving them.
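# Illustrative sketch (added commentary, not part of the original test):
# NFC composes decomposed sequences into a single code point where a
# composition exists, e.g.
#
#     import unicodedata
#     assert unicodedata.normalize("NFC", u"e\u0301") == u"\u00e9"
#
# that is, "e" followed by a combining acute accent becomes the single
# character é.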
stripped_write_uri = "lafs://from_the_future\t"
stripped_read_uri = "lafs://readonly_from_the_future\t"
spacedout_write_uri = stripped_write_uri + " "
spacedout_read_uri = stripped_read_uri + " "
stripped_write_uri = b"lafs://from_the_future\t"
stripped_read_uri = b"lafs://readonly_from_the_future\t"
spacedout_write_uri = stripped_write_uri + b" "
spacedout_read_uri = stripped_read_uri + b" "
child = nm.create_from_cap(spacedout_write_uri, spacedout_read_uri)
self.failUnlessReallyEqual(child.get_write_uri(), spacedout_write_uri)
self.failUnlessReallyEqual(child.get_readonly_uri(), "ro." + spacedout_read_uri)
self.failUnlessReallyEqual(child.get_readonly_uri(), b"ro." + spacedout_read_uri)
child_dottedi = u"ch\u0131\u0307ld"
@ -1003,7 +1014,7 @@ class Dirnode(GridTestMixin, unittest.TestCase,
self.failUnlessIn(name, kids_out)
(expected_child, ign) = kids_out[name]
self.failUnlessReallyEqual(rw_uri, expected_child.get_write_uri())
self.failUnlessReallyEqual("ro." + ro_uri, expected_child.get_readonly_uri())
self.failUnlessReallyEqual(b"ro." + ro_uri, expected_child.get_readonly_uri())
numkids += 1
self.failUnlessReallyEqual(numkids, len(kids_out))
@ -1039,7 +1050,7 @@ class Dirnode(GridTestMixin, unittest.TestCase,
child_node, child_metadata = children[u"child"]
self.failUnlessReallyEqual(child_node.get_write_uri(), stripped_write_uri)
self.failUnlessReallyEqual(child_node.get_readonly_uri(), "ro." + stripped_read_uri)
self.failUnlessReallyEqual(child_node.get_readonly_uri(), b"ro." + stripped_read_uri)
d.addCallback(_check_kids)
d.addCallback(lambda ign: nm.create_from_cap(self.cap.to_string()))
@ -1074,7 +1085,7 @@ class Dirnode(GridTestMixin, unittest.TestCase,
d.addCallback(_created_root)
def _created_subdir(subdir):
self._subdir = subdir
d = subdir.add_file(u"file1", upload.Data("data"*100, None))
d = subdir.add_file(u"file1", upload.Data(b"data"*100, None))
d.addCallback(lambda res: subdir.set_node(u"link", self._rootnode))
d.addCallback(lambda res: c.create_dirnode())
d.addCallback(lambda dn:
@ -1250,7 +1261,7 @@ class Dirnode(GridTestMixin, unittest.TestCase,
nm = c.nodemaker
filecap = make_chk_file_uri(1234)
filenode = nm.create_from_cap(filecap)
uploadable = upload.Data("some data", convergence="some convergence string")
uploadable = upload.Data(b"some data", convergence=b"some convergence string")
d = c.create_dirnode(version=version)
def _created(rw_dn):
@ -1386,7 +1397,7 @@ class Dirnode(GridTestMixin, unittest.TestCase,
class MinimalFakeMutableFile(object):
def get_writekey(self):
return "writekey"
return b"writekey"
class Packing(testutil.ReallyEqualMixin, unittest.TestCase):
# This is a base32-encoded representation of the directory tree
@ -1405,7 +1416,7 @@ class Packing(testutil.ReallyEqualMixin, unittest.TestCase):
nodemaker = NodeMaker(None, None, None,
None, None,
{"k": 3, "n": 10}, None, None)
write_uri = "URI:SSK-RO:e3mdrzfwhoq42hy5ubcz6rp3o4:ybyibhnp3vvwuq2vaw2ckjmesgkklfs6ghxleztqidihjyofgw7q"
write_uri = b"URI:SSK-RO:e3mdrzfwhoq42hy5ubcz6rp3o4:ybyibhnp3vvwuq2vaw2ckjmesgkklfs6ghxleztqidihjyofgw7q"
filenode = nodemaker.create_from_cap(write_uri)
node = dirnode.DirectoryNode(filenode, nodemaker, None)
children = node._unpack_contents(known_tree)
@ -1417,13 +1428,13 @@ class Packing(testutil.ReallyEqualMixin, unittest.TestCase):
def _check_children(self, children):
# Are all the expected child nodes there?
self.failUnless(children.has_key(u'file1'))
self.failUnless(children.has_key(u'file2'))
self.failUnless(children.has_key(u'file3'))
self.failUnless(u'file1' in children)
self.failUnless(u'file2' in children)
self.failUnless(u'file3' in children)
# Are the metadata for child 3 right?
file3_rocap = "URI:CHK:cmtcxq7hwxvfxan34yiev6ivhy:qvcekmjtoetdcw4kmi7b3rtblvgx7544crnwaqtiewemdliqsokq:3:10:5"
file3_rwcap = "URI:CHK:cmtcxq7hwxvfxan34yiev6ivhy:qvcekmjtoetdcw4kmi7b3rtblvgx7544crnwaqtiewemdliqsokq:3:10:5"
file3_rocap = b"URI:CHK:cmtcxq7hwxvfxan34yiev6ivhy:qvcekmjtoetdcw4kmi7b3rtblvgx7544crnwaqtiewemdliqsokq:3:10:5"
file3_rwcap = b"URI:CHK:cmtcxq7hwxvfxan34yiev6ivhy:qvcekmjtoetdcw4kmi7b3rtblvgx7544crnwaqtiewemdliqsokq:3:10:5"
file3_metadata = {'ctime': 1246663897.4336269, 'tahoe': {'linkmotime': 1246663897.4336269, 'linkcrtime': 1246663897.4336269}, 'mtime': 1246663897.4336269}
self.failUnlessEqual(file3_metadata, children[u'file3'][1])
self.failUnlessReallyEqual(file3_rocap,
@ -1432,8 +1443,8 @@ class Packing(testutil.ReallyEqualMixin, unittest.TestCase):
children[u'file3'][0].get_uri())
# Are the metadata for child 2 right?
file2_rocap = "URI:CHK:apegrpehshwugkbh3jlt5ei6hq:5oougnemcl5xgx4ijgiumtdojlipibctjkbwvyygdymdphib2fvq:3:10:4"
file2_rwcap = "URI:CHK:apegrpehshwugkbh3jlt5ei6hq:5oougnemcl5xgx4ijgiumtdojlipibctjkbwvyygdymdphib2fvq:3:10:4"
file2_rocap = b"URI:CHK:apegrpehshwugkbh3jlt5ei6hq:5oougnemcl5xgx4ijgiumtdojlipibctjkbwvyygdymdphib2fvq:3:10:4"
file2_rwcap = b"URI:CHK:apegrpehshwugkbh3jlt5ei6hq:5oougnemcl5xgx4ijgiumtdojlipibctjkbwvyygdymdphib2fvq:3:10:4"
file2_metadata = {'ctime': 1246663897.430218, 'tahoe': {'linkmotime': 1246663897.430218, 'linkcrtime': 1246663897.430218}, 'mtime': 1246663897.430218}
self.failUnlessEqual(file2_metadata, children[u'file2'][1])
self.failUnlessReallyEqual(file2_rocap,
@ -1442,8 +1453,8 @@ class Packing(testutil.ReallyEqualMixin, unittest.TestCase):
children[u'file2'][0].get_uri())
# Are the metadata for child 1 right?
file1_rocap = "URI:CHK:olxtimympo7f27jvhtgqlnbtn4:emzdnhk2um4seixozlkw3qx2nfijvdkx3ky7i7izl47yedl6e64a:3:10:10"
file1_rwcap = "URI:CHK:olxtimympo7f27jvhtgqlnbtn4:emzdnhk2um4seixozlkw3qx2nfijvdkx3ky7i7izl47yedl6e64a:3:10:10"
file1_rocap = b"URI:CHK:olxtimympo7f27jvhtgqlnbtn4:emzdnhk2um4seixozlkw3qx2nfijvdkx3ky7i7izl47yedl6e64a:3:10:10"
file1_rwcap = b"URI:CHK:olxtimympo7f27jvhtgqlnbtn4:emzdnhk2um4seixozlkw3qx2nfijvdkx3ky7i7izl47yedl6e64a:3:10:10"
file1_metadata = {'ctime': 1246663897.4275661, 'tahoe': {'linkmotime': 1246663897.4275661, 'linkcrtime': 1246663897.4275661}, 'mtime': 1246663897.4275661}
self.failUnlessEqual(file1_metadata, children[u'file1'][1])
self.failUnlessReallyEqual(file1_rocap,
@ -1452,18 +1463,42 @@ class Packing(testutil.ReallyEqualMixin, unittest.TestCase):
children[u'file1'][0].get_uri())
def _make_kids(self, nm, which):
caps = {"imm": "URI:CHK:n7r3m6wmomelk4sep3kw5cvduq:os7ijw5c3maek7pg65e5254k2fzjflavtpejjyhshpsxuqzhcwwq:3:20:14861",
"lit": "URI:LIT:n5xgk", # LIT for "one"
"write": "URI:SSK:vfvcbdfbszyrsaxchgevhmmlii:euw4iw7bbnkrrwpzuburbhppuxhc3gwxv26f6imekhz7zyw2ojnq",
"read": "URI:SSK-RO:e3mdrzfwhoq42hy5ubcz6rp3o4:ybyibhnp3vvwuq2vaw2ckjmesgkklfs6ghxleztqidihjyofgw7q",
"dirwrite": "URI:DIR2:n6x24zd3seu725yluj75q5boaa:mm6yoqjhl6ueh7iereldqxue4nene4wl7rqfjfybqrehdqmqskvq",
"dirread": "URI:DIR2-RO:b7sr5qsifnicca7cbk3rhrhbvq:mm6yoqjhl6ueh7iereldqxue4nene4wl7rqfjfybqrehdqmqskvq",
caps = {"imm": b"URI:CHK:n7r3m6wmomelk4sep3kw5cvduq:os7ijw5c3maek7pg65e5254k2fzjflavtpejjyhshpsxuqzhcwwq:3:20:14861",
"lit": b"URI:LIT:n5xgk", # LIT for "one"
"write": b"URI:SSK:vfvcbdfbszyrsaxchgevhmmlii:euw4iw7bbnkrrwpzuburbhppuxhc3gwxv26f6imekhz7zyw2ojnq",
"read": b"URI:SSK-RO:e3mdrzfwhoq42hy5ubcz6rp3o4:ybyibhnp3vvwuq2vaw2ckjmesgkklfs6ghxleztqidihjyofgw7q",
"dirwrite": b"URI:DIR2:n6x24zd3seu725yluj75q5boaa:mm6yoqjhl6ueh7iereldqxue4nene4wl7rqfjfybqrehdqmqskvq",
"dirread": b"URI:DIR2-RO:b7sr5qsifnicca7cbk3rhrhbvq:mm6yoqjhl6ueh7iereldqxue4nene4wl7rqfjfybqrehdqmqskvq",
}
kids = {}
for name in which:
kids[unicode(name)] = (nm.create_from_cap(caps[name]), {})
kids[str(name)] = (nm.create_from_cap(caps[name]), {})
return kids
def test_pack_unpack_unknown(self):
"""
Minimal testing for roundtripping unknown URIs.
"""
nm = NodeMaker(None, None, None, None, None, {"k": 3, "n": 10}, None, None)
fn = MinimalFakeMutableFile()
# UnknownNode has massively complex rules about when it's an error.
# Just force it not to be an error.
unknown_rw = UnknownNode(b"whatevs://write", None)
unknown_rw.error = None
unknown_ro = UnknownNode(None, b"whatevs://readonly")
unknown_ro.error = None
kids = {
"unknown_rw": (unknown_rw, {}),
"unknown_ro": (unknown_ro, {})
}
packed = dirnode.pack_children(kids, fn.get_writekey(), deep_immutable=False)
write_uri = b"URI:SSK-RO:e3mdrzfwhoq42hy5ubcz6rp3o4:ybyibhnp3vvwuq2vaw2ckjmesgkklfs6ghxleztqidihjyofgw7q"
filenode = nm.create_from_cap(write_uri)
dn = dirnode.DirectoryNode(filenode, nm, None)
unkids = dn._unpack_contents(packed)
self.assertEqual(kids, unkids)
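# (Added note) This assertEqual compares UnknownNode instances directly; it
# depends on the __eq__/__ne__ methods added to UnknownNode in this same
# change (see the UnknownNode hunk later in this diff).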
@given(text(min_size=1, max_size=20))
def test_pack_unpack_unicode_hypothesis(self, name):
"""
@ -1485,7 +1520,7 @@ class Packing(testutil.ReallyEqualMixin, unittest.TestCase):
name: (LiteralFileNode(uri.from_string(one_uri)), {}),
}
packed = dirnode.pack_children(kids, fn.get_writekey(), deep_immutable=False)
write_uri = "URI:SSK-RO:e3mdrzfwhoq42hy5ubcz6rp3o4:ybyibhnp3vvwuq2vaw2ckjmesgkklfs6ghxleztqidihjyofgw7q"
write_uri = b"URI:SSK-RO:e3mdrzfwhoq42hy5ubcz6rp3o4:ybyibhnp3vvwuq2vaw2ckjmesgkklfs6ghxleztqidihjyofgw7q"
filenode = nm.create_from_cap(write_uri)
dn = dirnode.DirectoryNode(filenode, nm, None)
unkids = dn._unpack_contents(packed)
@ -1498,11 +1533,11 @@ class Packing(testutil.ReallyEqualMixin, unittest.TestCase):
kids = self._make_kids(nm, ["imm", "lit", "write", "read",
"dirwrite", "dirread"])
packed = dirnode.pack_children(kids, fn.get_writekey(), deep_immutable=False)
self.failUnlessIn("lit", packed)
self.failUnlessIn(b"lit", packed)
kids = self._make_kids(nm, ["imm", "lit"])
packed = dirnode.pack_children(kids, fn.get_writekey(), deep_immutable=True)
self.failUnlessIn("lit", packed)
self.failUnlessIn(b"lit", packed)
kids = self._make_kids(nm, ["imm", "lit", "write"])
self.failUnlessRaises(dirnode.MustBeDeepImmutableError,
@ -1528,22 +1563,22 @@ class Packing(testutil.ReallyEqualMixin, unittest.TestCase):
@implementer(IMutableFileNode)
class FakeMutableFile(object):
counter = 0
def __init__(self, initial_contents=""):
def __init__(self, initial_contents=b""):
data = self._get_initial_contents(initial_contents)
self.data = data.read(data.get_size())
self.data = "".join(self.data)
self.data = b"".join(self.data)
counter = FakeMutableFile.counter
FakeMutableFile.counter += 1
writekey = hashutil.ssk_writekey_hash(str(counter))
fingerprint = hashutil.ssk_pubkey_fingerprint_hash(str(counter))
writekey = hashutil.ssk_writekey_hash(b"%d" % counter)
fingerprint = hashutil.ssk_pubkey_fingerprint_hash(b"%d" % counter)
self.uri = uri.WriteableSSKFileURI(writekey, fingerprint)
def _get_initial_contents(self, contents):
if isinstance(contents, str):
if isinstance(contents, bytes):
return contents
if contents is None:
return ""
return b""
assert callable(contents), "%s should be callable, not %s" % \
(contents, type(contents))
return contents(self)
@ -1561,7 +1596,7 @@ class FakeMutableFile(object):
return defer.succeed(self.data)
def get_writekey(self):
return "writekey"
return b"writekey"
def is_readonly(self):
return False
@ -1584,7 +1619,7 @@ class FakeMutableFile(object):
return defer.succeed(None)
class FakeNodeMaker(NodeMaker):
def create_mutable_file(self, contents="", keysize=None, version=None):
def create_mutable_file(self, contents=b"", keysize=None, version=None):
return defer.succeed(FakeMutableFile(contents))
class FakeClient2(_Client):
@ -1631,9 +1666,9 @@ class Dirnode2(testutil.ReallyEqualMixin, testutil.ShouldFailMixin, unittest.Tes
# and to add an URI prefixed with "ro." or "imm." when it is given in a
# write slot (or URL parameter).
d.addCallback(lambda ign: self._node.set_uri(u"add-ro",
"ro." + future_read_uri, None))
b"ro." + future_read_uri, None))
d.addCallback(lambda ign: self._node.set_uri(u"add-imm",
"imm." + future_imm_uri, None))
b"imm." + future_imm_uri, None))
d.addCallback(lambda ign: self._node.list())
def _check(children):
@ -1642,25 +1677,25 @@ class Dirnode2(testutil.ReallyEqualMixin, testutil.ShouldFailMixin, unittest.Tes
self.failUnless(isinstance(fn, UnknownNode), fn)
self.failUnlessReallyEqual(fn.get_uri(), future_write_uri)
self.failUnlessReallyEqual(fn.get_write_uri(), future_write_uri)
self.failUnlessReallyEqual(fn.get_readonly_uri(), "ro." + future_read_uri)
self.failUnlessReallyEqual(fn.get_readonly_uri(), b"ro." + future_read_uri)
(fn2, metadata2) = children[u"add-pair"]
self.failUnless(isinstance(fn2, UnknownNode), fn2)
self.failUnlessReallyEqual(fn2.get_uri(), future_write_uri)
self.failUnlessReallyEqual(fn2.get_write_uri(), future_write_uri)
self.failUnlessReallyEqual(fn2.get_readonly_uri(), "ro." + future_read_uri)
self.failUnlessReallyEqual(fn2.get_readonly_uri(), b"ro." + future_read_uri)
(fn3, metadata3) = children[u"add-ro"]
self.failUnless(isinstance(fn3, UnknownNode), fn3)
self.failUnlessReallyEqual(fn3.get_uri(), "ro." + future_read_uri)
self.failUnlessReallyEqual(fn3.get_uri(), b"ro." + future_read_uri)
self.failUnlessReallyEqual(fn3.get_write_uri(), None)
self.failUnlessReallyEqual(fn3.get_readonly_uri(), "ro." + future_read_uri)
self.failUnlessReallyEqual(fn3.get_readonly_uri(), b"ro." + future_read_uri)
(fn4, metadata4) = children[u"add-imm"]
self.failUnless(isinstance(fn4, UnknownNode), fn4)
self.failUnlessReallyEqual(fn4.get_uri(), "imm." + future_imm_uri)
self.failUnlessReallyEqual(fn4.get_uri(), b"imm." + future_imm_uri)
self.failUnlessReallyEqual(fn4.get_write_uri(), None)
self.failUnlessReallyEqual(fn4.get_readonly_uri(), "imm." + future_imm_uri)
self.failUnlessReallyEqual(fn4.get_readonly_uri(), b"imm." + future_imm_uri)
# We should also be allowed to copy the "future" UnknownNode, because
# it contains all the information that was in the original directory
@ -1675,17 +1710,17 @@ class Dirnode2(testutil.ReallyEqualMixin, testutil.ShouldFailMixin, unittest.Tes
self.failUnless(isinstance(fn, UnknownNode), fn)
self.failUnlessReallyEqual(fn.get_uri(), future_write_uri)
self.failUnlessReallyEqual(fn.get_write_uri(), future_write_uri)
self.failUnlessReallyEqual(fn.get_readonly_uri(), "ro." + future_read_uri)
self.failUnlessReallyEqual(fn.get_readonly_uri(), b"ro." + future_read_uri)
d.addCallback(_check2)
return d
def test_unknown_strip_prefix_for_ro(self):
self.failUnlessReallyEqual(strip_prefix_for_ro("foo", False), "foo")
self.failUnlessReallyEqual(strip_prefix_for_ro("ro.foo", False), "foo")
self.failUnlessReallyEqual(strip_prefix_for_ro("imm.foo", False), "imm.foo")
self.failUnlessReallyEqual(strip_prefix_for_ro("foo", True), "foo")
self.failUnlessReallyEqual(strip_prefix_for_ro("ro.foo", True), "foo")
self.failUnlessReallyEqual(strip_prefix_for_ro("imm.foo", True), "foo")
self.failUnlessReallyEqual(strip_prefix_for_ro(b"foo", False), b"foo")
self.failUnlessReallyEqual(strip_prefix_for_ro(b"ro.foo", False), b"foo")
self.failUnlessReallyEqual(strip_prefix_for_ro(b"imm.foo", False), b"imm.foo")
self.failUnlessReallyEqual(strip_prefix_for_ro(b"foo", True), b"foo")
self.failUnlessReallyEqual(strip_prefix_for_ro(b"ro.foo", True), b"foo")
self.failUnlessReallyEqual(strip_prefix_for_ro(b"imm.foo", True), b"foo")
def test_unknownnode(self):
lit_uri = one_uri
@ -1697,58 +1732,58 @@ class Dirnode2(testutil.ReallyEqualMixin, testutil.ShouldFailMixin, unittest.Tes
]
unknown_rw = [# These are errors because we're only given a rw_uri, and we can't
# diminish it.
( 2, UnknownNode("foo", None)),
( 3, UnknownNode("foo", None, deep_immutable=True)),
( 4, UnknownNode("ro.foo", None, deep_immutable=True)),
( 5, UnknownNode("ro." + mut_read_uri, None, deep_immutable=True)),
( 5.1, UnknownNode("ro." + mdmf_read_uri, None, deep_immutable=True)),
( 6, UnknownNode("URI:SSK-RO:foo", None, deep_immutable=True)),
( 7, UnknownNode("URI:SSK:foo", None)),
( 2, UnknownNode(b"foo", None)),
( 3, UnknownNode(b"foo", None, deep_immutable=True)),
( 4, UnknownNode(b"ro.foo", None, deep_immutable=True)),
( 5, UnknownNode(b"ro." + mut_read_uri, None, deep_immutable=True)),
( 5.1, UnknownNode(b"ro." + mdmf_read_uri, None, deep_immutable=True)),
( 6, UnknownNode(b"URI:SSK-RO:foo", None, deep_immutable=True)),
( 7, UnknownNode(b"URI:SSK:foo", None)),
]
must_be_ro = [# These are errors because a readonly constraint is not met.
( 8, UnknownNode("ro." + mut_write_uri, None)),
( 8.1, UnknownNode("ro." + mdmf_write_uri, None)),
( 9, UnknownNode(None, "ro." + mut_write_uri)),
( 9.1, UnknownNode(None, "ro." + mdmf_write_uri)),
( 8, UnknownNode(b"ro." + mut_write_uri, None)),
( 8.1, UnknownNode(b"ro." + mdmf_write_uri, None)),
( 9, UnknownNode(None, b"ro." + mut_write_uri)),
( 9.1, UnknownNode(None, b"ro." + mdmf_write_uri)),
]
must_be_imm = [# These are errors because an immutable constraint is not met.
(10, UnknownNode(None, "ro.URI:SSK-RO:foo", deep_immutable=True)),
(11, UnknownNode(None, "imm.URI:SSK:foo")),
(12, UnknownNode(None, "imm.URI:SSK-RO:foo")),
(13, UnknownNode("bar", "ro.foo", deep_immutable=True)),
(14, UnknownNode("bar", "imm.foo", deep_immutable=True)),
(15, UnknownNode("bar", "imm." + lit_uri, deep_immutable=True)),
(16, UnknownNode("imm." + mut_write_uri, None)),
(16.1, UnknownNode("imm." + mdmf_write_uri, None)),
(17, UnknownNode("imm." + mut_read_uri, None)),
(17.1, UnknownNode("imm." + mdmf_read_uri, None)),
(18, UnknownNode("bar", "imm.foo")),
(10, UnknownNode(None, b"ro.URI:SSK-RO:foo", deep_immutable=True)),
(11, UnknownNode(None, b"imm.URI:SSK:foo")),
(12, UnknownNode(None, b"imm.URI:SSK-RO:foo")),
(13, UnknownNode(b"bar", b"ro.foo", deep_immutable=True)),
(14, UnknownNode(b"bar", b"imm.foo", deep_immutable=True)),
(15, UnknownNode(b"bar", b"imm." + lit_uri, deep_immutable=True)),
(16, UnknownNode(b"imm." + mut_write_uri, None)),
(16.1, UnknownNode(b"imm." + mdmf_write_uri, None)),
(17, UnknownNode(b"imm." + mut_read_uri, None)),
(17.1, UnknownNode(b"imm." + mdmf_read_uri, None)),
(18, UnknownNode(b"bar", b"imm.foo")),
]
bad_uri = [# These are errors because the URI is bad once we've stripped the prefix.
(19, UnknownNode("ro.URI:SSK-RO:foo", None)),
(20, UnknownNode("imm.URI:CHK:foo", None, deep_immutable=True)),
(21, UnknownNode(None, "URI:CHK:foo")),
(22, UnknownNode(None, "URI:CHK:foo", deep_immutable=True)),
(19, UnknownNode(b"ro.URI:SSK-RO:foo", None)),
(20, UnknownNode(b"imm.URI:CHK:foo", None, deep_immutable=True)),
(21, UnknownNode(None, b"URI:CHK:foo")),
(22, UnknownNode(None, b"URI:CHK:foo", deep_immutable=True)),
]
ro_prefixed = [# These are valid, and the readcap should end up with a ro. prefix.
(23, UnknownNode(None, "foo")),
(24, UnknownNode(None, "ro.foo")),
(25, UnknownNode(None, "ro." + lit_uri)),
(26, UnknownNode("bar", "foo")),
(27, UnknownNode("bar", "ro.foo")),
(28, UnknownNode("bar", "ro." + lit_uri)),
(29, UnknownNode("ro.foo", None)),
(30, UnknownNode("ro." + lit_uri, None)),
(23, UnknownNode(None, b"foo")),
(24, UnknownNode(None, b"ro.foo")),
(25, UnknownNode(None, b"ro." + lit_uri)),
(26, UnknownNode(b"bar", b"foo")),
(27, UnknownNode(b"bar", b"ro.foo")),
(28, UnknownNode(b"bar", b"ro." + lit_uri)),
(29, UnknownNode(b"ro.foo", None)),
(30, UnknownNode(b"ro." + lit_uri, None)),
]
imm_prefixed = [# These are valid, and the readcap should end up with an imm. prefix.
(31, UnknownNode(None, "foo", deep_immutable=True)),
(32, UnknownNode(None, "ro.foo", deep_immutable=True)),
(33, UnknownNode(None, "imm.foo")),
(34, UnknownNode(None, "imm.foo", deep_immutable=True)),
(35, UnknownNode("imm." + lit_uri, None)),
(36, UnknownNode("imm." + lit_uri, None, deep_immutable=True)),
(37, UnknownNode(None, "imm." + lit_uri)),
(38, UnknownNode(None, "imm." + lit_uri, deep_immutable=True)),
(31, UnknownNode(None, b"foo", deep_immutable=True)),
(32, UnknownNode(None, b"ro.foo", deep_immutable=True)),
(33, UnknownNode(None, b"imm.foo")),
(34, UnknownNode(None, b"imm.foo", deep_immutable=True)),
(35, UnknownNode(b"imm." + lit_uri, None)),
(36, UnknownNode(b"imm." + lit_uri, None, deep_immutable=True)),
(37, UnknownNode(None, b"imm." + lit_uri)),
(38, UnknownNode(None, b"imm." + lit_uri, deep_immutable=True)),
]
error = unknown_rw + must_be_ro + must_be_imm + bad_uri
ok = ro_prefixed + imm_prefixed
@ -1780,10 +1815,10 @@ class Dirnode2(testutil.ReallyEqualMixin, testutil.ShouldFailMixin, unittest.Tes
self.failIf(n.get_readonly_uri() is None, i)
for (i, n) in ro_prefixed:
self.failUnless(n.get_readonly_uri().startswith("ro."), i)
self.failUnless(n.get_readonly_uri().startswith(b"ro."), i)
for (i, n) in imm_prefixed:
self.failUnless(n.get_readonly_uri().startswith("imm."), i)
self.failUnless(n.get_readonly_uri().startswith(b"imm."), i)
@ -1867,7 +1902,7 @@ class Deleter(GridTestMixin, testutil.ReallyEqualMixin, unittest.TestCase):
self.set_up_grid(oneshare=True)
c0 = self.g.clients[0]
d = c0.create_dirnode()
small = upload.Data("Small enough for a LIT", None)
small = upload.Data(b"Small enough for a LIT", None)
def _created_dir(dn):
self.root = dn
self.root_uri = dn.get_uri()
@ -1909,10 +1944,10 @@ class Adder(GridTestMixin, unittest.TestCase, testutil.ShouldFailMixin):
# root/file1
# root/file2
# root/dir1
d = root_node.add_file(u'file1', upload.Data("Important Things",
d = root_node.add_file(u'file1', upload.Data(b"Important Things",
None))
d.addCallback(lambda res:
root_node.add_file(u'file2', upload.Data("Sekrit Codes", None)))
root_node.add_file(u'file2', upload.Data(b"Sekrit Codes", None)))
d.addCallback(lambda res:
root_node.create_subdirectory(u"dir1"))
d.addCallback(lambda res: root_node)


@ -34,6 +34,7 @@ from ._twisted_9607 import (
)
from ..util.eliotutil import (
inline_callbacks,
log_call_deferred,
)
def get_root_from_file(src):
@ -54,6 +55,7 @@ rootdir = get_root_from_file(srcfile)
class RunBinTahoeMixin(object):
@log_call_deferred(action_type="run-bin-tahoe")
def run_bintahoe(self, args, stdin=None, python_options=[], env=None):
command = sys.executable
argv = python_options + ["-m", "allmydata.scripts.runner"] + args
@ -142,8 +144,8 @@ class BinTahoe(common_util.SignalMixin, unittest.TestCase, RunBinTahoeMixin):
class CreateNode(unittest.TestCase):
# exercise "tahoe create-node", create-introducer, and
# create-key-generator by calling the corresponding code as a subroutine.
# exercise "tahoe create-node" and "tahoe create-introducer" by calling
# the corresponding code as a subroutine.
def workdir(self, name):
basedir = os.path.join("test_runner", "CreateNode", name)
@ -251,16 +253,11 @@ class CreateNode(unittest.TestCase):
class RunNode(common_util.SignalMixin, unittest.TestCase, pollmixin.PollMixin,
RunBinTahoeMixin):
"""
exercise "tahoe run" for both introducer, client node, and key-generator,
by spawning "tahoe run" (or "tahoe start") as a subprocess. This doesn't
get us line-level coverage, but it does a better job of confirming that
the user can actually run "./bin/tahoe run" and expect it to work. This
verifies that bin/tahoe sets up PYTHONPATH and the like correctly.
This doesn't work on cygwin (it hangs forever), so we skip this test
when we're on cygwin. It is likely that "tahoe start" itself doesn't
work on cygwin: twisted seems unable to provide a version of
spawnProcess which really works there.
exercise "tahoe run" for both introducer and client node, by spawning
"tahoe run" as a subprocess. This doesn't get us line-level coverage, but
it does a better job of confirming that the user can actually run
"./bin/tahoe run" and expect it to work. This verifies that bin/tahoe sets
up PYTHONPATH and the like correctly.
"""
def workdir(self, name):
@ -340,7 +337,7 @@ class RunNode(common_util.SignalMixin, unittest.TestCase, pollmixin.PollMixin,
@inline_callbacks
def test_client(self):
"""
Test many things.
Test too many things.
0) Verify that "tahoe create-node" takes a --webport option and writes
the value to the configuration file.
@ -348,9 +345,9 @@ class RunNode(common_util.SignalMixin, unittest.TestCase, pollmixin.PollMixin,
1) Verify that "tahoe run" writes a pid file and a node url file (on POSIX).
2) Verify that the storage furl file has a stable value across a
"tahoe run" / "tahoe stop" / "tahoe run" sequence.
"tahoe run" / stop / "tahoe run" sequence.
3) Verify that the pid file is removed after "tahoe stop" succeeds (on POSIX).
3) Verify that the pid file is removed after SIGTERM (on POSIX).
"""
basedir = self.workdir("test_client")
c1 = os.path.join(basedir, "c1")
@ -454,18 +451,6 @@ class RunNode(common_util.SignalMixin, unittest.TestCase, pollmixin.PollMixin,
"does not look like a directory at all"
)
def test_stop_bad_directory(self):
"""
If ``tahoe run`` is pointed at a directory where no node is running, it
reports an error and exits.
"""
return self._bad_directory_test(
u"test_stop_bad_directory",
"tahoe stop",
lambda tahoe, p: tahoe.stop(p),
"does not look like a running node directory",
)
@inline_callbacks
def _bad_directory_test(self, workdir, description, operation, expected_message):
"""


@ -31,8 +31,8 @@ class UnknownNode(object):
def __init__(self, given_rw_uri, given_ro_uri, deep_immutable=False,
name=u"<unknown name>"):
assert given_rw_uri is None or isinstance(given_rw_uri, str)
assert given_ro_uri is None or isinstance(given_ro_uri, str)
assert given_rw_uri is None or isinstance(given_rw_uri, bytes)
assert given_ro_uri is None or isinstance(given_ro_uri, bytes)
given_rw_uri = given_rw_uri or None
given_ro_uri = given_ro_uri or None
@ -182,3 +182,11 @@ class UnknownNode(object):
def check_and_repair(self, monitor, verify, add_lease):
return defer.succeed(None)
def __eq__(self, other):
if not isinstance(other, UnknownNode):
return False
return other.ro_uri == self.ro_uri and other.rw_uri == self.rw_uri
def __ne__(self, other):
return not (self == other)
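# (Added note) __ne__ is spelled out because under Python 2 it is not
# derived automatically from __eq__; Python 3 would infer it.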


@ -34,6 +34,7 @@ PORTED_MODULES = [
"allmydata.crypto.error",
"allmydata.crypto.rsa",
"allmydata.crypto.util",
"allmydata.dirnode",
"allmydata.hashtree",
"allmydata.immutable.checker",
"allmydata.immutable.downloader",
@ -67,6 +68,7 @@ PORTED_MODULES = [
"allmydata.mutable.retrieve",
"allmydata.mutable.servermap",
"allmydata.node",
"allmydata.nodemaker",
"allmydata.storage_client",
"allmydata.storage.common",
"allmydata.storage.crawler",
@ -136,6 +138,7 @@ PORTED_TEST_MODULES = [
"allmydata.test.test_crypto",
"allmydata.test.test_deferredutil",
"allmydata.test.test_dictutil",
"allmydata.test.test_dirnode",
"allmydata.test.test_download",
"allmydata.test.test_encode",
"allmydata.test.test_encodingutil",