Mirror of https://github.com/tahoe-lafs/tahoe-lafs.git, synced 2025-04-07 19:04:21 +00:00
Merge branch 'master' into 2916.grid-manager-proposal.5
This commit is contained in commit 613a6f80aa.
.codecov.yml (14 lines changed)
@@ -32,3 +32,17 @@ coverage:
   patch:
     default:
       threshold: 1%
+
+codecov:
+  # This is a public repository so supposedly we don't "need" to use an upload
+  # token. However, using one makes sure that CI jobs running against forked
+  # repositories have coverage uploaded to the right place in codecov so
+  # their reports aren't incomplete.
+  token: "abf679b6-e2e6-4b33-b7b5-6cfbd41ee691"
+
+  notify:
+    # The reference documentation suggests that this is the default setting:
+    # https://docs.codecov.io/docs/codecovyml-reference#codecovnotifywait_for_ci
+    # However observation suggests otherwise.
+    wait_for_ci: true
@@ -273,7 +273,7 @@ Then, do the following:
   [connections]
    tcp = tor

-* Launch the Tahoe server with ``tahoe start $NODEDIR``
+* Launch the Tahoe server with ``tahoe run $NODEDIR``

 The ``tub.port`` section will cause the Tahoe server to listen on PORT, but
 bind the listening socket to the loopback interface, which is not reachable
@@ -435,4 +435,3 @@ It is therefore important that your I2P router is sharing bandwidth with other
 routers, so that you can give back as you use I2P. This will never impair the
 performance of your Tahoe-LAFS node, because your I2P router will always
 prioritize your own traffic.
-
@@ -365,7 +365,7 @@ set the ``tub.location`` option described below.
 also generally reduced when operating in private mode.

 When False, any of the following configuration problems will cause
-``tahoe start`` to throw a PrivacyError instead of starting the node:
+``tahoe run`` to throw a PrivacyError instead of starting the node:

 * ``[node] tub.location`` contains any ``tcp:`` hints

@@ -85,7 +85,7 @@ Node Management

 "``tahoe create-node [NODEDIR]``" is the basic make-a-new-node
 command. It creates a new directory and populates it with files that
-will allow the "``tahoe start``" and related commands to use it later
+will allow the "``tahoe run``" and related commands to use it later
 on. ``tahoe create-node`` creates nodes that have client functionality
 (upload/download files), web API services (controlled by the
 '[node]web.port' configuration), and storage services (unless
@@ -94,8 +94,7 @@ on. ``tahoe create-node`` creates nodes that have client functionality
 NODEDIR defaults to ``~/.tahoe/`` , and newly-created nodes default to
 publishing a web server on port 3456 (limited to the loopback interface, at
 127.0.0.1, to restrict access to other programs on the same host). All of the
-other "``tahoe``" subcommands use corresponding defaults (with the exception
-that "``tahoe run``" defaults to running a node in the current directory).
+other "``tahoe``" subcommands use corresponding defaults.

 "``tahoe create-client [NODEDIR]``" creates a node with no storage service.
 That is, it behaves like "``tahoe create-node --no-storage [NODEDIR]``".
@@ -117,25 +116,6 @@ the same way on all platforms and logs to stdout. If you want to run
 the process as a daemon, it is recommended that you use your favourite
 daemonization tool.

-The now-deprecated "``tahoe start [NODEDIR]``" command will launch a
-previously-created node. It will launch the node into the background
-using ``tahoe daemonize`` (an internal-only command, not for user
-use). On some platforms (including Windows) this command is unable to
-run a daemon in the background; in that case it behaves in the same
-way as "``tahoe run``". ``tahoe start`` also monitors the logs for up
-to 5 seconds looking for either a successful startup message or for
-early failure messages and produces an appropriate exit code. You are
-encouraged to use ``tahoe run`` along with your favourite
-daemonization tool instead of this. ``tahoe start`` is maintained for
-backwards compatibility of users already using it; new scripts should
-depend on ``tahoe run``.
-
-"``tahoe stop [NODEDIR]``" will shut down a running node. "``tahoe
-restart [NODEDIR]``" will stop and then restart a running
-node. Similar to above, you should use ``tahoe run`` instead alongside
-your favourite daemonization tool.
-

 File Store Manipulation
 =======================

@@ -2145,7 +2145,7 @@ you could do the following::
  tahoe debug dump-cap URI:CHK:n7r3m6wmomelk4sep3kw5cvduq:os7ijw5c3maek7pg65e5254k2fzjflavtpejjyhshpsxuqzhcwwq:3:20:14861
  -> storage index: whpepioyrnff7orecjolvbudeu
  echo "whpepioyrnff7orecjolvbudeu my puppy told me to" >>$NODEDIR/access.blacklist
- tahoe restart $NODEDIR
+ # ... restart the node to re-read configuration ...
  tahoe get URI:CHK:n7r3m6wmomelk4sep3kw5cvduq:os7ijw5c3maek7pg65e5254k2fzjflavtpejjyhshpsxuqzhcwwq:3:20:14861
  -> error, 403 Access Prohibited: my puppy told me to

@@ -128,10 +128,9 @@ provided in ``misc/incident-gatherer/support_classifiers.py`` . There is
 roughly one category for each ``log.WEIRD``-or-higher level event in the
 Tahoe source code.

-The incident gatherer is created with the "``flogtool
-create-incident-gatherer WORKDIR``" command, and started with "``tahoe
-start``". The generated "``gatherer.tac``" file should be modified to add
-classifier functions.
+The incident gatherer is created with the "``flogtool create-incident-gatherer
+WORKDIR``" command, and started with "``tahoe run``". The generated
+"``gatherer.tac``" file should be modified to add classifier functions.

 The incident gatherer writes incident names (which are simply the relative
 pathname of the ``incident-\*.flog.bz2`` file) into ``classified/CATEGORY``.
@@ -175,7 +174,7 @@ things that happened on multiple machines (such as comparing a client node
 making a request with the storage servers that respond to that request).

 Create the Log Gatherer with the "``flogtool create-gatherer WORKDIR``"
-command, and start it with "``tahoe start``". Then copy the contents of the
+command, and start it with "``twistd -ny gatherer.tac``". Then copy the contents of the
 ``log_gatherer.furl`` file it creates into the ``BASEDIR/tahoe.cfg`` file
 (under the key ``log_gatherer.furl`` of the section ``[node]``) of all nodes
 that should be sending it log events. (See :doc:`configuration`)
@@ -81,9 +81,7 @@ does not offer its disk space to other nodes. To configure other behavior,
 use “``tahoe create-node``” or see :doc:`configuration`.

 The “``tahoe run``” command above will run the node in the foreground.
-On Unix, you can run it in the background instead by using the
-“``tahoe start``” command. To stop a node started in this way, use
-“``tahoe stop``”. ``tahoe --help`` gives a summary of all commands.
+``tahoe --help`` gives a summary of all commands.


 Running a Server or Introducer
@@ -99,12 +97,10 @@ and ``--location`` arguments.
 To construct an introducer, create a new base directory for it (the name
 of the directory is up to you), ``cd`` into it, and run “``tahoe
 create-introducer --hostname=example.net .``” (but using the hostname of
-your VPS). Now run the introducer using “``tahoe start .``”. After it
+your VPS). Now run the introducer using “``tahoe run .``”. After it
 starts, it will write a file named ``introducer.furl`` into the
 ``private/`` subdirectory of that base directory. This file contains the
 URL the other nodes must use in order to connect to this introducer.
-(Note that “``tahoe run .``” doesn't work for introducers, this is a
-known issue: `#937`_.)

 You can distribute your Introducer fURL securely to new clients by using
 the ``tahoe invite`` command. This will prepare some JSON to send to the
@@ -201,9 +201,8 @@ log_gatherer.furl = {log_furl}
     with open(join(intro_dir, 'tahoe.cfg'), 'w') as f:
         f.write(config)

-    # on windows, "tahoe start" means: run forever in the foreground,
-    # but on linux it means daemonize. "tahoe run" is consistent
-    # between platforms.
+    # "tahoe run" is consistent across Linux/macOS/Windows, unlike the old
+    # "start" command.
     protocol = _MagicTextProtocol('introducer running')
     transport = _tahoe_runner_optional_coverage(
         protocol,
@@ -278,9 +277,8 @@ log_gatherer.furl = {log_furl}
     with open(join(intro_dir, 'tahoe.cfg'), 'w') as f:
         f.write(config)

-    # on windows, "tahoe start" means: run forever in the foreground,
-    # but on linux it means daemonize. "tahoe run" is consistent
-    # between platforms.
+    # "tahoe run" is consistent across Linux/macOS/Windows, unlike the old
+    # "start" command.
     protocol = _MagicTextProtocol('introducer running')
     transport = _tahoe_runner_optional_coverage(
         protocol,
@@ -189,10 +189,8 @@ def _run_node(reactor, node_dir, request, magic_text):
         magic_text = "client running"
     protocol = _MagicTextProtocol(magic_text)

-    # on windows, "tahoe start" means: run forever in the foreground,
-    # but on linux it means daemonize. "tahoe run" is consistent
-    # between platforms.
-
+    # "tahoe run" is consistent across Linux/macOS/Windows, unlike the old
+    # "start" command.
     transport = _tahoe_runner_optional_coverage(
         protocol,
         reactor,
newsfragments/3384.minor (new, empty)
newsfragments/3523.minor (new, empty)
newsfragments/3524.minor (new, empty)
newsfragments/3529.minor (new, empty)
newsfragments/3532.minor (new, empty)
newsfragments/3533.minor (new, empty)
newsfragments/3534.minor (new, empty)
newsfragments/3550.removed (new, 1 line)
@@ -0,0 +1 @@
+The deprecated ``tahoe`` start, restart, stop, and daemonize sub-commands have been removed.
newsfragments/3552.minor (new, empty)
newsfragments/3553.minor (new, empty)
newsfragments/3557.minor (new, empty)
newsfragments/3558.minor (new, empty)
newsfragments/3560.minor (new, empty)
newsfragments/3564.minor (new, empty)
newsfragments/3565.minor (new, empty)
newsfragments/3566.minor (new, empty)
newsfragments/3567.minor (new, empty)
newsfragments/3568.minor (new, empty)
newsfragments/3572.minor (new, empty)
newsfragments/3575.minor (new, empty)
newsfragments/3578.minor (new, empty)
@@ -23,21 +23,12 @@ python.pkgs.buildPythonPackage rec {
     # This list is over-zealous because it's more work to disable individual
    # tests within a module.

-    # test_system is a lot of integration-style tests that do a lot of real
-    # networking between many processes. They sometimes fail spuriously.
-    rm src/allmydata/test/test_system.py
-
     # Many of these tests don't properly skip when i2p or tor dependencies are
     # not supplied (and we are not supplying them).
     rm src/allmydata/test/test_i2p_provider.py
     rm src/allmydata/test/test_connections.py
     rm src/allmydata/test/cli/test_create.py
     rm src/allmydata/test/test_client.py
     rm src/allmydata/test/test_runner.py
-
-    # Some eliot code changes behavior based on whether stdout is a tty or not
-    # and fails when it is not.
-    rm src/allmydata/test/test_eliotutil.py
   '';

setup.py (9 lines changed)
@@ -111,7 +111,9 @@ install_requires = [

     # Eliot is contemplating dropping Python 2 support. Stick to a version we
     # know works on Python 2.7.
-    "eliot ~= 1.7",
+    "eliot ~= 1.7 ; python_version < '3.0'",
+    # On Python 3, we want a new enough version to support custom JSON encoders.
+    "eliot >= 1.13.0 ; python_version > '3.0'",

     # Pyrsistent 0.17.0 (which we use by way of Eliot) has dropped
     # Python 2 entirely; stick to the version known to work for us.
@@ -383,10 +385,7 @@ setup(name="tahoe-lafs", # also set in __init__.py
             # this version from time to time, but we will do it
             # intentionally.
             "pyflakes == 2.2.0",
-            # coverage 5.0 breaks the integration tests in some opaque way.
-            # This probably needs to be addressed in a more permanent way
-            # eventually...
-            "coverage ~= 4.5",
+            "coverage ~= 5.0",
             "mock",
             "tox",
             "pytest",
@@ -34,10 +34,10 @@ class Blacklist(object):
         try:
             if self.last_mtime is None or current_mtime > self.last_mtime:
                 self.entries.clear()
-                with open(self.blacklist_fn, "r") as f:
+                with open(self.blacklist_fn, "rb") as f:
                     for line in f:
                         line = line.strip()
-                        if not line or line.startswith("#"):
+                        if not line or line.startswith(b"#"):
                             continue
                         si_s, reason = line.split(None, 1)
                         si = base32.a2b(si_s) # must be valid base32
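
Note: once the file is opened in binary mode, everything the loop compares against must be a bytes literal. A standalone sketch of the same parsing logic, with an invented helper name, assuming only the base32 helper visible in the hunk:

    from allmydata.util import base32

    def parse_blacklist(path):
        entries = {}
        with open(path, "rb") as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith(b"#"):
                    continue  # skip blank lines and comments, as bytes
                si_s, reason = line.split(None, 1)
                entries[base32.a2b(si_s)] = reason  # si_s must be valid base32
        return entries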
@@ -277,7 +277,7 @@ def create_client_from_config(config, _client_factory=None, _introducer_factory=

     i2p_provider = create_i2p_provider(reactor, config)
     tor_provider = create_tor_provider(reactor, config)
-    handlers = node.create_connection_handlers(reactor, config, i2p_provider, tor_provider)
+    handlers = node.create_connection_handlers(config, i2p_provider, tor_provider)
     default_connection_handlers, foolscap_connection_handlers = handlers
     tub_options = node.create_tub_options(config)
@@ -722,7 +722,7 @@ class _Client(node.Node, pollmixin.PollMixin):
     def get_long_nodeid(self):
         # this matches what IServer.get_longname() says about us elsewhere
         vk_string = ed25519.string_from_verifying_key(self._node_public_key)
-        return remove_prefix(vk_string, "pub-")
+        return remove_prefix(vk_string, b"pub-")

     def get_long_tubid(self):
         return idlib.nodeid_b2a(self.nodeid)
@@ -918,10 +918,6 @@ class _Client(node.Node, pollmixin.PollMixin):
         if helper_furl in ("None", ""):
             helper_furl = None

-        # FURLs need to be bytes:
-        if helper_furl is not None:
-            helper_furl = helper_furl.encode("utf-8")
-
         DEP = self.encoding_params
         DEP["k"] = int(self.config.get_config("client", "shares.needed", DEP["k"]))
         DEP["n"] = int(self.config.get_config("client", "shares.total", DEP["n"]))
@@ -1,4 +1,16 @@
-"""Directory Node implementation."""
+"""Directory Node implementation.
+
+Ported to Python 3.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+from __future__ import unicode_literals
+
+from future.utils import PY2
+if PY2:
+    # Skip dict so it doesn't break things.
+    from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, list, object, range, str, max, min  # noqa: F401
+from past.builtins import unicode

 import time
@@ -6,7 +18,6 @@ import time
 from zope.interface import implementer
 from twisted.internet import defer
 from foolscap.api import fireEventually
-import json

 from allmydata.crypto import aes
 from allmydata.deep_stats import DeepStats
@@ -19,7 +30,7 @@ from allmydata.interfaces import IFilesystemNode, IDirectoryNode, IFileNode, \
 from allmydata.check_results import DeepCheckResults, \
      DeepCheckAndRepairResults
 from allmydata.monitor import Monitor
-from allmydata.util import hashutil, base32, log
+from allmydata.util import hashutil, base32, log, jsonbytes as json
 from allmydata.util.encodingutil import quote_output, normalize
 from allmydata.util.assertutil import precondition
 from allmydata.util.netstring import netstring, split_netstring
@@ -37,6 +48,8 @@ from eliot.twisted import (

 NAME = Field.for_types(
     u"name",
+    # Make sure this works on Python 2; with str, it gets Future str which
+    # breaks Eliot.
     [unicode],
     u"The name linking the parent to this node.",
 )
@@ -179,7 +192,7 @@ class Adder(object):
     def modify(self, old_contents, servermap, first_time):
         children = self.node._unpack_contents(old_contents)
         now = time.time()
-        for (namex, (child, new_metadata)) in self.entries.iteritems():
+        for (namex, (child, new_metadata)) in list(self.entries.items()):
             name = normalize(namex)
             precondition(IFilesystemNode.providedBy(child), child)
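
(``dict.iteritems()`` no longer exists on Python 3; wrapping ``items()`` in ``list()`` materializes the view, the usual porting idiom when the dict may be mutated while iterating. A tiny illustration, not from the source:)

    d = {"a": 1}
    for k, v in list(d.items()):  # safe even if d is mutated inside the loop
        d[k * 2] = v + 1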
@@ -205,8 +218,8 @@ class Adder(object):
         return new_contents

 def _encrypt_rw_uri(writekey, rw_uri):
-    precondition(isinstance(rw_uri, str), rw_uri)
-    precondition(isinstance(writekey, str), writekey)
+    precondition(isinstance(rw_uri, bytes), rw_uri)
+    precondition(isinstance(writekey, bytes), writekey)

     salt = hashutil.mutable_rwcap_salt_hash(rw_uri)
     key = hashutil.mutable_rwcap_key_hash(salt, writekey)
@@ -221,7 +234,7 @@ def _encrypt_rw_uri(writekey, rw_uri):
 def pack_children(childrenx, writekey, deep_immutable=False):
     # initial_children must have metadata (i.e. {} instead of None)
     children = {}
-    for (namex, (node, metadata)) in childrenx.iteritems():
+    for (namex, (node, metadata)) in list(childrenx.items()):
         precondition(isinstance(metadata, dict),
                      "directory creation requires metadata to be a dict, not None", metadata)
         children[normalize(namex)] = (node, metadata)
@@ -245,18 +258,19 @@ def _pack_normalized_children(children, writekey, deep_immutable=False):
     If deep_immutable is True, I will require that all my children are deeply
     immutable, and will raise a MustBeDeepImmutableError if not.
     """
-    precondition((writekey is None) or isinstance(writekey, str), writekey)
+    precondition((writekey is None) or isinstance(writekey, bytes), writekey)

     has_aux = isinstance(children, AuxValueDict)
     entries = []
     for name in sorted(children.keys()):
-        assert isinstance(name, unicode)
+        assert isinstance(name, str)
         entry = None
         (child, metadata) = children[name]
         child.raise_error()
         if deep_immutable and not child.is_allowed_in_immutable_directory():
-            raise MustBeDeepImmutableError("child %s is not allowed in an immutable directory" %
-                                           quote_output(name, encoding='utf-8'), name)
+            raise MustBeDeepImmutableError(
+                "child %r is not allowed in an immutable directory" % (name,),
+                name)
         if has_aux:
             entry = children.get_aux(name)
         if not entry:
@@ -264,26 +278,26 @@ def _pack_normalized_children(children, writekey, deep_immutable=False):
             assert isinstance(metadata, dict)
             rw_uri = child.get_write_uri()
             if rw_uri is None:
-                rw_uri = ""
-            assert isinstance(rw_uri, str), rw_uri
+                rw_uri = b""
+            assert isinstance(rw_uri, bytes), rw_uri

             # should be prevented by MustBeDeepImmutableError check above
             assert not (rw_uri and deep_immutable)

             ro_uri = child.get_readonly_uri()
             if ro_uri is None:
-                ro_uri = ""
-            assert isinstance(ro_uri, str), ro_uri
+                ro_uri = b""
+            assert isinstance(ro_uri, bytes), ro_uri
             if writekey is not None:
                 writecap = netstring(_encrypt_rw_uri(writekey, rw_uri))
             else:
                 writecap = ZERO_LEN_NETSTR
-            entry = "".join([netstring(name.encode("utf-8")),
+            entry = b"".join([netstring(name.encode("utf-8")),
                              netstring(strip_prefix_for_ro(ro_uri, deep_immutable)),
                              writecap,
-                             netstring(json.dumps(metadata))])
+                             netstring(json.dumps(metadata).encode("utf-8"))])
         entries.append(netstring(entry))
-    return "".join(entries)
+    return b"".join(entries)

 @implementer(IDirectoryNode, ICheckable, IDeepCheckable)
 class DirectoryNode(object):
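
(Each packed entry is a netstring of concatenated netstrings, which is why every piece joined above must already be bytes. A minimal stand-in for ``allmydata.util.netstring.netstring``, assumed here for illustration rather than taken from the module, would be:)

    def netstring(data):
        # length-prefixed framing: b"abc" -> b"3:abc,"
        assert isinstance(data, bytes)
        return b"%d:%s," % (len(data), data)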
@@ -352,9 +366,9 @@ class DirectoryNode(object):
         # cleartext. The 'name' is UTF-8 encoded, and should be normalized to NFC.
         # The rwcapdata is formatted as:
         # pack("16ss32s", iv, AES(H(writekey+iv), plaintext_rw_uri), mac)
-        assert isinstance(data, str), (repr(data), type(data))
+        assert isinstance(data, bytes), (repr(data), type(data))
         # an empty directory is serialized as an empty string
-        if data == "":
+        if data == b"":
             return AuxValueDict()
         writeable = not self.is_readonly()
         mutable = self.is_mutable()
@@ -373,7 +387,7 @@ class DirectoryNode(object):
         # Therefore we normalize names going both in and out of directories.
         name = normalize(namex_utf8.decode("utf-8"))

-        rw_uri = ""
+        rw_uri = b""
         if writeable:
             rw_uri = self._decrypt_rwcapdata(rwcapdata)

@@ -384,8 +398,8 @@ class DirectoryNode(object):
         # ro_uri is treated in the same way for consistency.
         # rw_uri and ro_uri will be either None or a non-empty string.

-        rw_uri = rw_uri.rstrip(' ') or None
-        ro_uri = ro_uri.rstrip(' ') or None
+        rw_uri = rw_uri.rstrip(b' ') or None
+        ro_uri = ro_uri.rstrip(b' ') or None

         try:
             child = self._create_and_validate_node(rw_uri, ro_uri, name)
@@ -468,7 +482,7 @@ class DirectoryNode(object):
         exists a child of the given name, False if not."""
         name = normalize(namex)
         d = self._read()
-        d.addCallback(lambda children: children.has_key(name))
+        d.addCallback(lambda children: name in children)
         return d

     def _get(self, children, name):
@@ -543,7 +557,7 @@ class DirectoryNode(object):
         else:
             pathx = pathx.split("/")
         for p in pathx:
-            assert isinstance(p, unicode), p
+            assert isinstance(p, str), p
         childnamex = pathx[0]
         remaining_pathx = pathx[1:]
         if remaining_pathx:
@@ -555,8 +569,8 @@ class DirectoryNode(object):
         return d

     def set_uri(self, namex, writecap, readcap, metadata=None, overwrite=True):
-        precondition(isinstance(writecap, (str,type(None))), writecap)
-        precondition(isinstance(readcap, (str,type(None))), readcap)
+        precondition(isinstance(writecap, (bytes, type(None))), writecap)
+        precondition(isinstance(readcap, (bytes, type(None))), readcap)

         # We now allow packing unknown nodes, provided they are valid
         # for this type of directory.
@@ -569,16 +583,16 @@ class DirectoryNode(object):
         # this takes URIs
         a = Adder(self, overwrite=overwrite,
                   create_readonly_node=self._create_readonly_node)
-        for (namex, e) in entries.iteritems():
-            assert isinstance(namex, unicode), namex
+        for (namex, e) in entries.items():
+            assert isinstance(namex, str), namex
             if len(e) == 2:
                 writecap, readcap = e
                 metadata = None
             else:
                 assert len(e) == 3
                 writecap, readcap, metadata = e
-            precondition(isinstance(writecap, (str,type(None))), writecap)
-            precondition(isinstance(readcap, (str,type(None))), readcap)
+            precondition(isinstance(writecap, (bytes,type(None))), writecap)
+            precondition(isinstance(readcap, (bytes,type(None))), readcap)

         # We now allow packing unknown nodes, provided they are valid
         # for this type of directory.
@@ -779,7 +793,7 @@ class DirectoryNode(object):
         # in the nodecache) seem to consume about 2000 bytes.
         dirkids = []
         filekids = []
-        for name, (child, metadata) in sorted(children.iteritems()):
+        for name, (child, metadata) in sorted(children.items()):
             childpath = path + [name]
             if isinstance(child, UnknownNode):
                 walker.add_node(child, childpath)
@@ -9,6 +9,7 @@ from __future__ import unicode_literals
 from future.utils import PY2
 if PY2:
     from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401
+from six import ensure_str

 import time
 now = time.time
@@ -98,7 +99,7 @@ class ShareFinder(object):

     # internal methods
     def loop(self):
-        pending_s = ",".join([rt.server.get_name()
+        pending_s = ",".join([ensure_str(rt.server.get_name())
                               for rt in self.pending_requests]) # sort?
         self.log(format="ShareFinder loop: running=%(running)s"
                  " hungry=%(hungry)s, pending=%(pending)s",
@@ -255,11 +255,11 @@ class Encoder(object):
             # captures the slot, not the value
             #d.addCallback(lambda res: self.do_segment(i))
             # use this form instead:
-            d.addCallback(lambda res, i=i: self._encode_segment(i))
+            d.addCallback(lambda res, i=i: self._encode_segment(i, is_tail=False))
             d.addCallback(self._send_segment, i)
             d.addCallback(self._turn_barrier)
         last_segnum = self.num_segments - 1
-        d.addCallback(lambda res: self._encode_tail_segment(last_segnum))
+        d.addCallback(lambda res: self._encode_segment(last_segnum, is_tail=True))
         d.addCallback(self._send_segment, last_segnum)
         d.addCallback(self._turn_barrier)

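
(The "captures the slot, not the value" comment refers to Python's late-binding closures; the ``i=i`` default argument is the standard fix. A standalone illustration, not from the source:)

    fs = [lambda: i for i in range(3)]
    assert [f() for f in fs] == [2, 2, 2]    # every closure sees the final i
    fs = [lambda i=i: i for i in range(3)]
    assert [f() for f in fs] == [0, 1, 2]    # defaults bind at definition time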
@@ -317,8 +317,24 @@ class Encoder(object):
             dl.append(d)
         return self._gather_responses(dl)

-    def _encode_segment(self, segnum):
-        codec = self._codec
+    def _encode_segment(self, segnum, is_tail):
+        """
+        Encode one segment of input into the configured number of shares.
+
+        :param segnum: Ostensibly, the number of the segment to encode. In
+            reality, this parameter is ignored and the *next* segment is
+            encoded and returned.
+
+        :param bool is_tail: ``True`` if this is the last segment, ``False``
+            otherwise.
+
+        :return: A ``Deferred`` which fires with a two-tuple. The first
+            element is a list of string-y objects representing the encoded
+            segment data for one of the shares. The second element is a list
+            of integers giving the share numbers of the shares in the first
+            element.
+        """
+        codec = self._tail_codec if is_tail else self._codec
         start = time.time()

         # the ICodecEncoder API wants to receive a total of self.segment_size
@@ -350,9 +366,11 @@ class Encoder(object):
         # footprint to 430KiB at the expense of more hash-tree overhead.

         d = self._gather_data(self.required_shares, input_piece_size,
-                              crypttext_segment_hasher)
+                              crypttext_segment_hasher, allow_short=is_tail)
         def _done_gathering(chunks):
             for c in chunks:
+                # If is_tail then a short trailing chunk will have been padded
+                # by _gather_data
                 assert len(c) == input_piece_size
             self._crypttext_hashes.append(crypttext_segment_hasher.digest())
         # during this call, we hit 5*segsize memory
@@ -365,31 +383,6 @@ class Encoder(object):
         d.addCallback(_done)
         return d

-    def _encode_tail_segment(self, segnum):
-
-        start = time.time()
-        codec = self._tail_codec
-        input_piece_size = codec.get_block_size()
-
-        crypttext_segment_hasher = hashutil.crypttext_segment_hasher()
-
-        d = self._gather_data(self.required_shares, input_piece_size,
-                              crypttext_segment_hasher, allow_short=True)
-        def _done_gathering(chunks):
-            for c in chunks:
-                # a short trailing chunk will have been padded by
-                # _gather_data
-                assert len(c) == input_piece_size
-            self._crypttext_hashes.append(crypttext_segment_hasher.digest())
-            return codec.encode(chunks)
-        d.addCallback(_done_gathering)
-        def _done(res):
-            elapsed = time.time() - start
-            self._times["cumulative_encoding"] += elapsed
-            return res
-        d.addCallback(_done)
-        return d
-
     def _gather_data(self, num_chunks, input_chunk_size,
                      crypttext_segment_hasher,
                      allow_short=False):
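
(The unified ``_encode_segment`` relies on ``allow_short=True`` padding the final chunk up to the codec block size, exactly as the removed ``_encode_tail_segment`` did via the same flag. The padding idea in isolation, as a sketch rather than the project's code:)

    def pad_to_block(chunk, block_size):
        # a short tail chunk is extended with NUL bytes so every chunk
        # handed to the codec has the same length
        return chunk + b"\x00" * (block_size - len(chunk))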
@@ -11,6 +11,7 @@ from future.utils import PY2, native_str
 if PY2:
     from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401
 from past.builtins import long, unicode
+from six import ensure_str

 import os, time, weakref, itertools
 from zope.interface import implementer
@@ -1825,7 +1826,7 @@ class Uploader(service.MultiService, log.PrefixingLogMixin):
     def startService(self):
         service.MultiService.startService(self)
         if self._helper_furl:
-            self.parent.tub.connectTo(self._helper_furl,
+            self.parent.tub.connectTo(ensure_str(self._helper_furl),
                                       self._got_helper)

     def _got_helper(self, helper):
@@ -3145,3 +3145,24 @@ class IAnnounceableStorageServer(Interface):
         :type: ``IReferenceable`` provider
         """
     )
+
+
+class IAddressFamily(Interface):
+    """
+    Support for one specific address family.
+
+    This stretches the definition of address family to include things like Tor
+    and I2P.
+    """
+    def get_listener():
+        """
+        Return a string endpoint description or an ``IStreamServerEndpoint``.
+
+        This would be named ``get_server_endpoint`` if not for historical
+        reasons.
+        """
+
+    def get_client_endpoint():
+        """
+        Return an ``IStreamClientEndpoint``.
+        """
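
(For orientation, a zope.interface provider of this new interface might look like the following; the class and its endpoint string are invented purely for illustration:)

    from zope.interface import implementer

    @implementer(IAddressFamily)
    class PlainTCP(object):
        def get_listener(self):
            # a Twisted server endpoint description string
            return "tcp:3456:interface=127.0.0.1"

        def get_client_endpoint(self):
            # returning None here would mean "no client handler available"
            return None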
@@ -11,12 +11,12 @@ if PY2:
     from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401
 from past.builtins import long

-from six import ensure_text
+from six import ensure_text, ensure_str

 import time
 from zope.interface import implementer
 from twisted.application import service
-from foolscap.api import Referenceable, eventually
+from foolscap.api import Referenceable
 from allmydata.interfaces import InsufficientVersionError
 from allmydata.introducer.interfaces import IIntroducerClient, \
      RIIntroducerSubscriberClient_v2
@@ -24,6 +24,9 @@ from allmydata.introducer.common import sign_to_foolscap, unsign_from_foolscap,\
      get_tubid_string_from_ann
 from allmydata.util import log, yamlutil, connection_status
 from allmydata.util.rrefutil import add_version_to_remote_reference
+from allmydata.util.observer import (
+    ObserverList,
+)
 from allmydata.crypto.error import BadSignature
 from allmydata.util.assertutil import precondition
@@ -39,8 +42,6 @@ class IntroducerClient(service.Service, Referenceable):
                  nickname, my_version, oldest_supported,
                  sequencer, cache_filepath):
         self._tub = tub
-        if isinstance(introducer_furl, str):
-            introducer_furl = introducer_furl.encode("utf-8")
         self.introducer_furl = introducer_furl

         assert isinstance(nickname, str)
@@ -64,8 +65,7 @@ class IntroducerClient(service.Service, Referenceable):
         self._publisher = None
         self._since = None

-        self._local_subscribers = [] # (servicename,cb,args,kwargs) tuples
-        self._subscribed_service_names = set()
+        self._local_subscribers = {} # {servicename: ObserverList}
         self._subscriptions = set() # requests we've actually sent

         # _inbound_announcements remembers one announcement per
@@ -96,7 +96,7 @@ class IntroducerClient(service.Service, Referenceable):
     def startService(self):
         service.Service.startService(self)
         self._introducer_error = None
-        rc = self._tub.connectTo(self.introducer_furl, self._got_introducer)
+        rc = self._tub.connectTo(ensure_str(self.introducer_furl), self._got_introducer)
         self._introducer_reconnector = rc
         def connect_failed(failure):
             self.log("Initial Introducer connection failed: perhaps it's down",
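
(Several hunks in this commit wrap fURLs in ``six.ensure_str`` before handing them to Foolscap, which wants native strings. For reference, ``ensure_str`` decodes bytes on Python 3 and encodes text on Python 2:)

    from six import ensure_str
    furl = b"pb://key@tcp:example:1234/swissnum"
    assert ensure_str(furl) == "pb://key@tcp:example:1234/swissnum"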
@@ -179,21 +179,21 @@ class IntroducerClient(service.Service, Referenceable):
         return log.msg(*args, **kwargs)

     def subscribe_to(self, service_name, cb, *args, **kwargs):
-        self._local_subscribers.append( (service_name,cb,args,kwargs) )
-        self._subscribed_service_names.add(service_name)
+        obs = self._local_subscribers.setdefault(service_name, ObserverList())
+        obs.subscribe(lambda key_s, ann: cb(key_s, ann, *args, **kwargs))
         self._maybe_subscribe()
         for index,(ann,key_s,when) in list(self._inbound_announcements.items()):
             precondition(isinstance(key_s, bytes), key_s)
             servicename = index[0]
             if servicename == service_name:
-                eventually(cb, key_s, ann, *args, **kwargs)
+                obs.notify(key_s, ann)

     def _maybe_subscribe(self):
         if not self._publisher:
             self.log("want to subscribe, but no introducer yet",
                      level=log.NOISY)
             return
-        for service_name in self._subscribed_service_names:
+        for service_name in self._local_subscribers:
             if service_name in self._subscriptions:
                 continue
             self._subscriptions.add(service_name)
@@ -272,7 +272,7 @@ class IntroducerClient(service.Service, Referenceable):
         precondition(isinstance(key_s, bytes), key_s)
         self._debug_counts["inbound_announcement"] += 1
         service_name = str(ann["service-name"])
-        if service_name not in self._subscribed_service_names:
+        if service_name not in self._local_subscribers:
             self.log("announcement for a service we don't care about [%s]"
                      % (service_name,), level=log.UNUSUAL, umid="dIpGNA")
             self._debug_counts["wrong_service"] += 1
@@ -343,9 +343,9 @@ class IntroducerClient(service.Service, Referenceable):
     def _deliver_announcements(self, key_s, ann):
         precondition(isinstance(key_s, bytes), key_s)
         service_name = str(ann["service-name"])
-        for (service_name2,cb,args,kwargs) in self._local_subscribers:
-            if service_name2 == service_name:
-                eventually(cb, key_s, ann, *args, **kwargs)
+        obs = self._local_subscribers.get(service_name)
+        if obs is not None:
+            obs.notify(key_s, ann)

     def connection_status(self):
         assert self.running # startService builds _introducer_reconnector
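
(The refactor replaces hand-rolled (service_name, cb, args, kwargs) tuples with one ObserverList per service name. Assuming the subscribe/notify API used in the hunks above, the pattern in isolation is:)

    subscribers = {}
    obs = subscribers.setdefault("storage", ObserverList())
    obs.subscribe(lambda key_s, ann: print("announcement:", key_s))
    obs.notify(b"v0-aaaa", {"service-name": "storage"})  # fires each callback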
@@ -70,7 +70,7 @@ def create_introducer(basedir=u"."):
         i2p_provider = create_i2p_provider(reactor, config)
         tor_provider = create_tor_provider(reactor, config)

-        default_connection_handlers, foolscap_connection_handlers = create_connection_handlers(reactor, config, i2p_provider, tor_provider)
+        default_connection_handlers, foolscap_connection_handlers = create_connection_handlers(config, i2p_provider, tor_provider)
         tub_options = create_tub_options(config)

         # we don't remember these because the Introducer doesn't make
@@ -671,28 +671,20 @@ def _make_tcp_handler():
     return default()


-def create_connection_handlers(reactor, config, i2p_provider, tor_provider):
+def create_default_connection_handlers(config, handlers):
     """
-    :returns: 2-tuple of default_connection_handlers, foolscap_connection_handlers
+    :return: A dictionary giving the default connection handlers. The keys
+        are strings like "tcp" and the values are strings like "tor" or
+        ``None``.
     """
     reveal_ip = config.get_config("node", "reveal-IP-address", True, boolean=True)

-    # We store handlers for everything. None means we were unable to
-    # create that handler, so hints which want it will be ignored.
-    handlers = foolscap_connection_handlers = {
-        "tcp": _make_tcp_handler(),
-        "tor": tor_provider.get_tor_handler(),
-        "i2p": i2p_provider.get_i2p_handler(),
-    }
-    log.msg(
-        format="built Foolscap connection handlers for: %(known_handlers)s",
-        known_handlers=sorted([k for k,v in handlers.items() if v]),
-        facility="tahoe.node",
-        umid="PuLh8g",
-    )
-
-    # then we remember the default mappings from tahoe.cfg
-    default_connection_handlers = {"tor": "tor", "i2p": "i2p"}
+    # Remember the default mappings from tahoe.cfg
+    default_connection_handlers = {
+        name: name
+        for name
+        in handlers
+    }
     tcp_handler_name = config.get_config("connections", "tcp", "tcp").lower()
     if tcp_handler_name == "disabled":
         default_connection_handlers["tcp"] = None
@@ -717,10 +709,35 @@ def create_connection_handlers(reactor, config, i2p_provider, tor_provider):

     if not reveal_ip:
         if default_connection_handlers.get("tcp") == "tcp":
-            raise PrivacyError("tcp = tcp, must be set to 'tor' or 'disabled'")
-    return default_connection_handlers, foolscap_connection_handlers
+            raise PrivacyError(
+                "Privacy requested with `reveal-IP-address = false` "
+                "but `tcp = tcp` conflicts with this.",
+            )
+    return default_connection_handlers
+
+
+def create_connection_handlers(config, i2p_provider, tor_provider):
+    """
+    :returns: 2-tuple of default_connection_handlers, foolscap_connection_handlers
+    """
+    # We store handlers for everything. None means we were unable to
+    # create that handler, so hints which want it will be ignored.
+    handlers = {
+        "tcp": _make_tcp_handler(),
+        "tor": tor_provider.get_client_endpoint(),
+        "i2p": i2p_provider.get_client_endpoint(),
+    }
+    log.msg(
+        format="built Foolscap connection handlers for: %(known_handlers)s",
+        known_handlers=sorted([k for k,v in handlers.items() if v]),
+        facility="tahoe.node",
+        umid="PuLh8g",
+    )
+    return create_default_connection_handlers(
+        config,
+        handlers,
+    ), handlers


 def create_tub(tub_options, default_connection_handlers, foolscap_connection_handlers,
                handler_overrides={}, **kwargs):
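
(To make the split concrete: with ``tcp = tor`` under ``[connections]`` in tahoe.cfg, the default-handler mapping built here ends up as follows. Handler objects are replaced by stand-ins; this is illustrative, not the project's code:)

    handlers = {"tcp": object(), "tor": object(), "i2p": object()}
    defaults = {name: name for name in handlers}
    defaults["tcp"] = "tor"  # effect of "[connections] tcp = tor"
    assert defaults == {"tcp": "tor", "tor": "tor", "i2p": "i2p"}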
@@ -760,8 +777,21 @@ def _convert_tub_port(s):
     return us


-def _tub_portlocation(config):
+class PortAssignmentRequired(Exception):
+    """
+    A Tub port number was configured to be 0 where this is not allowed.
+    """
+
+
+def _tub_portlocation(config, get_local_addresses_sync, allocate_tcp_port):
     """
+    Figure out the network location of the main tub for some configuration.
+
+    :param get_local_addresses_sync: A function like
+        ``iputil.get_local_addresses_sync``.
+
+    :param allocate_tcp_port: A function like ``iputil.allocate_tcp_port``.
+
     :returns: None or tuple of (port, location) for the main tub based
         on the given configuration. May raise ValueError or PrivacyError
         if there are problems with the config
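
(Passing the two iputil functions in as parameters is a dependency-injection refactor: tests can exercise the port/location logic without touching the real network. A hypothetical call with deterministic stand-ins, bearing in mind the function may also return None when no listener is configured:)

    portlocation = _tub_portlocation(
        config,
        lambda: ["192.0.2.1"],  # pretend local addresses
        lambda: 45678,          # pretend allocated TCP port
    )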
@@ -801,7 +831,7 @@ def _tub_portlocation(config):
             file_tubport = fileutil.read(config.portnum_fname).strip()
             tubport = _convert_tub_port(file_tubport)
         else:
-            tubport = "tcp:%d" % iputil.allocate_tcp_port()
+            tubport = "tcp:%d" % (allocate_tcp_port(),)
             fileutil.write_atomically(config.portnum_fname, tubport + "\n",
                                       mode="")
     else:
@@ -809,7 +839,7 @@ def _tub_portlocation(config):

     for port in tubport.split(","):
         if port in ("0", "tcp:0"):
-            raise ValueError("tub.port cannot be 0: you must choose")
+            raise PortAssignmentRequired()

     if cfg_location is None:
         cfg_location = "AUTO"
@@ -821,7 +851,7 @@ def _tub_portlocation(config):
     if "AUTO" in split_location:
         if not reveal_ip:
             raise PrivacyError("tub.location uses AUTO")
-        local_addresses = iputil.get_local_addresses_sync()
+        local_addresses = get_local_addresses_sync()
         # tubport must be like "tcp:12345" or "tcp:12345:morestuff"
         local_portnum = int(tubport.split(":")[1])
     new_locations = []
@@ -852,6 +882,33 @@ def _tub_portlocation(config):
     return tubport, location


+def tub_listen_on(i2p_provider, tor_provider, tub, tubport, location):
+    """
+    Assign a Tub its listener locations.
+
+    :param i2p_provider: See ``allmydata.util.i2p_provider.create``.
+    :param tor_provider: See ``allmydata.util.tor_provider.create``.
+    """
+    for port in tubport.split(","):
+        if port == "listen:i2p":
+            # the I2P provider will read its section of tahoe.cfg and
+            # return either a fully-formed Endpoint, or a descriptor
+            # that will create one, so we don't have to stuff all the
+            # options into the tub.port string (which would need a lot
+            # of escaping)
+            port_or_endpoint = i2p_provider.get_listener()
+        elif port == "listen:tor":
+            port_or_endpoint = tor_provider.get_listener()
+        else:
+            port_or_endpoint = port
+        # Foolscap requires native strings:
+        if isinstance(port_or_endpoint, (bytes, str)):
+            port_or_endpoint = ensure_str(port_or_endpoint)
+        tub.listenOn(port_or_endpoint)
+    # This last step makes the Tub ready for tub.registerReference()
+    tub.setLocation(location)
+
+
 def create_main_tub(config, tub_options,
                     default_connection_handlers, foolscap_connection_handlers,
                     i2p_provider, tor_provider,
@@ -876,36 +933,34 @@ def create_main_tub(config, tub_options,
     :param tor_provider: None, or a _Provider instance if txtorcon +
         Tor are installed.
     """
-    portlocation = _tub_portlocation(config)
+    portlocation = _tub_portlocation(
+        config,
+        iputil.get_local_addresses_sync,
+        iputil.allocate_tcp_port,
+    )

-    certfile = config.get_private_path("node.pem") # FIXME? "node.pem" was the CERTFILE option/thing
-    tub = create_tub(tub_options, default_connection_handlers, foolscap_connection_handlers,
-                     handler_overrides=handler_overrides, certFile=certfile)
+    # FIXME? "node.pem" was the CERTFILE option/thing
+    certfile = config.get_private_path("node.pem")

-    if portlocation:
-        tubport, location = portlocation
-        for port in tubport.split(","):
-            if port == "listen:i2p":
-                # the I2P provider will read its section of tahoe.cfg and
-                # return either a fully-formed Endpoint, or a descriptor
-                # that will create one, so we don't have to stuff all the
-                # options into the tub.port string (which would need a lot
-                # of escaping)
-                port_or_endpoint = i2p_provider.get_listener()
-            elif port == "listen:tor":
-                port_or_endpoint = tor_provider.get_listener()
-            else:
-                port_or_endpoint = port
-            # Foolscap requires native strings:
-            if isinstance(port_or_endpoint, (bytes, str)):
-                port_or_endpoint = ensure_str(port_or_endpoint)
-            tub.listenOn(port_or_endpoint)
-        tub.setLocation(location)
-        log.msg("Tub location set to %s" % (location,))
-        # the Tub is now ready for tub.registerReference()
-    else:
+    tub = create_tub(
+        tub_options,
+        default_connection_handlers,
+        foolscap_connection_handlers,
+        handler_overrides=handler_overrides,
+        certFile=certfile,
+    )
+    if portlocation is None:
         log.msg("Tub is not listening")
+    else:
+        tubport, location = portlocation
+        tub_listen_on(
+            i2p_provider,
+            tor_provider,
+            tub,
+            tubport,
+            location,
+        )
+        log.msg("Tub location set to %s" % (location,))
     return tub

@@ -1,3 +1,15 @@
+"""
+Ported to Python 3.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+from __future__ import unicode_literals
+
+from future.utils import PY2
+if PY2:
+    from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401
+
 import weakref
 from zope.interface import implementer
 from allmydata.util.assertutil import precondition
@@ -126,7 +138,7 @@ class NodeMaker(object):

     def create_new_mutable_directory(self, initial_children={}, version=None):
         # initial_children must have metadata (i.e. {} instead of None)
-        for (name, (node, metadata)) in initial_children.iteritems():
+        for (name, (node, metadata)) in initial_children.items():
             precondition(isinstance(metadata, dict),
                          "create_new_mutable_directory requires metadata to be a dict, not None", metadata)
             node.raise_error()
@@ -37,7 +37,7 @@ class BaseOptions(usage.Options):
         super(BaseOptions, self).__init__()
         self.command_name = os.path.basename(sys.argv[0])

-    # Only allow "tahoe --version", not e.g. "tahoe start --version"
+    # Only allow "tahoe --version", not e.g. "tahoe <cmd> --version"
     def opt_version(self):
         raise usage.UsageError("--version not allowed on subcommands")

@@ -1,5 +1,7 @@
 from __future__ import print_function

+from future.utils import bchr
+
 # do not import any allmydata modules at this level. Do that from inside
 # individual functions instead.
 import struct, time, os, sys
@@ -905,7 +907,7 @@ def corrupt_share(options):
         f = open(fn, "rb+")
         f.seek(offset)
         d = f.read(1)
-        d = chr(ord(d) ^ 0x01)
+        d = bchr(ord(d) ^ 0x01)
         f.seek(offset)
         f.write(d)
         f.close()
@@ -920,7 +922,7 @@ def corrupt_share(options):
         f.seek(m.DATA_OFFSET)
         data = f.read(2000)
         # make sure this slot contains an SDMF share
-        assert data[0] == b"\x00", "non-SDMF mutable shares not supported"
+        assert data[0:1] == b"\x00", "non-SDMF mutable shares not supported"
         f.close()

         (version, ig_seqnum, ig_roothash, ig_IV, ig_k, ig_N, ig_segsize,
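
(Both changes address the same Python 3 pitfall: indexing bytes yields an int, so ``data[0] == b"\x00"`` is always False there, and ``chr()`` returns text rather than a byte. Slicing and future's ``bchr`` keep the code byte-oriented on both versions:)

    from future.utils import bchr
    data = b"\x00abc"
    assert data[0] == 0          # int on Python 3
    assert data[0:1] == b"\x00"  # slicing stays bytes
    assert bchr(0x41) == b"A"    # bchr: int -> one-byte bytes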
@@ -1,261 +0,0 @@
-from __future__ import print_function
-
-import os, sys
-from allmydata.scripts.common import BasedirOptions
-from twisted.scripts import twistd
-from twisted.python import usage
-from twisted.python.reflect import namedAny
-from twisted.internet.defer import maybeDeferred, fail
-from twisted.application.service import Service
-
-from allmydata.scripts.default_nodedir import _default_nodedir
-from allmydata.util import fileutil
-from allmydata.util.encodingutil import listdir_unicode, quote_local_unicode_path
-from allmydata.util.configutil import UnknownConfigError
-from allmydata.util.deferredutil import HookMixin
-
-
-def get_pidfile(basedir):
-    """
-    Returns the path to the PID file.
-    :param basedir: the node's base directory
-    :returns: the path to the PID file
-    """
-    return os.path.join(basedir, u"twistd.pid")
-
-def get_pid_from_pidfile(pidfile):
-    """
-    Tries to read and return the PID stored in the node's PID file
-    (twistd.pid).
-    :param pidfile: try to read this PID file
-    :returns: A numeric PID on success, ``None`` if PID file absent or
-        inaccessible, ``-1`` if PID file invalid.
-    """
-    try:
-        with open(pidfile, "r") as f:
-            pid = f.read()
-    except EnvironmentError:
-        return None
-
-    try:
-        pid = int(pid)
-    except ValueError:
-        return -1
-
-    return pid
-
-def identify_node_type(basedir):
-    """
-    :return unicode: None or one of: 'client', 'introducer', or
-        'key-generator'
-    """
-    tac = u''
-    try:
-        for fn in listdir_unicode(basedir):
-            if fn.endswith(u".tac"):
-                tac = fn
-                break
-    except OSError:
-        return None
-
-    for t in (u"client", u"introducer", u"key-generator"):
-        if t in tac:
-            return t
-    return None
-
-
-class RunOptions(BasedirOptions):
-    optParameters = [
-        ("basedir", "C", None,
-         "Specify which Tahoe base directory should be used."
-         " This has the same effect as the global --node-directory option."
-         " [default: %s]" % quote_local_unicode_path(_default_nodedir)),
-    ]
-
-    def parseArgs(self, basedir=None, *twistd_args):
-        # This can't handle e.g. 'tahoe start --nodaemon', since '--nodaemon'
-        # looks like an option to the tahoe subcommand, not to twistd. So you
-        # can either use 'tahoe start' or 'tahoe start NODEDIR
-        # --TWISTD-OPTIONS'. Note that 'tahoe --node-directory=NODEDIR start
-        # --TWISTD-OPTIONS' also isn't allowed, unfortunately.
-
-        BasedirOptions.parseArgs(self, basedir)
-        self.twistd_args = twistd_args
-
-    def getSynopsis(self):
-        return ("Usage: %s [global-options] %s [options]"
-                " [NODEDIR [twistd-options]]"
-                % (self.command_name, self.subcommand_name))
-
-    def getUsage(self, width=None):
-        t = BasedirOptions.getUsage(self, width) + "\n"
-        twistd_options = str(MyTwistdConfig()).partition("\n")[2].partition("\n\n")[0]
-        t += twistd_options.replace("Options:", "twistd-options:", 1)
-        t += """
-
-Note that if any twistd-options are used, NODEDIR must be specified explicitly
-(not by default or using -C/--basedir or -d/--node-directory), and followed by
-the twistd-options.
-"""
-        return t
-
-
-class MyTwistdConfig(twistd.ServerOptions):
-    subCommands = [("DaemonizeTahoeNode", None, usage.Options, "node")]
-
-    stderr = sys.stderr
-
-
-class DaemonizeTheRealService(Service, HookMixin):
-    """
-    this HookMixin should really be a helper; our hooks:
-
-    - 'running': triggered when startup has completed; it triggers
-        with None if successful or a Failure otherwise.
-    """
-    stderr = sys.stderr
-
-    def __init__(self, nodetype, basedir, options):
-        super(DaemonizeTheRealService, self).__init__()
-        self.nodetype = nodetype
-        self.basedir = basedir
-        # setup for HookMixin
-        self._hooks = {
-            "running": None,
-        }
-        self.stderr = options.parent.stderr
-
-    def startService(self):
-
-        def key_generator_removed():
-            return fail(ValueError("key-generator support removed, see #2783"))
-
-        def start():
-            node_to_instance = {
-                u"client": lambda: maybeDeferred(namedAny("allmydata.client.create_client"), self.basedir),
-                u"introducer": lambda: maybeDeferred(namedAny("allmydata.introducer.server.create_introducer"), self.basedir),
-                u"key-generator": key_generator_removed,
-            }
-
-            try:
-                service_factory = node_to_instance[self.nodetype]
-            except KeyError:
-                raise ValueError("unknown nodetype %s" % self.nodetype)
-
-            def handle_config_error(fail):
-                if fail.check(UnknownConfigError):
-                    self.stderr.write("\nConfiguration error:\n{}\n\n".format(fail.value))
-                else:
-                    self.stderr.write("\nUnknown error\n")
-                    fail.printTraceback(self.stderr)
-                reactor.stop()
-
-            d = service_factory()
-
-            def created(srv):
-                srv.setServiceParent(self.parent)
-            d.addCallback(created)
-            d.addErrback(handle_config_error)
-            d.addBoth(self._call_hook, 'running')
-            return d
-
-        from twisted.internet import reactor
-        reactor.callWhenRunning(start)
-
-
-class DaemonizeTahoeNodePlugin(object):
-    tapname = "tahoenode"
-    def __init__(self, nodetype, basedir):
-        self.nodetype = nodetype
-        self.basedir = basedir
-
-    def makeService(self, so):
-        return DaemonizeTheRealService(self.nodetype, self.basedir, so)
-
-
-def run(config):
-    """
-    Runs a Tahoe-LAFS node in the foreground.
-
-    Sets up the IService instance corresponding to the type of node
-    that's starting and uses Twisted's twistd runner to disconnect our
-    process from the terminal.
-    """
-    out = config.stdout
-    err = config.stderr
-    basedir = config['basedir']
-    quoted_basedir = quote_local_unicode_path(basedir)
-    print("'tahoe {}' in {}".format(config.subcommand_name, quoted_basedir), file=out)
-    if not os.path.isdir(basedir):
-        print("%s does not look like a directory at all" % quoted_basedir, file=err)
-        return 1
-    nodetype = identify_node_type(basedir)
-    if not nodetype:
-        print("%s is not a recognizable node directory" % quoted_basedir, file=err)
-        return 1
-    # Now prepare to turn into a twistd process. This os.chdir is the point
-    # of no return.
-    os.chdir(basedir)
-    twistd_args = []
-    if (nodetype in (u"client", u"introducer")
-        and "--nodaemon" not in config.twistd_args
-        and "--syslog" not in config.twistd_args
-        and "--logfile" not in config.twistd_args):
-        fileutil.make_dirs(os.path.join(basedir, u"logs"))
-        twistd_args.extend(["--logfile", os.path.join("logs", "twistd.log")])
-    twistd_args.extend(config.twistd_args)
-    twistd_args.append("DaemonizeTahoeNode") # point at our DaemonizeTahoeNodePlugin
-
-    twistd_config = MyTwistdConfig()
-    twistd_config.stdout = out
-    twistd_config.stderr = err
-    try:
-        twistd_config.parseOptions(twistd_args)
-    except usage.error as ue:
-        # these arguments were unsuitable for 'twistd'
-        print(config, file=err)
-        print("tahoe %s: usage error from twistd: %s\n" % (config.subcommand_name, ue), file=err)
-        return 1
-    twistd_config.loadedPlugins = {"DaemonizeTahoeNode": DaemonizeTahoeNodePlugin(nodetype, basedir)}
-
-    # handle invalid PID file (twistd might not start otherwise)
-    pidfile = get_pidfile(basedir)
-    if get_pid_from_pidfile(pidfile) == -1:
-        print("found invalid PID file in %s - deleting it" % basedir, file=err)
-        os.remove(pidfile)
-
-    # On Unix-like platforms:
-    #   Unless --nodaemon was provided, the twistd.runApp() below spawns off a
-    #   child process, and the parent calls os._exit(0), so there's no way for
-    #   us to get control afterwards, even with 'except SystemExit'. If
-    #   application setup fails (e.g. ImportError), runApp() will raise an
-    #   exception.
-    #
-    #   So if we wanted to do anything with the running child, we'd have two
-    #   options:
-    #
-    #    * fork first, and have our child wait for the runApp() child to get
-    #      running. (note: just fork(). This is easier than fork+exec, since we
-    #      don't have to get PATH and PYTHONPATH set up, since we're not
-    #      starting a *different* process, just cloning a new instance of the
-    #      current process)
-    #    * or have the user run a separate command some time after this one
-    #      exits.
-    #
-    #   For Tahoe, we don't need to do anything with the child, so we can just
-    #   let it exit.
-    #
-    # On Windows:
-    #   twistd does not fork; it just runs in the current process whether or not
-    #   --nodaemon is specified. (As on Unix, --nodaemon does have the side effect
-    #   of causing us to log to stdout/stderr.)
-
-    if "--nodaemon" in twistd_args or sys.platform == "win32":
-        verb = "running"
-    else:
-        verb = "starting"
-
-    print("%s node in %s" % (verb, quoted_basedir), file=out)
-    twistd.runApp(twistd_config)
-    # we should only reach here if --nodaemon or equivalent was used
-    return 0
@@ -9,8 +9,7 @@ from twisted.internet import defer, task, threads
 
 from allmydata.scripts.common import get_default_nodedir
 from allmydata.scripts import debug, create_node, cli, \
-    admin, tahoe_daemonize, tahoe_start, \
-    tahoe_stop, tahoe_restart, tahoe_run, tahoe_invite
+    admin, tahoe_run, tahoe_invite
 from allmydata.util.encodingutil import quote_output, quote_local_unicode_path, get_io_encoding
 from allmydata.util.eliotutil import (
     opt_eliot_destination,
@@ -37,19 +36,11 @@ if _default_nodedir:
 
 # XXX all this 'dispatch' stuff needs to be unified + fixed up
 _control_node_dispatch = {
-    "daemonize": tahoe_daemonize.daemonize,
-    "start": tahoe_start.start,
     "run": tahoe_run.run,
-    "stop": tahoe_stop.stop,
-    "restart": tahoe_restart.restart,
 }
 
 process_control_commands = [
     ["run", None, tahoe_run.RunOptions, "run a node without daemonizing"],
-    ["daemonize", None, tahoe_daemonize.DaemonizeOptions, "(deprecated) run a node in the background"],
-    ["start", None, tahoe_start.StartOptions, "(deprecated) start a node in the background and confirm it started"],
-    ["stop", None, tahoe_stop.StopOptions, "(deprecated) stop a node"],
-    ["restart", None, tahoe_restart.RestartOptions, "(deprecated) restart a node"],
]
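The dispatch table above is consumed by the ``dispatch()`` function in the same module; roughly like this (a sketch that elides error handling and stdio wiring, not the literal code):

    # sketch of the lookup performed by allmydata.scripts.runner.dispatch()
    command = config.subCommand              # e.g. "run"
    if command in _control_node_dispatch:
        f = _control_node_dispatch[command]  # -> tahoe_run.run
        rc = f(config.subOptions)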
@@ -1,16 +0,0 @@
-from .run_common import (
-    RunOptions as _RunOptions,
-    run,
-)
-
-__all__ = [
-    "DaemonizeOptions",
-    "daemonize",
-]
-
-class DaemonizeOptions(_RunOptions):
-    subcommand_name = "daemonize"
-
-def daemonize(config):
-    print("'tahoe daemonize' is deprecated; see 'tahoe run'")
-    return run(config)
@@ -1,21 +0,0 @@
-from __future__ import print_function
-
-from .tahoe_start import StartOptions, start
-from .tahoe_stop import stop, COULD_NOT_STOP
-
-
-class RestartOptions(StartOptions):
-    subcommand_name = "restart"
-
-
-def restart(config):
-    print("'tahoe restart' is deprecated; see 'tahoe run'")
-    stderr = config.stderr
-    rc = stop(config)
-    if rc == COULD_NOT_STOP:
-        print("ignoring couldn't-stop", file=stderr)
-        rc = 0
-    if rc:
-        print("not restarting", file=stderr)
-        return rc
-    return start(config)
@@ -1,15 +1,233 @@
-from .run_common import (
-    RunOptions as _RunOptions,
-    run,
-)
+from __future__ import print_function
 
 __all__ = [
     "RunOptions",
     "run",
 ]
 
-class RunOptions(_RunOptions):
+import os, sys
+from allmydata.scripts.common import BasedirOptions
+from twisted.scripts import twistd
+from twisted.python import usage
+from twisted.python.reflect import namedAny
+from twisted.internet.defer import maybeDeferred
+from twisted.application.service import Service
+
+from allmydata.scripts.default_nodedir import _default_nodedir
+from allmydata.util.encodingutil import listdir_unicode, quote_local_unicode_path
+from allmydata.util.configutil import UnknownConfigError
+from allmydata.util.deferredutil import HookMixin
+
+from allmydata.node import (
+    PortAssignmentRequired,
+    PrivacyError,
+)
+
+def get_pidfile(basedir):
+    """
+    Returns the path to the PID file.
+    :param basedir: the node's base directory
+    :returns: the path to the PID file
+    """
+    return os.path.join(basedir, u"twistd.pid")
+
+def get_pid_from_pidfile(pidfile):
+    """
+    Tries to read and return the PID stored in the node's PID file
+    (twistd.pid).
+    :param pidfile: try to read this PID file
+    :returns: A numeric PID on success, ``None`` if PID file absent or
+        inaccessible, ``-1`` if PID file invalid.
+    """
+    try:
+        with open(pidfile, "r") as f:
+            pid = f.read()
+    except EnvironmentError:
+        return None
+
+    try:
+        pid = int(pid)
+    except ValueError:
+        return -1
+
+    return pid
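# A sketch of the three outcomes these PID helpers distinguish (the paths
# here are hypothetical, for illustration only):
#
#     pidfile = get_pidfile("/tmp/node")     # -> "/tmp/node/twistd.pid"
#     get_pid_from_pidfile(pidfile)          # -> 12345 when the file holds "12345"
#     get_pid_from_pidfile("/no/such/file")  # -> None: absent or unreadable
#     get_pid_from_pidfile(pidfile)          # -> -1 when the contents aren't an integer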
+
+def identify_node_type(basedir):
+    """
+    :return unicode: None or one of: 'client' or 'introducer'.
+    """
+    tac = u''
+    try:
+        for fn in listdir_unicode(basedir):
+            if fn.endswith(u".tac"):
+                tac = fn
+                break
+    except OSError:
+        return None
+
+    for t in (u"client", u"introducer"):
+        if t in tac:
+            return t
+    return None
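# Illustrative behavior (directory names are hypothetical): node creation
# drops a .tac file such as "tahoe-client.tac" into the basedir, so:
#
#     identify_node_type("/path/to/client-dir")  # -> u"client"
#     identify_node_type("/path/to/introducer")  # -> u"introducer" (introducer.tac)
#     identify_node_type("/path/to/empty-dir")   # -> None: no .tac file found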
+
+
+class RunOptions(BasedirOptions):
     subcommand_name = "run"
 
-    def postOptions(self):
-        self.twistd_args += ("--nodaemon",)
+    optParameters = [
+        ("basedir", "C", None,
+         "Specify which Tahoe base directory should be used."
+         " This has the same effect as the global --node-directory option."
+         " [default: %s]" % quote_local_unicode_path(_default_nodedir)),
+    ]
+
+    def parseArgs(self, basedir=None, *twistd_args):
+        # This can't handle e.g. 'tahoe run --reactor=foo', since
+        # '--reactor=foo' looks like an option to the tahoe subcommand, not to
+        # twistd. So you can either use 'tahoe run' or 'tahoe run NODEDIR
+        # --TWISTD-OPTIONS'. Note that 'tahoe --node-directory=NODEDIR run
+        # --TWISTD-OPTIONS' also isn't allowed, unfortunately.
+
+        BasedirOptions.parseArgs(self, basedir)
+        self.twistd_args = twistd_args
+
+    def getSynopsis(self):
+        return ("Usage: %s [global-options] %s [options]"
+                " [NODEDIR [twistd-options]]"
+                % (self.command_name, self.subcommand_name))
+
+    def getUsage(self, width=None):
+        t = BasedirOptions.getUsage(self, width) + "\n"
+        twistd_options = str(MyTwistdConfig()).partition("\n")[2].partition("\n\n")[0]
+        t += twistd_options.replace("Options:", "twistd-options:", 1)
+        t += """
+
+Note that if any twistd-options are used, NODEDIR must be specified explicitly
+(not by default or using -C/--basedir or -d/--node-directory), and followed by
+the twistd-options.
+"""
+        return t
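# The invocation shapes this parser accepts, per the comment in parseArgs
# above (a sketch; --syslog stands in for any twistd option):
#
#     tahoe run                       # ok: default node directory
#     tahoe run NODEDIR --syslog      # ok: twistd options after NODEDIR
#     tahoe run --syslog              # rejected: parsed as a tahoe option
#     tahoe --node-directory=NODEDIR run --syslog   # rejected, as noted above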
+
+
+class MyTwistdConfig(twistd.ServerOptions):
+    subCommands = [("DaemonizeTahoeNode", None, usage.Options, "node")]
+
+    stderr = sys.stderr
+
+
+class DaemonizeTheRealService(Service, HookMixin):
+    """
+    this HookMixin should really be a helper; our hooks:
+
+    - 'running': triggered when startup has completed; it triggers
+        with None if successful or a Failure otherwise.
+    """
+    stderr = sys.stderr
+
+    def __init__(self, nodetype, basedir, options):
+        super(DaemonizeTheRealService, self).__init__()
+        self.nodetype = nodetype
+        self.basedir = basedir
+        # setup for HookMixin
+        self._hooks = {
+            "running": None,
+        }
+        self.stderr = options.parent.stderr
+
+    def startService(self):
+
+        def start():
+            node_to_instance = {
+                u"client": lambda: maybeDeferred(namedAny("allmydata.client.create_client"), self.basedir),
+                u"introducer": lambda: maybeDeferred(namedAny("allmydata.introducer.server.create_introducer"), self.basedir),
+            }
+
+            try:
+                service_factory = node_to_instance[self.nodetype]
+            except KeyError:
+                raise ValueError("unknown nodetype %s" % self.nodetype)
+
+            def handle_config_error(reason):
+                if reason.check(UnknownConfigError):
+                    self.stderr.write("\nConfiguration error:\n{}\n\n".format(reason.value))
+                elif reason.check(PortAssignmentRequired):
+                    self.stderr.write("\ntub.port cannot be 0: you must choose.\n\n")
+                elif reason.check(PrivacyError):
+                    self.stderr.write("\n{}\n\n".format(reason.value))
+                else:
+                    self.stderr.write("\nUnknown error\n")
+                    reason.printTraceback(self.stderr)
+                reactor.stop()
+
+            d = service_factory()
+
+            def created(srv):
+                srv.setServiceParent(self.parent)
+            d.addCallback(created)
+            d.addErrback(handle_config_error)
+            d.addBoth(self._call_hook, 'running')
+            return d
+
+        from twisted.internet import reactor
+        reactor.callWhenRunning(start)
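# A minimal sketch of observing startup through the 'running' hook (the new
# tests in test_run.py below exercise this same path):
#
#     svc = DaemonizeTheRealService(u"client", basedir, run_options)
#     d = svc.set_hook('running')  # fires with None on success, a Failure on error
#     svc.startService()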
+
+
+class DaemonizeTahoeNodePlugin(object):
+    tapname = "tahoenode"
+    def __init__(self, nodetype, basedir):
+        self.nodetype = nodetype
+        self.basedir = basedir
+
+    def makeService(self, so):
+        return DaemonizeTheRealService(self.nodetype, self.basedir, so)
+
+
+def run(config):
+    """
+    Runs a Tahoe-LAFS node in the foreground.
+
+    Sets up the IService instance corresponding to the type of node
+    that's starting and uses Twisted's twistd runner to disconnect our
+    process from the terminal.
+    """
+    out = config.stdout
+    err = config.stderr
+    basedir = config['basedir']
+    quoted_basedir = quote_local_unicode_path(basedir)
+    print("'tahoe {}' in {}".format(config.subcommand_name, quoted_basedir), file=out)
+    if not os.path.isdir(basedir):
+        print("%s does not look like a directory at all" % quoted_basedir, file=err)
+        return 1
+    nodetype = identify_node_type(basedir)
+    if not nodetype:
+        print("%s is not a recognizable node directory" % quoted_basedir, file=err)
+        return 1
+    # Now prepare to turn into a twistd process. This os.chdir is the point
+    # of no return.
+    os.chdir(basedir)
+    twistd_args = ["--nodaemon"]
+    twistd_args.extend(config.twistd_args)
+    twistd_args.append("DaemonizeTahoeNode")  # point at our DaemonizeTahoeNodePlugin
+
+    twistd_config = MyTwistdConfig()
+    twistd_config.stdout = out
+    twistd_config.stderr = err
+    try:
+        twistd_config.parseOptions(twistd_args)
+    except usage.error as ue:
+        # these arguments were unsuitable for 'twistd'
+        print(config, file=err)
+        print("tahoe %s: usage error from twistd: %s\n" % (config.subcommand_name, ue), file=err)
+        return 1
+    twistd_config.loadedPlugins = {"DaemonizeTahoeNode": DaemonizeTahoeNodePlugin(nodetype, basedir)}
+
+    # handle invalid PID file (twistd might not start otherwise)
+    pidfile = get_pidfile(basedir)
+    if get_pid_from_pidfile(pidfile) == -1:
+        print("found invalid PID file in %s - deleting it" % basedir, file=err)
+        os.remove(pidfile)
+
+    # We always pass --nodaemon so twistd.runApp does not daemonize.
+    print("running node in %s" % (quoted_basedir,), file=out)
+    twistd.runApp(twistd_config)
+    return 0
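The net effect of the rewritten module: run() now always prepends --nodaemon, so the node stays attached to the terminal and daemonization is left to an external supervisor. A sketch of typical use (the directory path is a placeholder):

  tahoe create-node /var/lib/tahoe
  tahoe run /var/lib/tahoe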
@@ -1,152 +0,0 @@
-from __future__ import print_function
-
-import os
-import io
-import sys
-import time
-import subprocess
-from os.path import join, exists
-
-from allmydata.scripts.common import BasedirOptions
-from allmydata.scripts.default_nodedir import _default_nodedir
-from allmydata.util.encodingutil import quote_local_unicode_path
-
-from .run_common import MyTwistdConfig, identify_node_type
-
-
-class StartOptions(BasedirOptions):
-    subcommand_name = "start"
-    optParameters = [
-        ("basedir", "C", None,
-         "Specify which Tahoe base directory should be used."
-         " This has the same effect as the global --node-directory option."
-         " [default: %s]" % quote_local_unicode_path(_default_nodedir)),
-    ]
-
-    def parseArgs(self, basedir=None, *twistd_args):
-        # This can't handle e.g. 'tahoe start --nodaemon', since '--nodaemon'
-        # looks like an option to the tahoe subcommand, not to twistd. So you
-        # can either use 'tahoe start' or 'tahoe start NODEDIR
-        # --TWISTD-OPTIONS'. Note that 'tahoe --node-directory=NODEDIR start
-        # --TWISTD-OPTIONS' also isn't allowed, unfortunately.
-
-        BasedirOptions.parseArgs(self, basedir)
-        self.twistd_args = twistd_args
-
-    def getSynopsis(self):
-        return ("Usage: %s [global-options] %s [options]"
-                " [NODEDIR [twistd-options]]"
-                % (self.command_name, self.subcommand_name))
-
-    def getUsage(self, width=None):
-        t = BasedirOptions.getUsage(self, width) + "\n"
-        twistd_options = str(MyTwistdConfig()).partition("\n")[2].partition("\n\n")[0]
-        t += twistd_options.replace("Options:", "twistd-options:", 1)
-        t += """
-
-Note that if any twistd-options are used, NODEDIR must be specified explicitly
-(not by default or using -C/--basedir or -d/--node-directory), and followed by
-the twistd-options.
-"""
-        return t
-
-
-def start(config):
-    """
-    Start a tahoe node (daemonize it and confirm startup)
-
-    We run 'tahoe daemonize' with all the options given to 'tahoe
-    start' and then watch the log files for the correct text to appear
-    (e.g. "introducer started"). If that doesn't happen within a few
-    seconds, an error is printed along with all collected logs.
-    """
-    print("'tahoe start' is deprecated; see 'tahoe run'")
-    out = config.stdout
-    err = config.stderr
-    basedir = config['basedir']
-    quoted_basedir = quote_local_unicode_path(basedir)
-    print("STARTING", quoted_basedir, file=out)
-    if not os.path.isdir(basedir):
-        print("%s does not look like a directory at all" % quoted_basedir, file=err)
-        return 1
-    nodetype = identify_node_type(basedir)
-    if not nodetype:
-        print("%s is not a recognizable node directory" % quoted_basedir, file=err)
-        return 1
-
-    # "tahoe start" attempts to monitor the logs for successful
-    # startup -- but we can't always do that.
-
-    can_monitor_logs = False
-    if (nodetype in (u"client", u"introducer")
-        and "--nodaemon" not in config.twistd_args
-        and "--syslog" not in config.twistd_args
-        and "--logfile" not in config.twistd_args):
-        can_monitor_logs = True
-
-    if "--help" in config.twistd_args:
-        return 0
-
-    if not can_monitor_logs:
-        print("Custom logging options; can't monitor logs for proper startup messages", file=out)
-        return 1
-
-    # before we spawn tahoe, we check if "the log file" exists or not,
-    # and if so remember how big it is -- essentially, we're doing
-    # "tail -f" to see what "this" incarnation of "tahoe daemonize"
-    # spews forth.
-    starting_offset = 0
-    log_fname = join(basedir, 'logs', 'twistd.log')
-    if exists(log_fname):
-        with open(log_fname, 'r') as f:
-            f.seek(0, 2)
-            starting_offset = f.tell()
-
-    # spawn tahoe. Note that since this daemonizes, it should return
-    # "pretty fast" and with a zero return-code, or else something
-    # Very Bad has happened.
-    try:
-        args = [sys.executable] if not getattr(sys, 'frozen', False) else []
-        for i, arg in enumerate(sys.argv):
-            if arg in ['start', 'restart']:
-                args.append('daemonize')
-            else:
-                args.append(arg)
-        subprocess.check_call(args)
-    except subprocess.CalledProcessError as e:
-        return e.returncode
-
-    # now, we have to determine if tahoe has actually started up
-    # successfully or not. so, we start sucking up log files and
-    # looking for "the magic string", which depends on the node type.
-
-    magic_string = u'{} running'.format(nodetype)
-    with io.open(log_fname, 'r') as f:
-        f.seek(starting_offset)
-
-        collected = u''
-        overall_start = time.time()
-        while time.time() - overall_start < 60:
-            this_start = time.time()
-            while time.time() - this_start < 5:
-                collected += f.read()
-                if magic_string in collected:
-                    if not config.parent['quiet']:
-                        print("Node has started successfully", file=out)
-                    return 0
-                if 'Traceback ' in collected:
-                    print("Error starting node; see '{}' for more:\n\n{}".format(
-                        log_fname,
-                        collected,
-                    ), file=err)
-                    return 1
-                time.sleep(0.1)
-            print("Still waiting up to {}s for node startup".format(
-                60 - int(time.time() - overall_start)
-            ), file=out)
-
-        print("Something has gone wrong starting the node.", file=out)
-        print("Logs are available in '{}'".format(log_fname), file=out)
-        print("Collected for this run:", file=out)
-        print(collected, file=out)
-
-    return 1
@@ -1,85 +0,0 @@
-from __future__ import print_function
-
-import os
-import time
-import signal
-
-from allmydata.scripts.common import BasedirOptions
-from allmydata.util.encodingutil import quote_local_unicode_path
-from .run_common import get_pidfile, get_pid_from_pidfile
-
-COULD_NOT_STOP = 2
-
-
-class StopOptions(BasedirOptions):
-    def parseArgs(self, basedir=None):
-        BasedirOptions.parseArgs(self, basedir)
-
-    def getSynopsis(self):
-        return ("Usage: %s [global-options] stop [options] [NODEDIR]"
-                % (self.command_name,))
-
-
-def stop(config):
-    print("'tahoe stop' is deprecated; see 'tahoe run'")
-    out = config.stdout
-    err = config.stderr
-    basedir = config['basedir']
-    quoted_basedir = quote_local_unicode_path(basedir)
-    print("STOPPING", quoted_basedir, file=out)
-    pidfile = get_pidfile(basedir)
-    pid = get_pid_from_pidfile(pidfile)
-    if pid is None:
-        print("%s does not look like a running node directory (no twistd.pid)" % quoted_basedir, file=err)
-        # we define rc=2 to mean "nothing is running, but it wasn't me who
-        # stopped it"
-        return COULD_NOT_STOP
-    elif pid == -1:
-        print("%s contains an invalid PID file" % basedir, file=err)
-        # we define rc=2 to mean "nothing is running, but it wasn't me who
-        # stopped it"
-        return COULD_NOT_STOP
-
-    # kill it hard (SIGKILL), delete the twistd.pid file, then wait for the
-    # process itself to go away. If it hasn't gone away after 20 seconds, warn
-    # the user but keep waiting until they give up.
-    try:
-        os.kill(pid, signal.SIGKILL)
-    except OSError as oserr:
-        if oserr.errno == 3:
-            print(oserr.strerror)
-            # the process didn't exist, so wipe the pid file
-            os.remove(pidfile)
-            return COULD_NOT_STOP
-        else:
-            raise
-    try:
-        os.remove(pidfile)
-    except EnvironmentError:
-        pass
-    start = time.time()
-    time.sleep(0.1)
-    wait = 40
-    first_time = True
-    while True:
-        # poll once per second until we see the process is no longer running
-        try:
-            os.kill(pid, 0)
-        except OSError:
-            print("process %d is dead" % pid, file=out)
-            return
-        wait -= 1
-        if wait < 0:
-            if first_time:
-                print("It looks like pid %d is still running "
-                      "after %d seconds" % (pid,
-                                            (time.time() - start)), file=err)
-                print("I will keep watching it until you interrupt me.", file=err)
-                wait = 10
-                first_time = False
-            else:
-                print("pid %d still running after %d seconds" % \
-                      (pid, (time.time() - start)), file=err)
-                wait = 10
-        time.sleep(1)
-    # control never reaches here: no timeout
@@ -508,6 +508,7 @@ class StorageFarmBroker(service.MultiService):
 @implementer(IDisplayableServer)
 class StubServer(object):
     def __init__(self, serverid):
+        assert isinstance(serverid, bytes)
         self.serverid = serverid # binary tubid
     def get_serverid(self):
         return self.serverid
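# The new assertion makes the bytes-only contract explicit -- a sketch (the
# tubid value here is hypothetical):
#
#     StubServer(b"binary-tubid")   # ok
#     StubServer(u"binary-tubid")   # now fails fast with AssertionError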
@@ -113,4 +113,5 @@ if sys.platform == "win32":
     initialize()
 
 from eliot import to_file
-to_file(open("eliot.log", "w"))
+from allmydata.util.jsonbytes import BytesJSONEncoder
+to_file(open("eliot.log", "w"), encoder=BytesJSONEncoder)
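# Why the encoder matters -- a sketch, assuming BytesJSONEncoder is a
# json.JSONEncoder subclass (which is how allmydata.util.jsonbytes defines it):
#
#     import json
#     json.dumps({"id": b"\x01"})                        # raises TypeError
#     json.dumps({"id": b"\x01"}, cls=BytesJSONEncoder)  # serializes cleanly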
@@ -16,22 +16,19 @@ that this script does not import anything from tahoe directly, so it doesn't
 matter what its PYTHONPATH is, as long as the bin/tahoe that it uses is
 functional.
 
-This script expects that the client node will be not running when the script
-starts, but it will forcibly shut down the node just to be sure. It will shut
-down the node after the test finishes.
+This script expects the client node to be running already.
 
 To set up the client node, do the following:
 
-  tahoe create-client DIR
-  populate DIR/introducer.furl
-  tahoe start DIR
-  tahoe add-alias -d DIR testgrid `tahoe mkdir -d DIR`
-  pick a 10kB-ish test file, compute its md5sum
-  tahoe put -d DIR FILE testgrid:old.MD5SUM
-  tahoe put -d DIR FILE testgrid:recent.MD5SUM
-  tahoe put -d DIR FILE testgrid:recentdir/recent.MD5SUM
-  echo "" | tahoe put -d DIR --mutable testgrid:log
-  echo "" | tahoe put -d DIR --mutable testgrid:recentlog
+  tahoe create-client --introducer=INTRODUCER_FURL DIR
+  tahoe run DIR
+  tahoe -d DIR create-alias testgrid
+  # pick a 10kB-ish test file, compute its md5sum
+  tahoe -d DIR put FILE testgrid:old.MD5SUM
+  tahoe -d DIR put FILE testgrid:recent.MD5SUM
+  tahoe -d DIR put FILE testgrid:recentdir/recent.MD5SUM
+  echo "" | tahoe -d DIR put --mutable - testgrid:log
+  echo "" | tahoe -d DIR put --mutable - testgrid:recentlog
 
 This script will perform the following steps (the kind of compatibility that
 is being tested is in [brackets]):
@@ -52,7 +49,6 @@ is being tested is in [brackets]):
 
 This script will also keep track of speeds and latencies and will write them
 in a machine-readable logfile.
 
 """
 
 import time, subprocess, md5, os.path, random
@@ -104,26 +100,13 @@ class GridTester(object):
 
     def cli(self, cmd, *args, **kwargs):
         print("tahoe", cmd, " ".join(args))
-        stdout, stderr = self.command(self.tahoe, cmd, "-d", self.nodedir,
+        stdout, stderr = self.command(self.tahoe, "-d", self.nodedir, cmd,
                                       *args, **kwargs)
         if not kwargs.get("ignore_stderr", False) and stderr != "":
            raise CommandFailed("command '%s' had stderr: %s" % (" ".join(args),
                                                                  stderr))
         return stdout
 
-    def stop_old_node(self):
-        print("tahoe stop", self.nodedir, "(force)")
-        self.command(self.tahoe, "stop", self.nodedir, expected_rc=None)
-
-    def start_node(self):
-        print("tahoe start", self.nodedir)
-        self.command(self.tahoe, "start", self.nodedir)
-        time.sleep(5)
-
-    def stop_node(self):
-        print("tahoe stop", self.nodedir)
-        self.command(self.tahoe, "stop", self.nodedir)
-
     def read_and_check(self, f):
         expected_md5_s = f[f.find(".")+1:]
         out = self.cli("get", "testgrid:" + f)
@@ -204,19 +187,11 @@ class GridTester(object):
         fn = prefix + "." + md5sum
         return fn, data
 
-    def run(self):
-        self.stop_old_node()
-        self.start_node()
-        try:
-            self.do_test()
-        finally:
-            self.stop_node()
-
 def main():
     config = GridTesterOptions()
     config.parseOptions()
     gt = GridTester(config)
-    gt.run()
+    gt.do_test()
 
 if __name__ == "__main__":
     main()
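# The cli() change above tracks the new argument ordering: global options
# like -d/--node-directory now come before the subcommand. A sketch (paths
# are placeholders):
#
#     tahoe -d DIR put FILE testgrid:old.MD5SUM   # new ordering
#     tahoe put -d DIR FILE testgrid:old.MD5SUM   # old ordering, no longer used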
@@ -20,14 +20,14 @@ from allmydata.scripts.common_http import socket_error
 import allmydata.scripts.common_http
 
 # Test that the scripts can be imported.
-from allmydata.scripts import create_node, debug, tahoe_start, tahoe_restart, \
+from allmydata.scripts import create_node, debug, \
     tahoe_add_alias, tahoe_backup, tahoe_check, tahoe_cp, tahoe_get, tahoe_ls, \
     tahoe_manifest, tahoe_mkdir, tahoe_mv, tahoe_put, tahoe_unlink, tahoe_webopen, \
-    tahoe_stop, tahoe_daemonize, tahoe_run
-_hush_pyflakes = [create_node, debug, tahoe_start, tahoe_restart, tahoe_stop,
+    tahoe_run
+_hush_pyflakes = [create_node, debug,
                   tahoe_add_alias, tahoe_backup, tahoe_check, tahoe_cp, tahoe_get, tahoe_ls,
                   tahoe_manifest, tahoe_mkdir, tahoe_mv, tahoe_put, tahoe_unlink, tahoe_webopen,
-                  tahoe_daemonize, tahoe_run]
+                  tahoe_run]
 
 from allmydata.scripts import common
 from allmydata.scripts.common import DEFAULT_ALIAS, get_aliases, get_alias, \
@@ -626,18 +626,6 @@ class Help(unittest.TestCase):
         help = str(cli.ListAliasesOptions())
         self.failUnlessIn("[options]", help)
 
-    def test_start(self):
-        help = str(tahoe_start.StartOptions())
-        self.failUnlessIn("[options] [NODEDIR [twistd-options]]", help)
-
-    def test_stop(self):
-        help = str(tahoe_stop.StopOptions())
-        self.failUnlessIn("[options] [NODEDIR]", help)
-
-    def test_restart(self):
-        help = str(tahoe_restart.RestartOptions())
-        self.failUnlessIn("[options] [NODEDIR [twistd-options]]", help)
-
     def test_run(self):
         help = str(tahoe_run.RunOptions())
         self.failUnlessIn("[options] [NODEDIR [twistd-options]]", help)
@@ -1269,82 +1257,69 @@ class Options(ReallyEqualMixin, unittest.TestCase):
         self.failUnlessIn(allmydata.__full_version__, stdout.getvalue())
         # but "tahoe SUBCOMMAND --version" should be rejected
         self.failUnlessRaises(usage.UsageError, self.parse,
-                              ["start", "--version"])
+                              ["run", "--version"])
         self.failUnlessRaises(usage.UsageError, self.parse,
-                              ["start", "--version-and-path"])
+                              ["run", "--version-and-path"])
 
     def test_quiet(self):
         # accepted as an overall option, but not on subcommands
-        o = self.parse(["--quiet", "start"])
+        o = self.parse(["--quiet", "run"])
         self.failUnless(o.parent["quiet"])
         self.failUnlessRaises(usage.UsageError, self.parse,
-                              ["start", "--quiet"])
+                              ["run", "--quiet"])
 
     def test_basedir(self):
         # accept a --node-directory option before the verb, or a --basedir
         # option after, or a basedir argument after, but none in the wrong
         # place, and not more than one of the three.
-        o = self.parse(["start"])
+
+        # Here is some option twistd recognizes but we don't. Depending on
+        # where it appears, it should be passed through to twistd. It doesn't
+        # really matter which option it is (it doesn't even have to be a valid
+        # option). This test does not actually run any of the twistd argument
+        # parsing.
+        some_twistd_option = "--spew"
+
+        o = self.parse(["run"])
         self.failUnlessReallyEqual(o["basedir"], os.path.join(fileutil.abspath_expanduser_unicode(u"~"),
                                                               u".tahoe"))
-        o = self.parse(["start", "here"])
+        o = self.parse(["run", "here"])
         self.failUnlessReallyEqual(o["basedir"], fileutil.abspath_expanduser_unicode(u"here"))
-        o = self.parse(["start", "--basedir", "there"])
+        o = self.parse(["run", "--basedir", "there"])
         self.failUnlessReallyEqual(o["basedir"], fileutil.abspath_expanduser_unicode(u"there"))
-        o = self.parse(["--node-directory", "there", "start"])
+        o = self.parse(["--node-directory", "there", "run"])
         self.failUnlessReallyEqual(o["basedir"], fileutil.abspath_expanduser_unicode(u"there"))
 
-        o = self.parse(["start", "here", "--nodaemon"])
+        o = self.parse(["run", "here", some_twistd_option])
         self.failUnlessReallyEqual(o["basedir"], fileutil.abspath_expanduser_unicode(u"here"))
 
         self.failUnlessRaises(usage.UsageError, self.parse,
-                              ["--basedir", "there", "start"])
+                              ["--basedir", "there", "run"])
         self.failUnlessRaises(usage.UsageError, self.parse,
-                              ["start", "--node-directory", "there"])
+                              ["run", "--node-directory", "there"])
 
         self.failUnlessRaises(usage.UsageError, self.parse,
                               ["--node-directory=there",
-                               "start", "--basedir=here"])
+                               "run", "--basedir=here"])
         self.failUnlessRaises(usage.UsageError, self.parse,
-                              ["start", "--basedir=here", "anywhere"])
+                              ["run", "--basedir=here", "anywhere"])
         self.failUnlessRaises(usage.UsageError, self.parse,
                               ["--node-directory=there",
-                               "start", "anywhere"])
+                               "run", "anywhere"])
         self.failUnlessRaises(usage.UsageError, self.parse,
                               ["--node-directory=there",
-                               "start", "--basedir=here", "anywhere"])
+                               "run", "--basedir=here", "anywhere"])
 
         self.failUnlessRaises(usage.UsageError, self.parse,
-                              ["--node-directory=there", "start", "--nodaemon"])
+                              ["--node-directory=there", "run", some_twistd_option])
         self.failUnlessRaises(usage.UsageError, self.parse,
-                              ["start", "--basedir=here", "--nodaemon"])
+                              ["run", "--basedir=here", some_twistd_option])
 
 
-class Stop(unittest.TestCase):
-    def test_non_numeric_pid(self):
-        """
-        If the pidfile exists but does not contain a numeric value, a complaint to
-        this effect is written to stderr and the non-success result is
-        returned.
-        """
-        basedir = FilePath(self.mktemp().decode("ascii"))
-        basedir.makedirs()
-        basedir.child(u"twistd.pid").setContent(b"foo")
-
-        config = tahoe_stop.StopOptions()
-        config.stdout = StringIO()
-        config.stderr = StringIO()
-        config['basedir'] = basedir.path
-
-        result_code = tahoe_stop.stop(config)
-        self.assertEqual(2, result_code)
-        self.assertIn("invalid PID file", config.stderr.getvalue())
-
-
-class Start(unittest.TestCase):
+class Run(unittest.TestCase):
 
-    @patch('allmydata.scripts.run_common.os.chdir')
-    @patch('allmydata.scripts.run_common.twistd')
+    @patch('allmydata.scripts.tahoe_run.os.chdir')
+    @patch('allmydata.scripts.tahoe_run.twistd')
     def test_non_numeric_pid(self, mock_twistd, chdir):
         """
         If the pidfile exists but does not contain a numeric value, a complaint to
@@ -1355,13 +1330,13 @@ class Run(unittest.TestCase):
         basedir.child(u"twistd.pid").setContent(b"foo")
         basedir.child(u"tahoe-client.tac").setContent(b"")
 
-        config = tahoe_daemonize.DaemonizeOptions()
+        config = tahoe_run.RunOptions()
         config.stdout = StringIO()
         config.stderr = StringIO()
         config['basedir'] = basedir.path
         config.twistd_args = []
 
-        result_code = tahoe_daemonize.daemonize(config)
+        result_code = tahoe_run.run(config)
         self.assertIn("invalid PID file", config.stderr.getvalue())
         self.assertTrue(len(mock_twistd.mock_calls), 1)
         self.assertEqual(mock_twistd.mock_calls[0][0], 'runApp')
@@ -1,202 +0,0 @@
-import os
-from io import (
-    BytesIO,
-)
-from os.path import dirname, join
-from mock import patch, Mock
-from six.moves import StringIO
-from sys import getfilesystemencoding
-from twisted.trial import unittest
-from allmydata.scripts import runner
-from allmydata.scripts.run_common import (
-    identify_node_type,
-    DaemonizeTahoeNodePlugin,
-    MyTwistdConfig,
-)
-from allmydata.scripts.tahoe_daemonize import (
-    DaemonizeOptions,
-)
-
-
-class Util(unittest.TestCase):
-    def setUp(self):
-        self.twistd_options = MyTwistdConfig()
-        self.twistd_options.parseOptions(["DaemonizeTahoeNode"])
-        self.options = self.twistd_options.subOptions
-
-    def test_node_type_nothing(self):
-        tmpdir = self.mktemp()
-        base = dirname(tmpdir).decode(getfilesystemencoding())
-
-        t = identify_node_type(base)
-
-        self.assertIs(None, t)
-
-    def test_node_type_introducer(self):
-        tmpdir = self.mktemp()
-        base = dirname(tmpdir).decode(getfilesystemencoding())
-        with open(join(dirname(tmpdir), 'introducer.tac'), 'w') as f:
-            f.write("test placeholder")
-
-        t = identify_node_type(base)
-
-        self.assertEqual(u"introducer", t)
-
-    def test_daemonize(self):
-        tmpdir = self.mktemp()
-        plug = DaemonizeTahoeNodePlugin('client', tmpdir)
-
-        with patch('twisted.internet.reactor') as r:
-            def call(fn, *args, **kw):
-                fn()
-            r.stop = lambda: None
-            r.callWhenRunning = call
-            service = plug.makeService(self.options)
-            service.parent = Mock()
-            service.startService()
-
-        self.assertTrue(service is not None)
-
-    def test_daemonize_no_keygen(self):
-        tmpdir = self.mktemp()
-        stderr = BytesIO()
-        plug = DaemonizeTahoeNodePlugin('key-generator', tmpdir)
-
-        with patch('twisted.internet.reactor') as r:
-            def call(fn, *args, **kw):
-                d = fn()
-                d.addErrback(lambda _: None)  # ignore the error we'll trigger
-            r.callWhenRunning = call
-            service = plug.makeService(self.options)
-            service.stderr = stderr
-            service.parent = Mock()
-            # we'll raise ValueError because there's no key-generator
-            # .. BUT we do this in an async function called via
-            # "callWhenRunning" .. hence using a hook
-            d = service.set_hook('running')
-            service.startService()
-            def done(f):
-                self.assertIn(
-                    "key-generator support removed",
-                    stderr.getvalue(),
-                )
-                return None
-            d.addBoth(done)
-            return d
-
-    def test_daemonize_unknown_nodetype(self):
-        tmpdir = self.mktemp()
-        plug = DaemonizeTahoeNodePlugin('an-unknown-service', tmpdir)
-
-        with patch('twisted.internet.reactor') as r:
-            def call(fn, *args, **kw):
-                fn()
-            r.stop = lambda: None
-            r.callWhenRunning = call
-            service = plug.makeService(self.options)
-            service.parent = Mock()
-            with self.assertRaises(ValueError) as ctx:
-                service.startService()
-            self.assertIn(
-                "unknown nodetype",
-                str(ctx.exception)
-            )
-
-    def test_daemonize_options(self):
-        parent = runner.Options()
-        opts = DaemonizeOptions()
-        opts.parent = parent
-        opts.parseArgs()
-
-        # just gratuitous coverage, ensuring we don't blow up on
-        # these methods.
-        opts.getSynopsis()
-        opts.getUsage()
-
-
-class RunDaemonizeTests(unittest.TestCase):
-
-    def setUp(self):
-        # no test should change our working directory
-        self._working = os.path.abspath('.')
-        d = super(RunDaemonizeTests, self).setUp()
-        self._reactor = patch('twisted.internet.reactor')
-        self._reactor.stop = lambda: None
-        self._twistd = patch('allmydata.scripts.run_common.twistd')
-        self.node_dir = self.mktemp()
-        os.mkdir(self.node_dir)
-        for cm in [self._reactor, self._twistd]:
-            cm.__enter__()
-        return d
-
-    def tearDown(self):
-        d = super(RunDaemonizeTests, self).tearDown()
-        for cm in [self._reactor, self._twistd]:
-            cm.__exit__(None, None, None)
-        # Note: if you raise an exception (e.g. via self.assertEqual
-        # or raise RuntimeError) it is apparently just ignored and the
-        # test passes anyway...
-        if self._working != os.path.abspath('.'):
-            print("WARNING: a test just changed the working dir; putting it back")
-            os.chdir(self._working)
-        return d
-
-    def _placeholder_nodetype(self, nodetype):
-        fname = join(self.node_dir, '{}.tac'.format(nodetype))
-        with open(fname, 'w') as f:
-            f.write("test placeholder")
-
-    def test_daemonize_defaults(self):
-        self._placeholder_nodetype('introducer')
-
-        config = runner.parse_or_exit_with_explanation([
-            # have to do this so the tests don't muck around in
-            # ~/.tahoe (the default)
-            '--node-directory', self.node_dir,
-            'daemonize',
-        ])
-        i, o, e = StringIO(), StringIO(), StringIO()
-        with patch('allmydata.scripts.runner.sys') as s:
-            exit_code = [None]
-            def _exit(code):
-                exit_code[0] = code
-            s.exit = _exit
-            runner.dispatch(config, i, o, e)
-
-            self.assertEqual(0, exit_code[0])
-
-    def test_daemonize_wrong_nodetype(self):
-        self._placeholder_nodetype('invalid')
-
-        config = runner.parse_or_exit_with_explanation([
-            # have to do this so the tests don't muck around in
-            # ~/.tahoe (the default)
-            '--node-directory', self.node_dir,
-            'daemonize',
-        ])
-        i, o, e = StringIO(), StringIO(), StringIO()
-        with patch('allmydata.scripts.runner.sys') as s:
-            exit_code = [None]
-            def _exit(code):
-                exit_code[0] = code
-            s.exit = _exit
-            runner.dispatch(config, i, o, e)
-
-            self.assertEqual(0, exit_code[0])
-
-    def test_daemonize_run(self):
-        self._placeholder_nodetype('client')
-
-        config = runner.parse_or_exit_with_explanation([
-            # have to do this so the tests don't muck around in
-            # ~/.tahoe (the default)
-            '--node-directory', self.node_dir,
-            'daemonize',
-        ])
-        with patch('allmydata.scripts.runner.sys') as s:
-            exit_code = [None]
-            def _exit(code):
-                exit_code[0] = code
-            s.exit = _exit
-            from allmydata.scripts.tahoe_daemonize import daemonize
-            daemonize(config)
src/allmydata/test/cli/test_run.py (new file, 127 lines)
@@ -0,0 +1,127 @@
+"""
+Tests for ``allmydata.scripts.tahoe_run``.
+"""
+
+from six.moves import (
+    StringIO,
+)
+
+from testtools.matchers import (
+    Contains,
+    Equals,
+)
+
+from twisted.python.filepath import (
+    FilePath,
+)
+from twisted.internet.testing import (
+    MemoryReactor,
+)
+from twisted.internet.test.modulehelpers import (
+    AlternateReactor,
+)
+
+from ...scripts.tahoe_run import (
+    DaemonizeTheRealService,
+)
+
+from ...scripts.runner import (
+    parse_options
+)
+from ..common import (
+    SyncTestCase,
+)
+
+class DaemonizeTheRealServiceTests(SyncTestCase):
+    """
+    Tests for ``DaemonizeTheRealService``.
+    """
+    def _verify_error(self, config, expected):
+        """
+        Assert that when ``DaemonizeTheRealService`` is started using the given
+        configuration it writes the given message to stderr and stops the
+        reactor.
+
+        :param bytes config: The contents of a ``tahoe.cfg`` file to give to
+            the service.
+
+        :param bytes expected: A string to assert appears in stderr after the
+            service starts.
+        """
+        nodedir = FilePath(self.mktemp())
+        nodedir.makedirs()
+        nodedir.child("tahoe.cfg").setContent(config)
+        nodedir.child("tahoe-client.tac").touch()
+
+        options = parse_options(["run", nodedir.path])
+        stdout = options.stdout = StringIO()
+        stderr = options.stderr = StringIO()
+        run_options = options.subOptions
+
+        reactor = MemoryReactor()
+        with AlternateReactor(reactor):
+            service = DaemonizeTheRealService(
+                "client",
+                nodedir.path,
+                run_options,
+            )
+            service.startService()
+
+            # We happen to know that the service uses reactor.callWhenRunning
+            # to schedule all its work (though I couldn't tell you *why*).
+            # Make sure those scheduled calls happen.
+            waiting = reactor.whenRunningHooks[:]
+            del reactor.whenRunningHooks[:]
+            for f, a, k in waiting:
+                f(*a, **k)
+
+        self.assertThat(
+            reactor.hasStopped,
+            Equals(True),
+        )
+
+        self.assertThat(
+            stdout.getvalue(),
+            Equals(""),
+        )
+
+        self.assertThat(
+            stderr.getvalue(),
+            Contains(expected),
+        )
+
+    def test_unknown_config(self):
+        """
+        If there are unknown items in the node configuration file then a short
+        message introduced with ``"Configuration error:"`` is written to
+        stderr.
+        """
+        self._verify_error("[invalid-section]\n", "Configuration error:")
+
+    def test_port_assignment_required(self):
+        """
+        If ``tub.port`` is configured to use port 0 then a short message rejecting
+        this configuration is written to stderr.
+        """
+        self._verify_error(
+            """
+            [node]
+            tub.port = 0
+            """,
+            "tub.port cannot be 0",
+        )
+
+    def test_privacy_error(self):
+        """
+        If ``reveal-IP-address`` is set to false and the tub is not configured in
+        a way that avoids revealing the node's IP address, a short message
+        about privacy is written to stderr.
+        """
+        self._verify_error(
+            """
+            [node]
+            tub.port = AUTO
+            reveal-IP-address = false
+            """,
+            "Privacy requested",
+        )
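# The MemoryReactor technique the tests above rely on, in brief (a sketch of
# the pattern, not additional test code): MemoryReactor records
# callWhenRunning callbacks instead of running them, so draining the recorded
# hooks by hand makes the whole startup sequence synchronous:
#
#     reactor = MemoryReactor()
#     with AlternateReactor(reactor):
#         service.startService()
#         for f, a, k in reactor.whenRunningHooks:
#             f(*a, **k)   # run the scheduled startup work immediately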
@@ -1,273 +0,0 @@
-import os
-import shutil
-import subprocess
-from os.path import join
-from mock import patch
-from six.moves import StringIO
-from functools import partial
-
-from twisted.trial import unittest
-from allmydata.scripts import runner
-
-
-#@patch('twisted.internet.reactor')
-@patch('allmydata.scripts.tahoe_start.subprocess')
-class RunStartTests(unittest.TestCase):
-
-    def setUp(self):
-        d = super(RunStartTests, self).setUp()
-        self.node_dir = self.mktemp()
-        os.mkdir(self.node_dir)
-        return d
-
-    def _placeholder_nodetype(self, nodetype):
-        fname = join(self.node_dir, '{}.tac'.format(nodetype))
-        with open(fname, 'w') as f:
-            f.write("test placeholder")
-
-    def _pid_file(self, pid):
-        fname = join(self.node_dir, 'twistd.pid')
-        with open(fname, 'w') as f:
-            f.write(u"{}\n".format(pid))
-
-    def _logs(self, logs):
-        os.mkdir(join(self.node_dir, 'logs'))
-        fname = join(self.node_dir, 'logs', 'twistd.log')
-        with open(fname, 'w') as f:
-            f.write(logs)
-
-    def test_start_defaults(self, _subprocess):
-        self._placeholder_nodetype('client')
-        self._pid_file(1234)
-        self._logs('one log\ntwo log\nred log\nblue log\n')
-
-        config = runner.parse_or_exit_with_explanation([
-            # have to do this so the tests don't muck around in
-            # ~/.tahoe (the default)
-            '--node-directory', self.node_dir,
-            'start',
-        ])
-        i, o, e = StringIO(), StringIO(), StringIO()
-        try:
-            with patch('allmydata.scripts.tahoe_start.os'):
-                with patch('allmydata.scripts.runner.sys') as s:
-                    exit_code = [None]
-                    def _exit(code):
-                        exit_code[0] = code
-                    s.exit = _exit
-
-                    def launch(*args, **kw):
-                        with open(join(self.node_dir, 'logs', 'twistd.log'), 'a') as f:
-                            f.write('client running\n')  # "the magic"
-                    _subprocess.check_call = launch
-                    runner.dispatch(config, i, o, e)
-        except Exception:
-            pass
-
-        self.assertEqual([0], exit_code)
-        self.assertTrue('Node has started' in o.getvalue())
-
-    def test_start_fails(self, _subprocess):
-        self._placeholder_nodetype('client')
-        self._logs('existing log line\n')
-
-        config = runner.parse_or_exit_with_explanation([
-            # have to do this so the tests don't muck around in
-            # ~/.tahoe (the default)
-            '--node-directory', self.node_dir,
-            'start',
-        ])
-
-        i, o, e = StringIO(), StringIO(), StringIO()
-        with patch('allmydata.scripts.tahoe_start.time') as t:
-            with patch('allmydata.scripts.runner.sys') as s:
-                exit_code = [None]
-                def _exit(code):
-                    exit_code[0] = code
-                s.exit = _exit
-
-                thetime = [0]
-                def _time():
-                    thetime[0] += 0.1
-                    return thetime[0]
-                t.time = _time
-
-                def launch(*args, **kw):
-                    with open(join(self.node_dir, 'logs', 'twistd.log'), 'a') as f:
-                        f.write('a new log line\n')
-                _subprocess.check_call = launch
-
-                runner.dispatch(config, i, o, e)
-
-        # should print out the collected logs and an error-code
-        self.assertTrue("a new log line" in o.getvalue())
-        self.assertEqual([1], exit_code)
-
-    def test_start_subprocess_fails(self, _subprocess):
-        self._placeholder_nodetype('client')
-        self._logs('existing log line\n')
-
-        config = runner.parse_or_exit_with_explanation([
-            # have to do this so the tests don't muck around in
-            # ~/.tahoe (the default)
-            '--node-directory', self.node_dir,
-            'start',
-        ])
-
-        i, o, e = StringIO(), StringIO(), StringIO()
-        with patch('allmydata.scripts.tahoe_start.time'):
-            with patch('allmydata.scripts.runner.sys') as s:
-                # undo patch for the exception-class
-                _subprocess.CalledProcessError = subprocess.CalledProcessError
-                exit_code = [None]
-                def _exit(code):
-                    exit_code[0] = code
-                s.exit = _exit
-
-                def launch(*args, **kw):
-                    raise subprocess.CalledProcessError(42, "tahoe")
-                _subprocess.check_call = launch
-
-                runner.dispatch(config, i, o, e)
-
-        # should get our "odd" error-code
-        self.assertEqual([42], exit_code)
-
-    def test_start_help(self, _subprocess):
-        self._placeholder_nodetype('client')
-
-        std = StringIO()
-        with patch('sys.stdout') as stdo:
-            stdo.write = std.write
-            try:
-                runner.parse_or_exit_with_explanation([
-                    # have to do this so the tests don't muck around in
-                    # ~/.tahoe (the default)
-                    '--node-directory', self.node_dir,
-                    'start',
-                    '--help',
-                ], stdout=std)
-                self.fail("Should get exit")
-            except SystemExit as e:
-                print(e)
-
-        self.assertIn(
-            "Usage:",
-            std.getvalue()
-        )
-
-    def test_start_unknown_node_type(self, _subprocess):
-        self._placeholder_nodetype('bogus')
-
-        config = runner.parse_or_exit_with_explanation([
-            # have to do this so the tests don't muck around in
-            # ~/.tahoe (the default)
-            '--node-directory', self.node_dir,
-            'start',
-        ])
-
-        i, o, e = StringIO(), StringIO(), StringIO()
-        with patch('allmydata.scripts.runner.sys') as s:
-            exit_code = [None]
-            def _exit(code):
-                exit_code[0] = code
-            s.exit = _exit
-
-            runner.dispatch(config, i, o, e)
-
-        # should print out the collected logs and an error-code
-        self.assertIn(
-            "is not a recognizable node directory",
-            e.getvalue()
-        )
-        self.assertEqual([1], exit_code)
-
-    def test_start_nodedir_not_dir(self, _subprocess):
-        shutil.rmtree(self.node_dir)
-        assert not os.path.isdir(self.node_dir)
-
-        config = runner.parse_or_exit_with_explanation([
-            # have to do this so the tests don't muck around in
-            # ~/.tahoe (the default)
-            '--node-directory', self.node_dir,
-            'start',
-        ])
-
-        i, o, e = StringIO(), StringIO(), StringIO()
-        with patch('allmydata.scripts.runner.sys') as s:
-            exit_code = [None]
-            def _exit(code):
-                exit_code[0] = code
-            s.exit = _exit
-
-            runner.dispatch(config, i, o, e)
-
-        # should print out the collected logs and an error-code
-        self.assertIn(
-            "does not look like a directory at all",
-            e.getvalue()
-        )
-        self.assertEqual([1], exit_code)
-
-
-class RunTests(unittest.TestCase):
-    """
-    Tests confirming end-user behavior of CLI commands
-    """
-
-    def setUp(self):
-        d = super(RunTests, self).setUp()
-        self.addCleanup(partial(os.chdir, os.getcwd()))
-        self.node_dir = self.mktemp()
-        os.mkdir(self.node_dir)
-        return d
-
-    @patch('twisted.internet.reactor')
-    def test_run_invalid_config(self, reactor):
-        """
-        Configuration that's invalid should be obvious to the user
-        """
-
-        def cwr(fn, *args, **kw):
-            fn()
-
-        def stop(*args, **kw):
-            stopped.append(None)
-        stopped = []
-        reactor.callWhenRunning = cwr
-        reactor.stop = stop
-
-        with open(os.path.join(self.node_dir, "client.tac"), "w") as f:
-            f.write('test')
-
-        with open(os.path.join(self.node_dir, "tahoe.cfg"), "w") as f:
-            f.write(
-                "[invalid section]\n"
-                "foo = bar\n"
-            )
-
-        config = runner.parse_or_exit_with_explanation([
-            # have to do this so the tests don't muck around in
-            # ~/.tahoe (the default)
-            '--node-directory', self.node_dir,
-            'run',
-        ])
-
-        i, o, e = StringIO(), StringIO(), StringIO()
-        d = runner.dispatch(config, i, o, e)
-
-        self.assertFailure(d, SystemExit)
-
-        output = e.getvalue()
-        # should print out the collected logs and an error-code
-        self.assertIn(
-            "invalid section",
-            output,
-        )
-        self.assertIn(
-            "Configuration error:",
-            output,
-        )
-        # ensure reactor.stop was actually called
-        self.assertEqual([None], stopped)
-        return d
@@ -5,7 +5,6 @@ __all__ = [
     "on_stdout",
     "on_stdout_and_stderr",
     "on_different",
-    "wait_for_exit",
 ]
 
 import os
@@ -14,8 +13,11 @@ from errno import ENOENT
 
 import attr
 
+from eliot import (
+    log_call,
+)
+
 from twisted.internet.error import (
     ProcessDone,
     ProcessTerminated,
     ProcessExitedAlready,
 )
@@ -25,9 +27,6 @@ from twisted.internet.interfaces import (
 from twisted.python.filepath import (
     FilePath,
 )
-from twisted.python.runtime import (
-    platform,
-)
 from twisted.internet.protocol import (
     Protocol,
     ProcessProtocol,
@@ -42,11 +41,9 @@ from twisted.internet.task import (
 from ..client import (
     _Client,
 )
-from ..scripts.tahoe_stop import (
-    COULD_NOT_STOP,
-)
 from ..util.eliotutil import (
     inline_callbacks,
+    log_call_deferred,
 )
 
 class Expect(Protocol, object):
@@ -156,6 +153,7 @@ class CLINodeAPI(object):
             env=os.environ,
         )
 
+    @log_call(action_type="test:cli-api:run", include_args=["extra_tahoe_args"])
     def run(self, protocol, extra_tahoe_args=()):
         """
         Start the node running.
@@ -176,28 +174,21 @@ class CLINodeAPI(object):
             if ENOENT != e.errno:
                 raise
 
-    def stop(self, protocol):
-        self._execute(
-            protocol,
-            [u"stop", self.basedir.asTextMode().path],
-        )
+    @log_call_deferred(action_type="test:cli-api:stop")
+    def stop(self):
+        return self.stop_and_wait()
 
+    @log_call_deferred(action_type="test:cli-api:stop-and-wait")
     @inline_callbacks
     def stop_and_wait(self):
-        if platform.isWindows():
-            # On Windows there is no PID file and no "tahoe stop".
-            if self.process is not None:
-                while True:
-                    try:
-                        self.process.signalProcess("TERM")
-                    except ProcessExitedAlready:
-                        break
-                    else:
-                        yield deferLater(self.reactor, 0.1, lambda: None)
-        else:
-            protocol, ended = wait_for_exit()
-            self.stop(protocol)
-            yield ended
+        if self.process is not None:
+            while True:
+                try:
+                    self.process.signalProcess("TERM")
+                except ProcessExitedAlready:
+                    break
+                else:
+                    yield deferLater(self.reactor, 0.1, lambda: None)
 
     def active(self):
         # By writing this file, we get two minutes before the client will
@@ -208,28 +199,9 @@ class CLINodeAPI(object):
     def _check_cleanup_reason(self, reason):
         # Let it fail because the process has already exited.
         reason.trap(ProcessTerminated)
-        if reason.value.exitCode != COULD_NOT_STOP:
-            return reason
         return None
 
     def cleanup(self):
         stopping = self.stop_and_wait()
        stopping.addErrback(self._check_cleanup_reason)
         return stopping
-
-
-class _WaitForEnd(ProcessProtocol, object):
-    def __init__(self, ended):
-        self._ended = ended
-
-    def processEnded(self, reason):
-        if reason.check(ProcessDone):
-            self._ended.callback(None)
-        else:
-            self._ended.errback(reason)
-
-
-def wait_for_exit():
-    ended = Deferred()
-    protocol = _WaitForEnd(ended)
-    return protocol, ended
@@ -11,7 +11,7 @@ __all__ = [
     "skipIf",
 ]
 
-from past.builtins import chr as byteschr
+from past.builtins import chr as byteschr, unicode
 
 import os, random, struct
 import six
@@ -64,10 +64,16 @@ from twisted.internet.endpoints import AdoptedStreamServerEndpoint
 from twisted.trial.unittest import TestCase as _TrialTestCase
 
 from allmydata import uri
-from allmydata.interfaces import IMutableFileNode, IImmutableFileNode,\
-                                 NotEnoughSharesError, ICheckable, \
-                                 IMutableUploadable, SDMF_VERSION, \
-                                 MDMF_VERSION
+from allmydata.interfaces import (
+    IMutableFileNode,
+    IImmutableFileNode,
+    NotEnoughSharesError,
+    ICheckable,
+    IMutableUploadable,
+    SDMF_VERSION,
+    MDMF_VERSION,
+    IAddressFamily,
+)
 from allmydata.check_results import CheckResults, CheckAndRepairResults, \
                                     DeepCheckResults, DeepCheckAndRepairResults
 from allmydata.storage_client import StubServer
@@ -819,13 +825,18 @@ class WebErrorMixin(object):
                       code=None, substring=None, response_substring=None,
                       callable=None, *args, **kwargs):
         # returns a Deferred with the response body
-        assert substring is None or isinstance(substring, str)
+        if isinstance(substring, bytes):
+            substring = unicode(substring, "ascii")
+        if isinstance(response_substring, unicode):
+            response_substring = response_substring.encode("ascii")
+        assert substring is None or isinstance(substring, unicode)
+        assert response_substring is None or isinstance(response_substring, bytes)
         assert callable
         def _validate(f):
             if code is not None:
-                self.failUnlessEqual(f.value.status, str(code), which)
+                self.failUnlessEqual(f.value.status, b"%d" % code, which)
             if substring:
-                code_string = str(f)
+                code_string = unicode(f)
                 self.failUnless(substring in code_string,
                                 "%s: substring '%s' not in '%s'"
                                 % (which, substring, code_string))
@@ -1147,6 +1158,28 @@ def _corrupt_uri_extension(data, debug=False):
     return corrupt_field(data, 0x0c+uriextoffset, uriextlen)
 
 
+@attr.s
+@implementer(IAddressFamily)
+class ConstantAddresses(object):
+    """
+    Pretend to provide support for some address family but just hand out
+    canned responses.
+    """
+    _listener = attr.ib(default=None)
+    _handler = attr.ib(default=None)
+
+    def get_listener(self):
+        if self._listener is None:
+            raise Exception("{!r} has no listener.")
+        return self._listener
+
+    def get_client_endpoint(self):
+        if self._handler is None:
+            raise Exception("{!r} has no client endpoint.")
+        return self._handler
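# A sketch of intended use in tests (values are hypothetical; note that the
# attrs-generated __init__ takes the attribute names without the underscore):
#
#     fam = ConstantAddresses(handler=object())
#     fam.get_client_endpoint()   # -> the canned handler
#     fam.get_listener()          # raises, since no listener was supplied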
+
+
 class _TestCaseMixin(object):
     """
     A mixin for ``TestCase`` which collects helpful behaviors for subclasses.
@@ -1,5 +1,9 @@
 from __future__ import print_function
 
+from future.utils import PY2, native_str, bchr, binary_type
+from future.builtins import str as future_str
+from past.builtins import unicode
+
 import os
 import time
 import signal
@@ -17,9 +21,6 @@ from twisted.trial import unittest
 from ..util.assertutil import precondition
 from ..scripts import runner
 from allmydata.util.encodingutil import unicode_platform, get_filesystem_encoding, get_io_encoding
-# Imported for backwards compatibility:
-from future.utils import bord, bchr, binary_type
-from past.builtins import unicode
 
 
 def skip_if_cannot_represent_filename(u):
@@ -48,24 +49,23 @@ def _getvalue(io):
     return io.read()
 
 
-def run_cli_bytes(verb, *args, **kwargs):
+def run_cli_native(verb, *args, **kwargs):
     """
-    Run a Tahoe-LAFS CLI command specified as bytes.
+    Run a Tahoe-LAFS CLI command specified as bytes (on Python 2) or Unicode
+    (on Python 3); basically, it accepts a native string.
 
     Most code should prefer ``run_cli_unicode`` which deals with all the
-    necessary encoding considerations. This helper still exists so that novel
-    misconfigurations can be explicitly tested (for example, receiving UTF-8
-    bytes when the system encoding claims to be ASCII).
+    necessary encoding considerations.
 
-    :param bytes verb: The command to run. For example, ``b"create-node"``.
+    :param native_str verb: The command to run. For example, ``"create-node"``.
 
-    :param [bytes] args: The arguments to pass to the command. For example,
-        ``(b"--hostname=localhost",)``.
+    :param [native_str] args: The arguments to pass to the command. For example,
+        ``("--hostname=localhost",)``.
 
-    :param [bytes] nodeargs: Extra arguments to pass to the Tahoe executable
+    :param [native_str] nodeargs: Extra arguments to pass to the Tahoe executable
         before ``verb``.
 
-    :param bytes stdin: Text to pass to the command via stdin.
+    :param native_str stdin: Text to pass to the command via stdin.
 
     :param NoneType|str encoding: The name of an encoding which stdout and
         stderr will be configured to use. ``None`` means stdout and stderr
@@ -75,8 +75,8 @@ def run_cli_native(verb, *args, **kwargs):
     nodeargs = kwargs.pop("nodeargs", [])
     encoding = kwargs.pop("encoding", None)
     precondition(
-        all(isinstance(arg, bytes) for arg in [verb] + nodeargs + list(args)),
-        "arguments to run_cli must be bytes -- convert using unicode_to_argv",
+        all(isinstance(arg, native_str) for arg in [verb] + nodeargs + list(args)),
+        "arguments to run_cli must be a native string -- convert using unicode_to_argv",
         verb=verb,
         args=args,
         nodeargs=nodeargs,
@@ -139,15 +139,19 @@ def run_cli_unicode(verb, argv, nodeargs=None, stdin=None, encoding=None):
     if nodeargs is None:
         nodeargs = []
     precondition(
-        all(isinstance(arg, unicode) for arg in [verb] + nodeargs + argv),
+        all(isinstance(arg, future_str) for arg in [verb] + nodeargs + argv),
         "arguments to run_cli_unicode must be unicode",
         verb=verb,
        nodeargs=nodeargs,
        argv=argv,
    )
    codec = encoding or "ascii"
-    encode = lambda t: None if t is None else t.encode(codec)
-    d = run_cli_bytes(
+    if PY2:
+        encode = lambda t: None if t is None else t.encode(codec)
|
||||
else:
|
||||
# On Python 3 command-line parsing expects Unicode!
|
||||
encode = lambda t: t
|
||||
d = run_cli_native(
|
||||
encode(verb),
|
||||
nodeargs=list(encode(arg) for arg in nodeargs),
|
||||
stdin=encode(stdin),
|
||||
@ -165,7 +169,7 @@ def run_cli_unicode(verb, argv, nodeargs=None, stdin=None, encoding=None):
|
||||
return d
|
||||
|
||||
|
||||
run_cli = run_cli_bytes
|
||||
run_cli = run_cli_native
|
||||
|
||||
|
||||
def parse_cli(*argv):
|
||||
@ -181,13 +185,12 @@ def insecurerandstr(n):
|
||||
return b''.join(map(bchr, map(randrange, [0]*n, [256]*n)))
|
||||
|
||||
def flip_bit(good, which):
|
||||
# TODO Probs need to update with bchr/bord as with flip_one_bit, below.
|
||||
# flip the low-order bit of good[which]
|
||||
"""Flip the low-order bit of good[which]."""
|
||||
if which == -1:
|
||||
pieces = good[:which], good[-1:], ""
|
||||
pieces = good[:which], good[-1:], b""
|
||||
else:
|
||||
pieces = good[:which], good[which:which+1], good[which+1:]
|
||||
return pieces[0] + chr(ord(pieces[1]) ^ 0x01) + pieces[2]
|
||||
return pieces[0] + bchr(ord(pieces[1]) ^ 0x01) + pieces[2]
|
||||
|
||||
def flip_one_bit(s, offset=0, size=None):
|
||||
""" flip one random bit of the string s, in a byte greater than or equal to offset and less
|
||||
@ -196,7 +199,7 @@ def flip_one_bit(s, offset=0, size=None):
|
||||
if size is None:
|
||||
size=len(s)-offset
|
||||
i = randrange(offset, offset+size)
|
||||
result = s[:i] + bchr(bord(s[i])^(0x01<<randrange(0, 8))) + s[i+1:]
|
||||
result = s[:i] + bchr(ord(s[i:i+1])^(0x01<<randrange(0, 8))) + s[i+1:]
|
||||
assert result != s, "Internal error -- flip_one_bit() produced the same string as its input: %s == %s" % (result, s)
|
||||
return result
|
||||
|
||||
|
@ -47,12 +47,17 @@ class VerboseError(Error):
|
||||
|
||||
@inlineCallbacks
|
||||
def do_http(method, url, **kwargs):
|
||||
"""
|
||||
Run HTTP query, return Deferred of body as bytes.
|
||||
"""
|
||||
response = yield treq.request(method, url, persistent=False, **kwargs)
|
||||
body = yield treq.content(response)
|
||||
# TODO: replace this with response.fail_for_status when
|
||||
# https://github.com/twisted/treq/pull/159 has landed
|
||||
if 400 <= response.code < 600:
|
||||
raise VerboseError(response.code, response=body)
|
||||
raise VerboseError(
|
||||
response.code, response="For request {} to {}, got: {}".format(
|
||||
method, url, body))
|
||||
returnValue(body)
|
||||
|
||||
|
||||
|
@ -31,6 +31,9 @@ from twisted.internet.defer import (
|
||||
maybeDeferred,
|
||||
)
|
||||
|
||||
from ..util.jsonbytes import BytesJSONEncoder
|
||||
|
||||
|
||||
_NAME = Field.for_types(
|
||||
u"name",
|
||||
[str],
|
||||
@ -61,6 +64,14 @@ def eliot_logged_test(f):
|
||||
class storage(object):
|
||||
pass
|
||||
|
||||
|
||||
# On Python 3, we want to use our custom JSON encoder when validating
|
||||
# messages can be encoded to JSON:
|
||||
if PY3:
|
||||
capture = lambda f : capture_logging(None, encoder_=BytesJSONEncoder)(f)
|
||||
else:
|
||||
capture = lambda f : capture_logging(None)(f)
|
||||
|
||||
@wraps(f)
|
||||
def run_and_republish(self, *a, **kw):
|
||||
# Unfortunately the only way to get at the global/default logger...
|
||||
@ -85,7 +96,7 @@ def eliot_logged_test(f):
|
||||
# can finish the test's action.
|
||||
storage.action.finish()
|
||||
|
||||
@capture_logging(None)
|
||||
@capture
|
||||
def run(self, logger):
|
||||
# Record the MemoryLogger for later message extraction.
|
||||
storage.logger = logger
|
||||
@ -165,9 +176,6 @@ class EliotLoggedRunTest(object):
|
||||
|
||||
@eliot_logged_test
|
||||
def run(self, result=None):
|
||||
# Workaround for https://github.com/itamarst/eliot/issues/456
|
||||
if PY3:
|
||||
self.case.eliot_logger._validate_message = lambda *args, **kwargs: None
|
||||
return self._run_tests_with_factory(
|
||||
self.case,
|
||||
self.handlers,
|
||||
|
@ -24,6 +24,7 @@ from future.utils import PY2
|
||||
if PY2:
|
||||
from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min # noqa: F401
|
||||
from past.builtins import unicode
|
||||
from six import ensure_text
|
||||
|
||||
import os
|
||||
from base64 import b32encode
|
||||
@ -618,8 +619,7 @@ class GridTestMixin(object):
|
||||
method="GET", clientnum=0, **kwargs):
|
||||
# if return_response=True, this fires with (data, statuscode,
|
||||
# respheaders) instead of just data.
|
||||
assert not isinstance(urlpath, unicode)
|
||||
url = self.client_baseurls[clientnum] + urlpath
|
||||
url = self.client_baseurls[clientnum] + ensure_text(urlpath)
|
||||
|
||||
response = yield treq.request(method, url, persistent=False,
|
||||
allow_redirects=followRedirect,
|
||||
|
@ -144,7 +144,7 @@ class WebResultsRendering(unittest.TestCase):
|
||||
|
||||
@staticmethod
|
||||
def remove_tags(html):
|
||||
return BeautifulSoup(html).get_text(separator=" ")
|
||||
return BeautifulSoup(html, 'html5lib').get_text(separator=" ")
|
||||
|
||||
def create_fake_client(self):
|
||||
sb = StorageFarmBroker(True, None, EMPTY_CLIENT_CONFIG)
|
||||
@ -173,7 +173,7 @@ class WebResultsRendering(unittest.TestCase):
|
||||
return c
|
||||
|
||||
def render_json(self, resource):
|
||||
return self.successResultOf(render(resource, {"output": ["json"]}))
|
||||
return self.successResultOf(render(resource, {b"output": [b"json"]}))
|
||||
|
||||
def render_element(self, element, args=None):
|
||||
if args is None:
|
||||
@ -186,7 +186,7 @@ class WebResultsRendering(unittest.TestCase):
|
||||
html = self.render_element(lcr)
|
||||
self.failUnlessIn(b"Literal files are always healthy", html)
|
||||
|
||||
html = self.render_element(lcr, args={"return_to": ["FOOURL"]})
|
||||
html = self.render_element(lcr, args={b"return_to": [b"FOOURL"]})
|
||||
self.failUnlessIn(b"Literal files are always healthy", html)
|
||||
self.failUnlessIn(b'<a href="FOOURL">Return to file.</a>', html)
|
||||
|
||||
@ -269,7 +269,7 @@ class WebResultsRendering(unittest.TestCase):
|
||||
self.failUnlessIn("File Check Results for SI=2k6avp", s) # abbreviated
|
||||
self.failUnlessIn("Not Recoverable! : rather dead", s)
|
||||
|
||||
html = self.render_element(w, args={"return_to": ["FOOURL"]})
|
||||
html = self.render_element(w, args={b"return_to": [b"FOOURL"]})
|
||||
self.failUnlessIn(b'<a href="FOOURL">Return to file/directory.</a>',
|
||||
html)
|
||||
|
||||
|
@ -1,149 +1,69 @@
|
||||
import os
|
||||
import mock
|
||||
|
||||
from twisted.trial import unittest
|
||||
from twisted.internet import reactor, endpoints, defer
|
||||
from twisted.internet.interfaces import IStreamClientEndpoint
|
||||
from twisted.internet import reactor
|
||||
|
||||
from foolscap.connections import tcp
|
||||
|
||||
from testtools.matchers import (
|
||||
MatchesDict,
|
||||
IsInstance,
|
||||
Equals,
|
||||
)
|
||||
|
||||
from ..node import PrivacyError, config_from_string
|
||||
from ..node import create_connection_handlers
|
||||
from ..node import create_main_tub, _tub_portlocation
|
||||
from ..node import create_main_tub
|
||||
from ..util.i2p_provider import create as create_i2p_provider
|
||||
from ..util.tor_provider import create as create_tor_provider
|
||||
|
||||
from .common import (
|
||||
SyncTestCase,
|
||||
ConstantAddresses,
|
||||
)
|
||||
|
||||
|
||||
BASECONFIG = ""
|
||||
|
||||
|
||||
class TCP(unittest.TestCase):
|
||||
|
||||
def test_default(self):
|
||||
class CreateConnectionHandlersTests(SyncTestCase):
|
||||
"""
|
||||
Tests for the Foolscap connection handlers return by
|
||||
``create_connection_handlers``.
|
||||
"""
|
||||
def test_foolscap_handlers(self):
|
||||
"""
|
||||
``create_connection_handlers`` returns a Foolscap connection handlers
|
||||
dictionary mapping ``"tcp"`` to
|
||||
``foolscap.connections.tcp.DefaultTCP``, ``"tor"`` to the supplied Tor
|
||||
provider's handler, and ``"i2p"`` to the supplied I2P provider's
|
||||
handler.
|
||||
"""
|
||||
config = config_from_string(
|
||||
"fake.port",
|
||||
"no-basedir",
|
||||
BASECONFIG,
|
||||
)
|
||||
_, foolscap_handlers = create_connection_handlers(None, config, mock.Mock(), mock.Mock())
|
||||
self.assertIsInstance(
|
||||
foolscap_handlers['tcp'],
|
||||
tcp.DefaultTCP,
|
||||
tor_endpoint = object()
|
||||
tor = ConstantAddresses(handler=tor_endpoint)
|
||||
i2p_endpoint = object()
|
||||
i2p = ConstantAddresses(handler=i2p_endpoint)
|
||||
_, foolscap_handlers = create_connection_handlers(
|
||||
config,
|
||||
i2p,
|
||||
tor,
|
||||
)
|
||||
self.assertThat(
|
||||
foolscap_handlers,
|
||||
MatchesDict({
|
||||
"tcp": IsInstance(tcp.DefaultTCP),
|
||||
"i2p": Equals(i2p_endpoint),
|
||||
"tor": Equals(tor_endpoint),
|
||||
}),
|
||||
)
|
||||
|
||||
|
||||
class Tor(unittest.TestCase):
|
||||
|
||||
def test_disabled(self):
|
||||
config = config_from_string(
|
||||
"fake.port",
|
||||
"no-basedir",
|
||||
BASECONFIG + "[tor]\nenabled = false\n",
|
||||
)
|
||||
tor_provider = create_tor_provider(reactor, config)
|
||||
h = tor_provider.get_tor_handler()
|
||||
self.assertEqual(h, None)
|
||||
|
||||
def test_unimportable(self):
|
||||
with mock.patch("allmydata.util.tor_provider._import_tor",
|
||||
return_value=None):
|
||||
config = config_from_string("fake.port", "no-basedir", BASECONFIG)
|
||||
tor_provider = create_tor_provider(reactor, config)
|
||||
h = tor_provider.get_tor_handler()
|
||||
self.assertEqual(h, None)
|
||||
|
||||
def test_default(self):
|
||||
h1 = mock.Mock()
|
||||
with mock.patch("foolscap.connections.tor.default_socks",
|
||||
return_value=h1) as f:
|
||||
|
||||
config = config_from_string("fake.port", "no-basedir", BASECONFIG)
|
||||
tor_provider = create_tor_provider(reactor, config)
|
||||
h = tor_provider.get_tor_handler()
|
||||
self.assertEqual(f.mock_calls, [mock.call()])
|
||||
self.assertIdentical(h, h1)
|
||||
|
||||
def _do_test_launch(self, executable):
|
||||
# the handler is created right away
|
||||
config = BASECONFIG+"[tor]\nlaunch = true\n"
|
||||
if executable:
|
||||
config += "tor.executable = %s\n" % executable
|
||||
h1 = mock.Mock()
|
||||
with mock.patch("foolscap.connections.tor.control_endpoint_maker",
|
||||
return_value=h1) as f:
|
||||
|
||||
config = config_from_string("fake.port", ".", config)
|
||||
tp = create_tor_provider("reactor", config)
|
||||
h = tp.get_tor_handler()
|
||||
|
||||
private_dir = config.get_config_path("private")
|
||||
exp = mock.call(tp._make_control_endpoint,
|
||||
takes_status=True)
|
||||
self.assertEqual(f.mock_calls, [exp])
|
||||
self.assertIdentical(h, h1)
|
||||
|
||||
# later, when Foolscap first connects, Tor should be launched
|
||||
reactor = "reactor"
|
||||
tcp = object()
|
||||
tcep = object()
|
||||
launch_tor = mock.Mock(return_value=defer.succeed(("ep_desc", tcp)))
|
||||
cfs = mock.Mock(return_value=tcep)
|
||||
with mock.patch("allmydata.util.tor_provider._launch_tor", launch_tor):
|
||||
with mock.patch("allmydata.util.tor_provider.clientFromString", cfs):
|
||||
d = tp._make_control_endpoint(reactor,
|
||||
update_status=lambda status: None)
|
||||
cep = self.successResultOf(d)
|
||||
launch_tor.assert_called_with(reactor, executable,
|
||||
os.path.abspath(private_dir),
|
||||
tp._txtorcon)
|
||||
cfs.assert_called_with(reactor, "ep_desc")
|
||||
self.assertIs(cep, tcep)
|
||||
|
||||
def test_launch(self):
|
||||
self._do_test_launch(None)
|
||||
|
||||
def test_launch_executable(self):
|
||||
self._do_test_launch("/special/tor")
|
||||
|
||||
def test_socksport_unix_endpoint(self):
|
||||
h1 = mock.Mock()
|
||||
with mock.patch("foolscap.connections.tor.socks_endpoint",
|
||||
return_value=h1) as f:
|
||||
config = config_from_string(
|
||||
"fake.port",
|
||||
"no-basedir",
|
||||
BASECONFIG + "[tor]\nsocks.port = unix:/var/lib/fw-daemon/tor_socks.socket\n",
|
||||
)
|
||||
tor_provider = create_tor_provider(reactor, config)
|
||||
h = tor_provider.get_tor_handler()
|
||||
self.assertTrue(IStreamClientEndpoint.providedBy(f.mock_calls[0][1][0]))
|
||||
self.assertIdentical(h, h1)
|
||||
|
||||
def test_socksport_endpoint(self):
|
||||
h1 = mock.Mock()
|
||||
with mock.patch("foolscap.connections.tor.socks_endpoint",
|
||||
return_value=h1) as f:
|
||||
config = config_from_string(
|
||||
"fake.port",
|
||||
"no-basedir",
|
||||
BASECONFIG + "[tor]\nsocks.port = tcp:127.0.0.1:1234\n",
|
||||
)
|
||||
tor_provider = create_tor_provider(reactor, config)
|
||||
h = tor_provider.get_tor_handler()
|
||||
self.assertTrue(IStreamClientEndpoint.providedBy(f.mock_calls[0][1][0]))
|
||||
self.assertIdentical(h, h1)
|
||||
|
||||
def test_socksport_endpoint_otherhost(self):
|
||||
h1 = mock.Mock()
|
||||
with mock.patch("foolscap.connections.tor.socks_endpoint",
|
||||
return_value=h1) as f:
|
||||
config = config_from_string(
|
||||
"no-basedir",
|
||||
"fake.port",
|
||||
BASECONFIG + "[tor]\nsocks.port = tcp:otherhost:1234\n",
|
||||
)
|
||||
tor_provider = create_tor_provider(reactor, config)
|
||||
h = tor_provider.get_tor_handler()
|
||||
self.assertTrue(IStreamClientEndpoint.providedBy(f.mock_calls[0][1][0]))
|
||||
self.assertIdentical(h, h1)
|
||||
|
||||
def test_socksport_bad_endpoint(self):
|
||||
config = config_from_string(
|
||||
"fake.port",
|
||||
@ -176,73 +96,8 @@ class Tor(unittest.TestCase):
|
||||
str(ctx.exception)
|
||||
)
|
||||
|
||||
def test_controlport(self):
|
||||
h1 = mock.Mock()
|
||||
with mock.patch("foolscap.connections.tor.control_endpoint",
|
||||
return_value=h1) as f:
|
||||
config = config_from_string(
|
||||
"fake.port",
|
||||
"no-basedir",
|
||||
BASECONFIG + "[tor]\ncontrol.port = tcp:localhost:1234\n",
|
||||
)
|
||||
tor_provider = create_tor_provider(reactor, config)
|
||||
h = tor_provider.get_tor_handler()
|
||||
self.assertEqual(len(f.mock_calls), 1)
|
||||
ep = f.mock_calls[0][1][0]
|
||||
self.assertIsInstance(ep, endpoints.TCP4ClientEndpoint)
|
||||
self.assertIdentical(h, h1)
|
||||
|
||||
class I2P(unittest.TestCase):
|
||||
|
||||
def test_disabled(self):
|
||||
config = config_from_string(
|
||||
"fake.port",
|
||||
"no-basedir",
|
||||
BASECONFIG + "[i2p]\nenabled = false\n",
|
||||
)
|
||||
i2p_provider = create_i2p_provider(None, config)
|
||||
h = i2p_provider.get_i2p_handler()
|
||||
self.assertEqual(h, None)
|
||||
|
||||
def test_unimportable(self):
|
||||
config = config_from_string(
|
||||
"fake.port",
|
||||
"no-basedir",
|
||||
BASECONFIG,
|
||||
)
|
||||
with mock.patch("allmydata.util.i2p_provider._import_i2p",
|
||||
return_value=None):
|
||||
i2p_provider = create_i2p_provider(reactor, config)
|
||||
h = i2p_provider.get_i2p_handler()
|
||||
self.assertEqual(h, None)
|
||||
|
||||
def test_default(self):
|
||||
config = config_from_string("fake.port", "no-basedir", BASECONFIG)
|
||||
h1 = mock.Mock()
|
||||
with mock.patch("foolscap.connections.i2p.default",
|
||||
return_value=h1) as f:
|
||||
i2p_provider = create_i2p_provider(reactor, config)
|
||||
h = i2p_provider.get_i2p_handler()
|
||||
self.assertEqual(f.mock_calls, [mock.call(reactor, keyfile=None)])
|
||||
self.assertIdentical(h, h1)
|
||||
|
||||
def test_samport(self):
|
||||
config = config_from_string(
|
||||
"fake.port",
|
||||
"no-basedir",
|
||||
BASECONFIG + "[i2p]\nsam.port = tcp:localhost:1234\n",
|
||||
)
|
||||
h1 = mock.Mock()
|
||||
with mock.patch("foolscap.connections.i2p.sam_endpoint",
|
||||
return_value=h1) as f:
|
||||
i2p_provider = create_i2p_provider(reactor, config)
|
||||
h = i2p_provider.get_i2p_handler()
|
||||
|
||||
self.assertEqual(len(f.mock_calls), 1)
|
||||
ep = f.mock_calls[0][1][0]
|
||||
self.assertIsInstance(ep, endpoints.TCP4ClientEndpoint)
|
||||
self.assertIdentical(h, h1)
|
||||
|
||||
def test_samport_and_launch(self):
|
||||
config = config_from_string(
|
||||
"no-basedir",
|
||||
@ -258,82 +113,6 @@ class I2P(unittest.TestCase):
|
||||
str(ctx.exception)
|
||||
)
|
||||
|
||||
def test_launch(self):
|
||||
config = config_from_string(
|
||||
"fake.port",
|
||||
"no-basedir",
|
||||
BASECONFIG + "[i2p]\nlaunch = true\n",
|
||||
)
|
||||
h1 = mock.Mock()
|
||||
with mock.patch("foolscap.connections.i2p.launch",
|
||||
return_value=h1) as f:
|
||||
i2p_provider = create_i2p_provider(reactor, config)
|
||||
h = i2p_provider.get_i2p_handler()
|
||||
exp = mock.call(i2p_configdir=None, i2p_binary=None)
|
||||
self.assertEqual(f.mock_calls, [exp])
|
||||
self.assertIdentical(h, h1)
|
||||
|
||||
def test_launch_executable(self):
|
||||
config = config_from_string(
|
||||
"fake.port",
|
||||
"no-basedir",
|
||||
BASECONFIG + "[i2p]\nlaunch = true\n" + "i2p.executable = i2p\n",
|
||||
)
|
||||
h1 = mock.Mock()
|
||||
with mock.patch("foolscap.connections.i2p.launch",
|
||||
return_value=h1) as f:
|
||||
i2p_provider = create_i2p_provider(reactor, config)
|
||||
h = i2p_provider.get_i2p_handler()
|
||||
exp = mock.call(i2p_configdir=None, i2p_binary="i2p")
|
||||
self.assertEqual(f.mock_calls, [exp])
|
||||
self.assertIdentical(h, h1)
|
||||
|
||||
def test_launch_configdir(self):
|
||||
config = config_from_string(
|
||||
"fake.port",
|
||||
"no-basedir",
|
||||
BASECONFIG + "[i2p]\nlaunch = true\n" + "i2p.configdir = cfg\n",
|
||||
)
|
||||
h1 = mock.Mock()
|
||||
with mock.patch("foolscap.connections.i2p.launch",
|
||||
return_value=h1) as f:
|
||||
i2p_provider = create_i2p_provider(reactor, config)
|
||||
h = i2p_provider.get_i2p_handler()
|
||||
exp = mock.call(i2p_configdir="cfg", i2p_binary=None)
|
||||
self.assertEqual(f.mock_calls, [exp])
|
||||
self.assertIdentical(h, h1)
|
||||
|
||||
def test_launch_configdir_and_executable(self):
|
||||
config = config_from_string(
|
||||
"no-basedir",
|
||||
"fake.port",
|
||||
BASECONFIG + "[i2p]\nlaunch = true\n" +
|
||||
"i2p.executable = i2p\n" + "i2p.configdir = cfg\n",
|
||||
)
|
||||
h1 = mock.Mock()
|
||||
with mock.patch("foolscap.connections.i2p.launch",
|
||||
return_value=h1) as f:
|
||||
i2p_provider = create_i2p_provider(reactor, config)
|
||||
h = i2p_provider.get_i2p_handler()
|
||||
exp = mock.call(i2p_configdir="cfg", i2p_binary="i2p")
|
||||
self.assertEqual(f.mock_calls, [exp])
|
||||
self.assertIdentical(h, h1)
|
||||
|
||||
def test_configdir(self):
|
||||
config = config_from_string(
|
||||
"fake.port",
|
||||
"no-basedir",
|
||||
BASECONFIG + "[i2p]\ni2p.configdir = cfg\n",
|
||||
)
|
||||
h1 = mock.Mock()
|
||||
with mock.patch("foolscap.connections.i2p.local_i2p",
|
||||
return_value=h1) as f:
|
||||
i2p_provider = create_i2p_provider(None, config)
|
||||
h = i2p_provider.get_i2p_handler()
|
||||
|
||||
self.assertEqual(f.mock_calls, [mock.call("cfg")])
|
||||
self.assertIdentical(h, h1)
|
||||
|
||||
class Connections(unittest.TestCase):
|
||||
|
||||
def setUp(self):
|
||||
@ -341,7 +120,11 @@ class Connections(unittest.TestCase):
|
||||
self.config = config_from_string("fake.port", self.basedir, BASECONFIG)
|
||||
|
||||
def test_default(self):
|
||||
default_connection_handlers, _ = create_connection_handlers(None, self.config, mock.Mock(), mock.Mock())
|
||||
default_connection_handlers, _ = create_connection_handlers(
|
||||
self.config,
|
||||
ConstantAddresses(handler=object()),
|
||||
ConstantAddresses(handler=object()),
|
||||
)
|
||||
self.assertEqual(default_connection_handlers["tcp"], "tcp")
|
||||
self.assertEqual(default_connection_handlers["tor"], "tor")
|
||||
self.assertEqual(default_connection_handlers["i2p"], "i2p")
|
||||
@ -352,23 +135,39 @@ class Connections(unittest.TestCase):
|
||||
"no-basedir",
|
||||
BASECONFIG + "[connections]\ntcp = tor\n",
|
||||
)
|
||||
default_connection_handlers, _ = create_connection_handlers(None, config, mock.Mock(), mock.Mock())
|
||||
default_connection_handlers, _ = create_connection_handlers(
|
||||
config,
|
||||
ConstantAddresses(handler=object()),
|
||||
ConstantAddresses(handler=object()),
|
||||
)
|
||||
|
||||
self.assertEqual(default_connection_handlers["tcp"], "tor")
|
||||
self.assertEqual(default_connection_handlers["tor"], "tor")
|
||||
self.assertEqual(default_connection_handlers["i2p"], "i2p")
|
||||
|
||||
def test_tor_unimportable(self):
|
||||
with mock.patch("allmydata.util.tor_provider._import_tor",
|
||||
return_value=None):
|
||||
self.config = config_from_string(
|
||||
"fake.port",
|
||||
"no-basedir",
|
||||
BASECONFIG + "[connections]\ntcp = tor\n",
|
||||
"""
|
||||
If the configuration calls for substituting Tor for TCP and
|
||||
``foolscap.connections.tor`` is not importable then
|
||||
``create_connection_handlers`` raises ``ValueError`` with a message
|
||||
explaining this makes Tor unusable.
|
||||
"""
|
||||
self.config = config_from_string(
|
||||
"fake.port",
|
||||
"no-basedir",
|
||||
BASECONFIG + "[connections]\ntcp = tor\n",
|
||||
)
|
||||
tor_provider = create_tor_provider(
|
||||
reactor,
|
||||
self.config,
|
||||
import_tor=lambda: None,
|
||||
)
|
||||
with self.assertRaises(ValueError) as ctx:
|
||||
default_connection_handlers, _ = create_connection_handlers(
|
||||
self.config,
|
||||
i2p_provider=ConstantAddresses(handler=object()),
|
||||
tor_provider=tor_provider,
|
||||
)
|
||||
with self.assertRaises(ValueError) as ctx:
|
||||
tor_provider = create_tor_provider(reactor, self.config)
|
||||
default_connection_handlers, _ = create_connection_handlers(None, self.config, mock.Mock(), tor_provider)
|
||||
self.assertEqual(
|
||||
str(ctx.exception),
|
||||
"'tahoe.cfg [connections] tcp='"
|
||||
@ -383,7 +182,11 @@ class Connections(unittest.TestCase):
|
||||
BASECONFIG + "[connections]\ntcp = unknown\n",
|
||||
)
|
||||
with self.assertRaises(ValueError) as ctx:
|
||||
create_connection_handlers(None, config, mock.Mock(), mock.Mock())
|
||||
create_connection_handlers(
|
||||
config,
|
||||
ConstantAddresses(handler=object()),
|
||||
ConstantAddresses(handler=object()),
|
||||
)
|
||||
self.assertIn("'tahoe.cfg [connections] tcp='", str(ctx.exception))
|
||||
self.assertIn("uses unknown handler type 'unknown'", str(ctx.exception))
|
||||
|
||||
@ -393,7 +196,11 @@ class Connections(unittest.TestCase):
|
||||
"no-basedir",
|
||||
BASECONFIG + "[connections]\ntcp = disabled\n",
|
||||
)
|
||||
default_connection_handlers, _ = create_connection_handlers(None, config, mock.Mock(), mock.Mock())
|
||||
default_connection_handlers, _ = create_connection_handlers(
|
||||
config,
|
||||
ConstantAddresses(handler=object()),
|
||||
ConstantAddresses(handler=object()),
|
||||
)
|
||||
self.assertEqual(default_connection_handlers["tcp"], None)
|
||||
self.assertEqual(default_connection_handlers["tor"], "tor")
|
||||
self.assertEqual(default_connection_handlers["i2p"], "i2p")
|
||||
@ -408,11 +215,16 @@ class Privacy(unittest.TestCase):
|
||||
)
|
||||
|
||||
with self.assertRaises(PrivacyError) as ctx:
|
||||
create_connection_handlers(None, config, mock.Mock(), mock.Mock())
|
||||
create_connection_handlers(
|
||||
config,
|
||||
ConstantAddresses(handler=object()),
|
||||
ConstantAddresses(handler=object()),
|
||||
)
|
||||
|
||||
self.assertEqual(
|
||||
str(ctx.exception),
|
||||
"tcp = tcp, must be set to 'tor' or 'disabled'",
|
||||
"Privacy requested with `reveal-IP-address = false` "
|
||||
"but `tcp = tcp` conflicts with this.",
|
||||
)
|
||||
|
||||
def test_connections_tcp_disabled(self):
|
||||
@ -422,7 +234,11 @@ class Privacy(unittest.TestCase):
|
||||
BASECONFIG + "[connections]\ntcp = disabled\n" +
|
||||
"[node]\nreveal-IP-address = false\n",
|
||||
)
|
||||
default_connection_handlers, _ = create_connection_handlers(None, config, mock.Mock(), mock.Mock())
|
||||
default_connection_handlers, _ = create_connection_handlers(
|
||||
config,
|
||||
ConstantAddresses(handler=object()),
|
||||
ConstantAddresses(handler=object()),
|
||||
)
|
||||
self.assertEqual(default_connection_handlers["tcp"], None)
|
||||
|
||||
def test_tub_location_auto(self):
|
||||
@ -433,36 +249,15 @@ class Privacy(unittest.TestCase):
|
||||
)
|
||||
|
||||
with self.assertRaises(PrivacyError) as ctx:
|
||||
create_main_tub(config, {}, {}, {}, mock.Mock(), mock.Mock())
|
||||
create_main_tub(
|
||||
config,
|
||||
tub_options={},
|
||||
default_connection_handlers={},
|
||||
foolscap_connection_handlers={},
|
||||
i2p_provider=ConstantAddresses(),
|
||||
tor_provider=ConstantAddresses(),
|
||||
)
|
||||
self.assertEqual(
|
||||
str(ctx.exception),
|
||||
"tub.location uses AUTO",
|
||||
)
|
||||
|
||||
def test_tub_location_tcp(self):
|
||||
config = config_from_string(
|
||||
"fake.port",
|
||||
"no-basedir",
|
||||
BASECONFIG + "[node]\nreveal-IP-address = false\ntub.location=tcp:hostname:1234\n",
|
||||
)
|
||||
with self.assertRaises(PrivacyError) as ctx:
|
||||
_tub_portlocation(config)
|
||||
self.assertEqual(
|
||||
str(ctx.exception),
|
||||
"tub.location includes tcp: hint",
|
||||
)
|
||||
|
||||
def test_tub_location_legacy_tcp(self):
|
||||
config = config_from_string(
|
||||
"fake.port",
|
||||
"no-basedir",
|
||||
BASECONFIG + "[node]\nreveal-IP-address = false\ntub.location=hostname:1234\n",
|
||||
)
|
||||
|
||||
with self.assertRaises(PrivacyError) as ctx:
|
||||
_tub_portlocation(config)
|
||||
|
||||
self.assertEqual(
|
||||
str(ctx.exception),
|
||||
"tub.location includes tcp: hint",
|
||||
)
|
||||
|
@ -1,5 +1,19 @@
|
||||
"""Tests for the dirnode module."""
|
||||
import six
|
||||
"""Tests for the dirnode module.
|
||||
|
||||
Ported to Python 3.
|
||||
"""
|
||||
from __future__ import absolute_import
|
||||
from __future__ import division
|
||||
from __future__ import print_function
|
||||
from __future__ import unicode_literals
|
||||
|
||||
from past.builtins import long
|
||||
|
||||
from future.utils import PY2
|
||||
if PY2:
|
||||
# Skip list() since it results in spurious test failures
|
||||
from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, object, range, str, max, min # noqa: F401
|
||||
|
||||
import time
|
||||
import unicodedata
|
||||
from zope.interface import implementer
|
||||
@ -31,9 +45,6 @@ import allmydata.test.common_util as testutil
|
||||
from hypothesis import given
|
||||
from hypothesis.strategies import text
|
||||
|
||||
if six.PY3:
|
||||
long = int
|
||||
|
||||
|
||||
@implementer(IConsumer)
|
||||
class MemAccum(object):
|
||||
@ -48,16 +59,16 @@ class MemAccum(object):
|
||||
self.data = data
|
||||
self.producer.resumeProducing()
|
||||
|
||||
setup_py_uri = "URI:CHK:n7r3m6wmomelk4sep3kw5cvduq:os7ijw5c3maek7pg65e5254k2fzjflavtpejjyhshpsxuqzhcwwq:3:20:14861"
|
||||
one_uri = "URI:LIT:n5xgk" # LIT for "one"
|
||||
mut_write_uri = "URI:SSK:vfvcbdfbszyrsaxchgevhmmlii:euw4iw7bbnkrrwpzuburbhppuxhc3gwxv26f6imekhz7zyw2ojnq"
|
||||
mdmf_write_uri = "URI:MDMF:x533rhbm6kiehzl5kj3s44n5ie:4gif5rhneyd763ouo5qjrgnsoa3bg43xycy4robj2rf3tvmhdl3a"
|
||||
empty_litdir_uri = "URI:DIR2-LIT:"
|
||||
tiny_litdir_uri = "URI:DIR2-LIT:gqytunj2onug64tufqzdcosvkjetutcjkq5gw4tvm5vwszdgnz5hgyzufqydulbshj5x2lbm" # contains one child which is itself also LIT
|
||||
mut_read_uri = "URI:SSK-RO:jf6wkflosyvntwxqcdo7a54jvm:euw4iw7bbnkrrwpzuburbhppuxhc3gwxv26f6imekhz7zyw2ojnq"
|
||||
mdmf_read_uri = "URI:MDMF-RO:d4cydxselputycfzkw6qgz4zv4:4gif5rhneyd763ouo5qjrgnsoa3bg43xycy4robj2rf3tvmhdl3a"
|
||||
future_write_uri = "x-tahoe-crazy://I_am_from_the_future."
|
||||
future_read_uri = "x-tahoe-crazy-readonly://I_am_from_the_future."
|
||||
setup_py_uri = b"URI:CHK:n7r3m6wmomelk4sep3kw5cvduq:os7ijw5c3maek7pg65e5254k2fzjflavtpejjyhshpsxuqzhcwwq:3:20:14861"
|
||||
one_uri = b"URI:LIT:n5xgk" # LIT for "one"
|
||||
mut_write_uri = b"URI:SSK:vfvcbdfbszyrsaxchgevhmmlii:euw4iw7bbnkrrwpzuburbhppuxhc3gwxv26f6imekhz7zyw2ojnq"
|
||||
mdmf_write_uri = b"URI:MDMF:x533rhbm6kiehzl5kj3s44n5ie:4gif5rhneyd763ouo5qjrgnsoa3bg43xycy4robj2rf3tvmhdl3a"
|
||||
empty_litdir_uri = b"URI:DIR2-LIT:"
|
||||
tiny_litdir_uri = b"URI:DIR2-LIT:gqytunj2onug64tufqzdcosvkjetutcjkq5gw4tvm5vwszdgnz5hgyzufqydulbshj5x2lbm" # contains one child which is itself also LIT
|
||||
mut_read_uri = b"URI:SSK-RO:jf6wkflosyvntwxqcdo7a54jvm:euw4iw7bbnkrrwpzuburbhppuxhc3gwxv26f6imekhz7zyw2ojnq"
|
||||
mdmf_read_uri = b"URI:MDMF-RO:d4cydxselputycfzkw6qgz4zv4:4gif5rhneyd763ouo5qjrgnsoa3bg43xycy4robj2rf3tvmhdl3a"
|
||||
future_write_uri = b"x-tahoe-crazy://I_am_from_the_future."
|
||||
future_read_uri = b"x-tahoe-crazy-readonly://I_am_from_the_future."
|
||||
future_nonascii_write_uri = u"x-tahoe-even-more-crazy://I_am_from_the_future_rw_\u263A".encode('utf-8')
|
||||
future_nonascii_read_uri = u"x-tahoe-even-more-crazy-readonly://I_am_from_the_future_ro_\u263A".encode('utf-8')
|
||||
|
||||
@ -95,13 +106,13 @@ class Dirnode(GridTestMixin, unittest.TestCase,
|
||||
self.failUnless(u)
|
||||
cap_formats = []
|
||||
if mdmf:
|
||||
cap_formats = ["URI:DIR2-MDMF:",
|
||||
"URI:DIR2-MDMF-RO:",
|
||||
"URI:DIR2-MDMF-Verifier:"]
|
||||
cap_formats = [b"URI:DIR2-MDMF:",
|
||||
b"URI:DIR2-MDMF-RO:",
|
||||
b"URI:DIR2-MDMF-Verifier:"]
|
||||
else:
|
||||
cap_formats = ["URI:DIR2:",
|
||||
"URI:DIR2-RO",
|
||||
"URI:DIR2-Verifier:"]
|
||||
cap_formats = [b"URI:DIR2:",
|
||||
b"URI:DIR2-RO",
|
||||
b"URI:DIR2-Verifier:"]
|
||||
rw, ro, v = cap_formats
|
||||
self.failUnless(u.startswith(rw), u)
|
||||
u_ro = n.get_readonly_uri()
|
||||
@ -149,7 +160,7 @@ class Dirnode(GridTestMixin, unittest.TestCase,
|
||||
self.failUnless(isinstance(subdir, dirnode.DirectoryNode))
|
||||
self.subdir = subdir
|
||||
new_v = subdir.get_verify_cap().to_string()
|
||||
assert isinstance(new_v, str)
|
||||
assert isinstance(new_v, bytes)
|
||||
self.expected_manifest.append( ((u"subdir",), subdir.get_uri()) )
|
||||
self.expected_verifycaps.add(new_v)
|
||||
si = subdir.get_storage_index()
|
||||
@ -182,7 +193,7 @@ class Dirnode(GridTestMixin, unittest.TestCase,
|
||||
"largest-directory-children": 2,
|
||||
"largest-immutable-file": 0,
|
||||
}
|
||||
for k,v in expected.iteritems():
|
||||
for k,v in expected.items():
|
||||
self.failUnlessReallyEqual(stats[k], v,
|
||||
"stats[%s] was %s, not %s" %
|
||||
(k, stats[k], v))
|
||||
@ -272,8 +283,8 @@ class Dirnode(GridTestMixin, unittest.TestCase,
|
||||
{ 'tahoe': {'linkcrtime': "bogus"}}))
|
||||
d.addCallback(lambda res: n.get_metadata_for(u"c2"))
|
||||
def _has_good_linkcrtime(metadata):
|
||||
self.failUnless(metadata.has_key('tahoe'))
|
||||
self.failUnless(metadata['tahoe'].has_key('linkcrtime'))
|
||||
self.failUnless('tahoe' in metadata)
|
||||
self.failUnless('linkcrtime' in metadata['tahoe'])
|
||||
self.failIfEqual(metadata['tahoe']['linkcrtime'], 'bogus')
|
||||
d.addCallback(_has_good_linkcrtime)
|
||||
|
||||
@ -423,7 +434,7 @@ class Dirnode(GridTestMixin, unittest.TestCase,
|
||||
# moved on to stdlib "json" which doesn't have it either.
|
||||
d.addCallback(self.stall, 0.1)
|
||||
d.addCallback(lambda res: n.add_file(u"timestamps",
|
||||
upload.Data("stamp me", convergence="some convergence string")))
|
||||
upload.Data(b"stamp me", convergence=b"some convergence string")))
|
||||
d.addCallback(self.stall, 0.1)
|
||||
def _stop(res):
|
||||
self._stop_timestamp = time.time()
|
||||
@ -472,11 +483,11 @@ class Dirnode(GridTestMixin, unittest.TestCase,
|
||||
self.failUnlessReallyEqual(set(children.keys()),
|
||||
set([u"child"])))
|
||||
|
||||
uploadable1 = upload.Data("some data", convergence="converge")
|
||||
uploadable1 = upload.Data(b"some data", convergence=b"converge")
|
||||
d.addCallback(lambda res: n.add_file(u"newfile", uploadable1))
|
||||
d.addCallback(lambda newnode:
|
||||
self.failUnless(IImmutableFileNode.providedBy(newnode)))
|
||||
uploadable2 = upload.Data("some data", convergence="stuff")
|
||||
uploadable2 = upload.Data(b"some data", convergence=b"stuff")
|
||||
d.addCallback(lambda res:
|
||||
self.shouldFail(ExistingChildError, "add_file-no",
|
||||
"child 'newfile' already exists",
|
||||
@ -491,7 +502,7 @@ class Dirnode(GridTestMixin, unittest.TestCase,
|
||||
d.addCallback(lambda metadata:
|
||||
self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
|
||||
|
||||
uploadable3 = upload.Data("some data", convergence="converge")
|
||||
uploadable3 = upload.Data(b"some data", convergence=b"converge")
|
||||
d.addCallback(lambda res: n.add_file(u"newfile-metadata",
|
||||
uploadable3,
|
||||
{"key": "value"}))
|
||||
@ -507,8 +518,8 @@ class Dirnode(GridTestMixin, unittest.TestCase,
|
||||
def _created2(subdir2):
|
||||
self.subdir2 = subdir2
|
||||
# put something in the way, to make sure it gets overwritten
|
||||
return subdir2.add_file(u"child", upload.Data("overwrite me",
|
||||
"converge"))
|
||||
return subdir2.add_file(u"child", upload.Data(b"overwrite me",
|
||||
b"converge"))
|
||||
d.addCallback(_created2)
|
||||
|
||||
d.addCallback(lambda res:
|
||||
@ -666,22 +677,22 @@ class Dirnode(GridTestMixin, unittest.TestCase,
|
||||
|
||||
self.failUnless(fut_node.is_unknown())
|
||||
self.failUnlessReallyEqual(fut_node.get_uri(), future_write_uri)
|
||||
self.failUnlessReallyEqual(fut_node.get_readonly_uri(), "ro." + future_read_uri)
|
||||
self.failUnlessReallyEqual(fut_node.get_readonly_uri(), b"ro." + future_read_uri)
|
||||
self.failUnless(isinstance(fut_metadata, dict), fut_metadata)
|
||||
|
||||
self.failUnless(futna_node.is_unknown())
|
||||
self.failUnlessReallyEqual(futna_node.get_uri(), future_nonascii_write_uri)
|
||||
self.failUnlessReallyEqual(futna_node.get_readonly_uri(), "ro." + future_nonascii_read_uri)
|
||||
self.failUnlessReallyEqual(futna_node.get_readonly_uri(), b"ro." + future_nonascii_read_uri)
|
||||
self.failUnless(isinstance(futna_metadata, dict), futna_metadata)
|
||||
|
||||
self.failUnless(fro_node.is_unknown())
|
||||
self.failUnlessReallyEqual(fro_node.get_uri(), "ro." + future_read_uri)
|
||||
self.failUnlessReallyEqual(fut_node.get_readonly_uri(), "ro." + future_read_uri)
|
||||
self.failUnlessReallyEqual(fro_node.get_uri(), b"ro." + future_read_uri)
|
||||
self.failUnlessReallyEqual(fut_node.get_readonly_uri(), b"ro." + future_read_uri)
|
||||
self.failUnless(isinstance(fro_metadata, dict), fro_metadata)
|
||||
|
||||
self.failUnless(frona_node.is_unknown())
|
||||
self.failUnlessReallyEqual(frona_node.get_uri(), "ro." + future_nonascii_read_uri)
|
||||
self.failUnlessReallyEqual(futna_node.get_readonly_uri(), "ro." + future_nonascii_read_uri)
|
||||
self.failUnlessReallyEqual(frona_node.get_uri(), b"ro." + future_nonascii_read_uri)
|
||||
self.failUnlessReallyEqual(futna_node.get_readonly_uri(), b"ro." + future_nonascii_read_uri)
|
||||
self.failUnless(isinstance(frona_metadata, dict), frona_metadata)
|
||||
|
||||
self.failIf(emptylit_node.is_unknown())
|
||||
@ -697,7 +708,7 @@ class Dirnode(GridTestMixin, unittest.TestCase,
|
||||
set([u"short"])))
|
||||
d2.addCallback(lambda ignored: tinylit_node.list())
|
||||
d2.addCallback(lambda children: children[u"short"][0].read(MemAccum()))
|
||||
d2.addCallback(lambda accum: self.failUnlessReallyEqual(accum.data, "The end."))
|
||||
d2.addCallback(lambda accum: self.failUnlessReallyEqual(accum.data, b"The end."))
|
||||
return d2
|
||||
d.addCallback(_check_kids)
|
||||
|
||||
@ -782,7 +793,7 @@ class Dirnode(GridTestMixin, unittest.TestCase,
|
||||
rep = str(dn)
|
||||
self.failUnless("RO-IMM" in rep)
|
||||
cap = dn.get_cap()
|
||||
self.failUnlessIn("CHK", cap.to_string())
|
||||
self.failUnlessIn(b"CHK", cap.to_string())
|
||||
self.cap = cap
|
||||
return dn.list()
|
||||
d.addCallback(_created)
|
||||
@ -808,13 +819,13 @@ class Dirnode(GridTestMixin, unittest.TestCase,
|
||||
self.failUnlessEqual(two_metadata["metakey"], "metavalue")
|
||||
|
||||
self.failUnless(fut_node.is_unknown())
|
||||
self.failUnlessReallyEqual(fut_node.get_uri(), "imm." + future_read_uri)
|
||||
self.failUnlessReallyEqual(fut_node.get_readonly_uri(), "imm." + future_read_uri)
|
||||
self.failUnlessReallyEqual(fut_node.get_uri(), b"imm." + future_read_uri)
|
||||
self.failUnlessReallyEqual(fut_node.get_readonly_uri(), b"imm." + future_read_uri)
|
||||
self.failUnless(isinstance(fut_metadata, dict), fut_metadata)
|
||||
|
||||
self.failUnless(futna_node.is_unknown())
|
||||
self.failUnlessReallyEqual(futna_node.get_uri(), "imm." + future_nonascii_read_uri)
|
||||
self.failUnlessReallyEqual(futna_node.get_readonly_uri(), "imm." + future_nonascii_read_uri)
|
||||
self.failUnlessReallyEqual(futna_node.get_uri(), b"imm." + future_nonascii_read_uri)
|
||||
self.failUnlessReallyEqual(futna_node.get_readonly_uri(), b"imm." + future_nonascii_read_uri)
|
||||
self.failUnless(isinstance(futna_metadata, dict), futna_metadata)
|
||||
|
||||
self.failIf(emptylit_node.is_unknown())
|
||||
@ -830,7 +841,7 @@ class Dirnode(GridTestMixin, unittest.TestCase,
|
||||
set([u"short"])))
|
||||
d2.addCallback(lambda ignored: tinylit_node.list())
|
||||
d2.addCallback(lambda children: children[u"short"][0].read(MemAccum()))
|
||||
d2.addCallback(lambda accum: self.failUnlessReallyEqual(accum.data, "The end."))
|
||||
d2.addCallback(lambda accum: self.failUnlessReallyEqual(accum.data, b"The end."))
|
||||
return d2
|
||||
|
||||
d.addCallback(_check_kids)
|
||||
@ -894,8 +905,8 @@ class Dirnode(GridTestMixin, unittest.TestCase,
|
||||
rep = str(dn)
|
||||
self.failUnless("RO-IMM" in rep)
|
||||
cap = dn.get_cap()
|
||||
self.failUnlessIn("LIT", cap.to_string())
|
||||
self.failUnlessReallyEqual(cap.to_string(), "URI:DIR2-LIT:")
|
||||
self.failUnlessIn(b"LIT", cap.to_string())
|
||||
self.failUnlessReallyEqual(cap.to_string(), b"URI:DIR2-LIT:")
|
||||
self.cap = cap
|
||||
return dn.list()
|
||||
d.addCallback(_created_empty)
|
||||
@ -912,13 +923,13 @@ class Dirnode(GridTestMixin, unittest.TestCase,
|
||||
rep = str(dn)
|
||||
self.failUnless("RO-IMM" in rep)
|
||||
cap = dn.get_cap()
|
||||
self.failUnlessIn("LIT", cap.to_string())
|
||||
self.failUnlessIn(b"LIT", cap.to_string())
|
||||
self.failUnlessReallyEqual(cap.to_string(),
|
||||
"URI:DIR2-LIT:gi4tumj2n4wdcmz2kvjesosmjfkdu3rvpbtwwlbqhiwdeot3puwcy")
|
||||
b"URI:DIR2-LIT:gi4tumj2n4wdcmz2kvjesosmjfkdu3rvpbtwwlbqhiwdeot3puwcy")
|
||||
self.cap = cap
|
||||
return dn.list()
|
||||
d.addCallback(_created_small)
|
||||
d.addCallback(lambda kids: self.failUnlessReallyEqual(kids.keys(), [u"o"]))
|
||||
d.addCallback(lambda kids: self.failUnlessReallyEqual(list(kids.keys()), [u"o"]))
|
||||
|
||||
# now test n.create_subdirectory(mutable=False)
|
||||
d.addCallback(lambda ign: c.create_dirnode())
|
||||
@ -928,7 +939,7 @@ class Dirnode(GridTestMixin, unittest.TestCase,
|
||||
d.addCallback(_check_kids)
|
||||
d.addCallback(lambda ign: n.list())
|
||||
d.addCallback(lambda children:
|
||||
self.failUnlessReallyEqual(children.keys(), [u"subdir"]))
|
||||
self.failUnlessReallyEqual(list(children.keys()), [u"subdir"]))
|
||||
d.addCallback(lambda ign: n.get(u"subdir"))
|
||||
d.addCallback(lambda sd: sd.list())
|
||||
d.addCallback(_check_kids)
|
||||
@ -962,14 +973,14 @@ class Dirnode(GridTestMixin, unittest.TestCase,
|
||||
# It also tests that we store child names as UTF-8 NFC, and normalize
|
||||
# them again when retrieving them.
|
||||
|
||||
stripped_write_uri = "lafs://from_the_future\t"
|
||||
stripped_read_uri = "lafs://readonly_from_the_future\t"
|
||||
spacedout_write_uri = stripped_write_uri + " "
|
||||
spacedout_read_uri = stripped_read_uri + " "
|
||||
stripped_write_uri = b"lafs://from_the_future\t"
|
||||
stripped_read_uri = b"lafs://readonly_from_the_future\t"
|
||||
spacedout_write_uri = stripped_write_uri + b" "
|
||||
spacedout_read_uri = stripped_read_uri + b" "
|
||||
|
||||
child = nm.create_from_cap(spacedout_write_uri, spacedout_read_uri)
|
||||
self.failUnlessReallyEqual(child.get_write_uri(), spacedout_write_uri)
|
||||
self.failUnlessReallyEqual(child.get_readonly_uri(), "ro." + spacedout_read_uri)
|
||||
self.failUnlessReallyEqual(child.get_readonly_uri(), b"ro." + spacedout_read_uri)
|
||||
|
||||
child_dottedi = u"ch\u0131\u0307ld"
|
||||
|
||||
@ -1003,7 +1014,7 @@ class Dirnode(GridTestMixin, unittest.TestCase,
|
||||
self.failUnlessIn(name, kids_out)
|
||||
(expected_child, ign) = kids_out[name]
|
||||
self.failUnlessReallyEqual(rw_uri, expected_child.get_write_uri())
|
||||
self.failUnlessReallyEqual("ro." + ro_uri, expected_child.get_readonly_uri())
|
||||
self.failUnlessReallyEqual(b"ro." + ro_uri, expected_child.get_readonly_uri())
|
||||
numkids += 1
|
||||
|
||||
self.failUnlessReallyEqual(numkids, len(kids_out))
|
||||
@ -1039,7 +1050,7 @@ class Dirnode(GridTestMixin, unittest.TestCase,
|
||||
child_node, child_metadata = children[u"child"]
|
||||
|
||||
self.failUnlessReallyEqual(child_node.get_write_uri(), stripped_write_uri)
|
||||
self.failUnlessReallyEqual(child_node.get_readonly_uri(), "ro." + stripped_read_uri)
|
||||
self.failUnlessReallyEqual(child_node.get_readonly_uri(), b"ro." + stripped_read_uri)
|
||||
d.addCallback(_check_kids)
|
||||
|
||||
d.addCallback(lambda ign: nm.create_from_cap(self.cap.to_string()))
|
||||
@ -1074,7 +1085,7 @@ class Dirnode(GridTestMixin, unittest.TestCase,
|
||||
d.addCallback(_created_root)
|
||||
def _created_subdir(subdir):
|
||||
self._subdir = subdir
|
||||
d = subdir.add_file(u"file1", upload.Data("data"*100, None))
|
||||
d = subdir.add_file(u"file1", upload.Data(b"data"*100, None))
|
||||
d.addCallback(lambda res: subdir.set_node(u"link", self._rootnode))
|
||||
d.addCallback(lambda res: c.create_dirnode())
|
||||
d.addCallback(lambda dn:
|
||||
@ -1250,7 +1261,7 @@ class Dirnode(GridTestMixin, unittest.TestCase,
|
||||
nm = c.nodemaker
|
||||
filecap = make_chk_file_uri(1234)
|
||||
filenode = nm.create_from_cap(filecap)
|
||||
uploadable = upload.Data("some data", convergence="some convergence string")
|
||||
uploadable = upload.Data(b"some data", convergence=b"some convergence string")
|
||||
|
||||
d = c.create_dirnode(version=version)
|
||||
def _created(rw_dn):
|
||||
@ -1386,7 +1397,7 @@ class Dirnode(GridTestMixin, unittest.TestCase,
|
||||
|
||||
class MinimalFakeMutableFile(object):
|
||||
def get_writekey(self):
|
||||
return "writekey"
|
||||
return b"writekey"
|
||||
|
||||
class Packing(testutil.ReallyEqualMixin, unittest.TestCase):
|
||||
# This is a base32-encoded representation of the directory tree
|
||||
@ -1405,7 +1416,7 @@ class Packing(testutil.ReallyEqualMixin, unittest.TestCase):
|
||||
nodemaker = NodeMaker(None, None, None,
|
||||
None, None,
|
||||
{"k": 3, "n": 10}, None, None)
|
||||
write_uri = "URI:SSK-RO:e3mdrzfwhoq42hy5ubcz6rp3o4:ybyibhnp3vvwuq2vaw2ckjmesgkklfs6ghxleztqidihjyofgw7q"
|
||||
write_uri = b"URI:SSK-RO:e3mdrzfwhoq42hy5ubcz6rp3o4:ybyibhnp3vvwuq2vaw2ckjmesgkklfs6ghxleztqidihjyofgw7q"
|
||||
filenode = nodemaker.create_from_cap(write_uri)
|
||||
node = dirnode.DirectoryNode(filenode, nodemaker, None)
|
||||
children = node._unpack_contents(known_tree)
|
||||
@ -1417,13 +1428,13 @@ class Packing(testutil.ReallyEqualMixin, unittest.TestCase):
|
||||
|
||||
def _check_children(self, children):
|
||||
# Are all the expected child nodes there?
|
||||
self.failUnless(children.has_key(u'file1'))
|
||||
self.failUnless(children.has_key(u'file2'))
|
||||
self.failUnless(children.has_key(u'file3'))
|
||||
self.failUnless(u'file1' in children)
|
||||
self.failUnless(u'file2' in children)
|
||||
self.failUnless(u'file3' in children)
|
||||
|
||||
# Are the metadata for child 3 right?
|
||||
file3_rocap = "URI:CHK:cmtcxq7hwxvfxan34yiev6ivhy:qvcekmjtoetdcw4kmi7b3rtblvgx7544crnwaqtiewemdliqsokq:3:10:5"
|
||||
file3_rwcap = "URI:CHK:cmtcxq7hwxvfxan34yiev6ivhy:qvcekmjtoetdcw4kmi7b3rtblvgx7544crnwaqtiewemdliqsokq:3:10:5"
|
||||
file3_rocap = b"URI:CHK:cmtcxq7hwxvfxan34yiev6ivhy:qvcekmjtoetdcw4kmi7b3rtblvgx7544crnwaqtiewemdliqsokq:3:10:5"
|
||||
file3_rwcap = b"URI:CHK:cmtcxq7hwxvfxan34yiev6ivhy:qvcekmjtoetdcw4kmi7b3rtblvgx7544crnwaqtiewemdliqsokq:3:10:5"
|
||||
file3_metadata = {'ctime': 1246663897.4336269, 'tahoe': {'linkmotime': 1246663897.4336269, 'linkcrtime': 1246663897.4336269}, 'mtime': 1246663897.4336269}
|
||||
self.failUnlessEqual(file3_metadata, children[u'file3'][1])
|
||||
self.failUnlessReallyEqual(file3_rocap,
|
||||
@ -1432,8 +1443,8 @@ class Packing(testutil.ReallyEqualMixin, unittest.TestCase):
|
||||
children[u'file3'][0].get_uri())
|
||||
|
||||
# Are the metadata for child 2 right?
|
||||
file2_rocap = "URI:CHK:apegrpehshwugkbh3jlt5ei6hq:5oougnemcl5xgx4ijgiumtdojlipibctjkbwvyygdymdphib2fvq:3:10:4"
|
||||
file2_rwcap = "URI:CHK:apegrpehshwugkbh3jlt5ei6hq:5oougnemcl5xgx4ijgiumtdojlipibctjkbwvyygdymdphib2fvq:3:10:4"
|
||||
file2_rocap = b"URI:CHK:apegrpehshwugkbh3jlt5ei6hq:5oougnemcl5xgx4ijgiumtdojlipibctjkbwvyygdymdphib2fvq:3:10:4"
|
||||
file2_rwcap = b"URI:CHK:apegrpehshwugkbh3jlt5ei6hq:5oougnemcl5xgx4ijgiumtdojlipibctjkbwvyygdymdphib2fvq:3:10:4"
|
||||
file2_metadata = {'ctime': 1246663897.430218, 'tahoe': {'linkmotime': 1246663897.430218, 'linkcrtime': 1246663897.430218}, 'mtime': 1246663897.430218}
|
||||
self.failUnlessEqual(file2_metadata, children[u'file2'][1])
|
||||
self.failUnlessReallyEqual(file2_rocap,
|
||||
@ -1442,8 +1453,8 @@ class Packing(testutil.ReallyEqualMixin, unittest.TestCase):
|
||||
children[u'file2'][0].get_uri())
|
||||
|
||||
# Are the metadata for child 1 right?
|
||||
file1_rocap = "URI:CHK:olxtimympo7f27jvhtgqlnbtn4:emzdnhk2um4seixozlkw3qx2nfijvdkx3ky7i7izl47yedl6e64a:3:10:10"
|
||||
file1_rwcap = "URI:CHK:olxtimympo7f27jvhtgqlnbtn4:emzdnhk2um4seixozlkw3qx2nfijvdkx3ky7i7izl47yedl6e64a:3:10:10"
|
||||
file1_rocap = b"URI:CHK:olxtimympo7f27jvhtgqlnbtn4:emzdnhk2um4seixozlkw3qx2nfijvdkx3ky7i7izl47yedl6e64a:3:10:10"
|
||||
file1_rwcap = b"URI:CHK:olxtimympo7f27jvhtgqlnbtn4:emzdnhk2um4seixozlkw3qx2nfijvdkx3ky7i7izl47yedl6e64a:3:10:10"
|
||||
file1_metadata = {'ctime': 1246663897.4275661, 'tahoe': {'linkmotime': 1246663897.4275661, 'linkcrtime': 1246663897.4275661}, 'mtime': 1246663897.4275661}
|
||||
self.failUnlessEqual(file1_metadata, children[u'file1'][1])
|
||||
self.failUnlessReallyEqual(file1_rocap,
|
||||
@ -1452,18 +1463,42 @@ class Packing(testutil.ReallyEqualMixin, unittest.TestCase):
|
||||
children[u'file1'][0].get_uri())
|
||||
|
||||
def _make_kids(self, nm, which):
|
||||
caps = {"imm": "URI:CHK:n7r3m6wmomelk4sep3kw5cvduq:os7ijw5c3maek7pg65e5254k2fzjflavtpejjyhshpsxuqzhcwwq:3:20:14861",
|
||||
"lit": "URI:LIT:n5xgk", # LIT for "one"
|
||||
"write": "URI:SSK:vfvcbdfbszyrsaxchgevhmmlii:euw4iw7bbnkrrwpzuburbhppuxhc3gwxv26f6imekhz7zyw2ojnq",
|
||||
"read": "URI:SSK-RO:e3mdrzfwhoq42hy5ubcz6rp3o4:ybyibhnp3vvwuq2vaw2ckjmesgkklfs6ghxleztqidihjyofgw7q",
|
||||
"dirwrite": "URI:DIR2:n6x24zd3seu725yluj75q5boaa:mm6yoqjhl6ueh7iereldqxue4nene4wl7rqfjfybqrehdqmqskvq",
|
||||
"dirread": "URI:DIR2-RO:b7sr5qsifnicca7cbk3rhrhbvq:mm6yoqjhl6ueh7iereldqxue4nene4wl7rqfjfybqrehdqmqskvq",
|
||||
caps = {"imm": b"URI:CHK:n7r3m6wmomelk4sep3kw5cvduq:os7ijw5c3maek7pg65e5254k2fzjflavtpejjyhshpsxuqzhcwwq:3:20:14861",
|
||||
"lit": b"URI:LIT:n5xgk", # LIT for "one"
|
||||
"write": b"URI:SSK:vfvcbdfbszyrsaxchgevhmmlii:euw4iw7bbnkrrwpzuburbhppuxhc3gwxv26f6imekhz7zyw2ojnq",
|
||||
"read": b"URI:SSK-RO:e3mdrzfwhoq42hy5ubcz6rp3o4:ybyibhnp3vvwuq2vaw2ckjmesgkklfs6ghxleztqidihjyofgw7q",
|
||||
"dirwrite": b"URI:DIR2:n6x24zd3seu725yluj75q5boaa:mm6yoqjhl6ueh7iereldqxue4nene4wl7rqfjfybqrehdqmqskvq",
|
||||
"dirread": b"URI:DIR2-RO:b7sr5qsifnicca7cbk3rhrhbvq:mm6yoqjhl6ueh7iereldqxue4nene4wl7rqfjfybqrehdqmqskvq",
|
||||
}
|
||||
kids = {}
|
||||
for name in which:
|
||||
kids[unicode(name)] = (nm.create_from_cap(caps[name]), {})
|
||||
kids[str(name)] = (nm.create_from_cap(caps[name]), {})
|
||||
return kids
|
||||
|
||||
def test_pack_unpack_unknown(self):
|
||||
"""
|
||||
Minimal testing for roundtripping unknown URIs.
|
||||
"""
|
||||
nm = NodeMaker(None, None, None, None, None, {"k": 3, "n": 10}, None, None)
|
||||
fn = MinimalFakeMutableFile()
|
||||
# UnknownNode has massively complex rules about when it's an error.
|
||||
# Just force it not to be an error.
|
||||
unknown_rw = UnknownNode(b"whatevs://write", None)
|
||||
unknown_rw.error = None
|
||||
unknown_ro = UnknownNode(None, b"whatevs://readonly")
|
||||
unknown_ro.error = None
|
||||
kids = {
|
||||
"unknown_rw": (unknown_rw, {}),
|
||||
"unknown_ro": (unknown_ro, {})
|
||||
}
|
||||
packed = dirnode.pack_children(kids, fn.get_writekey(), deep_immutable=False)
|
||||
|
||||
write_uri = b"URI:SSK-RO:e3mdrzfwhoq42hy5ubcz6rp3o4:ybyibhnp3vvwuq2vaw2ckjmesgkklfs6ghxleztqidihjyofgw7q"
|
||||
filenode = nm.create_from_cap(write_uri)
|
||||
dn = dirnode.DirectoryNode(filenode, nm, None)
|
||||
unkids = dn._unpack_contents(packed)
|
||||
self.assertEqual(kids, unkids)
|
||||
|
||||
@given(text(min_size=1, max_size=20))
|
||||
def test_pack_unpack_unicode_hypothesis(self, name):
|
||||
"""
|
||||
@ -1485,7 +1520,7 @@ class Packing(testutil.ReallyEqualMixin, unittest.TestCase):
|
||||
name: (LiteralFileNode(uri.from_string(one_uri)), {}),
|
||||
}
|
||||
packed = dirnode.pack_children(kids, fn.get_writekey(), deep_immutable=False)
|
||||
write_uri = "URI:SSK-RO:e3mdrzfwhoq42hy5ubcz6rp3o4:ybyibhnp3vvwuq2vaw2ckjmesgkklfs6ghxleztqidihjyofgw7q"
|
||||
write_uri = b"URI:SSK-RO:e3mdrzfwhoq42hy5ubcz6rp3o4:ybyibhnp3vvwuq2vaw2ckjmesgkklfs6ghxleztqidihjyofgw7q"
|
||||
filenode = nm.create_from_cap(write_uri)
|
||||
dn = dirnode.DirectoryNode(filenode, nm, None)
|
||||
unkids = dn._unpack_contents(packed)
|
||||
@ -1498,11 +1533,11 @@ class Packing(testutil.ReallyEqualMixin, unittest.TestCase):
|
||||
kids = self._make_kids(nm, ["imm", "lit", "write", "read",
|
||||
"dirwrite", "dirread"])
|
||||
packed = dirnode.pack_children(kids, fn.get_writekey(), deep_immutable=False)
|
||||
self.failUnlessIn("lit", packed)
|
||||
self.failUnlessIn(b"lit", packed)
|
||||
|
||||
kids = self._make_kids(nm, ["imm", "lit"])
|
||||
packed = dirnode.pack_children(kids, fn.get_writekey(), deep_immutable=True)
|
||||
self.failUnlessIn("lit", packed)
|
||||
self.failUnlessIn(b"lit", packed)
|
||||
|
||||
        kids = self._make_kids(nm, ["imm", "lit", "write"])
        self.failUnlessRaises(dirnode.MustBeDeepImmutableError,
@ -1528,22 +1563,22 @@ class Packing(testutil.ReallyEqualMixin, unittest.TestCase):
@implementer(IMutableFileNode)
class FakeMutableFile(object):
    counter = 0
    def __init__(self, initial_contents=""):
    def __init__(self, initial_contents=b""):
        data = self._get_initial_contents(initial_contents)
        self.data = data.read(data.get_size())
        self.data = "".join(self.data)
        self.data = b"".join(self.data)

        counter = FakeMutableFile.counter
        FakeMutableFile.counter += 1
        writekey = hashutil.ssk_writekey_hash(str(counter))
        fingerprint = hashutil.ssk_pubkey_fingerprint_hash(str(counter))
        writekey = hashutil.ssk_writekey_hash(b"%d" % counter)
        fingerprint = hashutil.ssk_pubkey_fingerprint_hash(b"%d" % counter)
        self.uri = uri.WriteableSSKFileURI(writekey, fingerprint)

    def _get_initial_contents(self, contents):
        if isinstance(contents, str):
        if isinstance(contents, bytes):
            return contents
        if contents is None:
            return ""
            return b""
        assert callable(contents), "%s should be callable, not %s" % \
               (contents, type(contents))
        return contents(self)
@ -1561,7 +1596,7 @@ class FakeMutableFile(object):
        return defer.succeed(self.data)

    def get_writekey(self):
        return "writekey"
        return b"writekey"

    def is_readonly(self):
        return False
@ -1584,7 +1619,7 @@ class FakeMutableFile(object):
        return defer.succeed(None)

class FakeNodeMaker(NodeMaker):
    def create_mutable_file(self, contents="", keysize=None, version=None):
    def create_mutable_file(self, contents=b"", keysize=None, version=None):
        return defer.succeed(FakeMutableFile(contents))

class FakeClient2(_Client):
@ -1631,9 +1666,9 @@ class Dirnode2(testutil.ReallyEqualMixin, testutil.ShouldFailMixin, unittest.Tes
        # and to add an URI prefixed with "ro." or "imm." when it is given in a
        # write slot (or URL parameter).
        d.addCallback(lambda ign: self._node.set_uri(u"add-ro",
                                                     "ro." + future_read_uri, None))
                                                     b"ro." + future_read_uri, None))
        d.addCallback(lambda ign: self._node.set_uri(u"add-imm",
                                                     "imm." + future_imm_uri, None))
                                                     b"imm." + future_imm_uri, None))

        d.addCallback(lambda ign: self._node.list())
        def _check(children):
@ -1642,25 +1677,25 @@ class Dirnode2(testutil.ReallyEqualMixin, testutil.ShouldFailMixin, unittest.Tes
            self.failUnless(isinstance(fn, UnknownNode), fn)
            self.failUnlessReallyEqual(fn.get_uri(), future_write_uri)
            self.failUnlessReallyEqual(fn.get_write_uri(), future_write_uri)
            self.failUnlessReallyEqual(fn.get_readonly_uri(), "ro." + future_read_uri)
            self.failUnlessReallyEqual(fn.get_readonly_uri(), b"ro." + future_read_uri)

            (fn2, metadata2) = children[u"add-pair"]
            self.failUnless(isinstance(fn2, UnknownNode), fn2)
            self.failUnlessReallyEqual(fn2.get_uri(), future_write_uri)
            self.failUnlessReallyEqual(fn2.get_write_uri(), future_write_uri)
            self.failUnlessReallyEqual(fn2.get_readonly_uri(), "ro." + future_read_uri)
            self.failUnlessReallyEqual(fn2.get_readonly_uri(), b"ro." + future_read_uri)

            (fn3, metadata3) = children[u"add-ro"]
            self.failUnless(isinstance(fn3, UnknownNode), fn3)
            self.failUnlessReallyEqual(fn3.get_uri(), "ro." + future_read_uri)
            self.failUnlessReallyEqual(fn3.get_uri(), b"ro." + future_read_uri)
            self.failUnlessReallyEqual(fn3.get_write_uri(), None)
            self.failUnlessReallyEqual(fn3.get_readonly_uri(), "ro." + future_read_uri)
            self.failUnlessReallyEqual(fn3.get_readonly_uri(), b"ro." + future_read_uri)

            (fn4, metadata4) = children[u"add-imm"]
            self.failUnless(isinstance(fn4, UnknownNode), fn4)
            self.failUnlessReallyEqual(fn4.get_uri(), "imm." + future_imm_uri)
            self.failUnlessReallyEqual(fn4.get_uri(), b"imm." + future_imm_uri)
            self.failUnlessReallyEqual(fn4.get_write_uri(), None)
            self.failUnlessReallyEqual(fn4.get_readonly_uri(), "imm." + future_imm_uri)
            self.failUnlessReallyEqual(fn4.get_readonly_uri(), b"imm." + future_imm_uri)

            # We should also be allowed to copy the "future" UnknownNode, because
            # it contains all the information that was in the original directory
@ -1675,17 +1710,17 @@ class Dirnode2(testutil.ReallyEqualMixin, testutil.ShouldFailMixin, unittest.Tes
            self.failUnless(isinstance(fn, UnknownNode), fn)
            self.failUnlessReallyEqual(fn.get_uri(), future_write_uri)
            self.failUnlessReallyEqual(fn.get_write_uri(), future_write_uri)
            self.failUnlessReallyEqual(fn.get_readonly_uri(), "ro." + future_read_uri)
            self.failUnlessReallyEqual(fn.get_readonly_uri(), b"ro." + future_read_uri)
        d.addCallback(_check2)
        return d

    def test_unknown_strip_prefix_for_ro(self):
        self.failUnlessReallyEqual(strip_prefix_for_ro("foo", False), "foo")
        self.failUnlessReallyEqual(strip_prefix_for_ro("ro.foo", False), "foo")
        self.failUnlessReallyEqual(strip_prefix_for_ro("imm.foo", False), "imm.foo")
        self.failUnlessReallyEqual(strip_prefix_for_ro("foo", True), "foo")
        self.failUnlessReallyEqual(strip_prefix_for_ro("ro.foo", True), "foo")
        self.failUnlessReallyEqual(strip_prefix_for_ro("imm.foo", True), "foo")
        self.failUnlessReallyEqual(strip_prefix_for_ro(b"foo", False), b"foo")
        self.failUnlessReallyEqual(strip_prefix_for_ro(b"ro.foo", False), b"foo")
        self.failUnlessReallyEqual(strip_prefix_for_ro(b"imm.foo", False), b"imm.foo")
        self.failUnlessReallyEqual(strip_prefix_for_ro(b"foo", True), b"foo")
        self.failUnlessReallyEqual(strip_prefix_for_ro(b"ro.foo", True), b"foo")
        self.failUnlessReallyEqual(strip_prefix_for_ro(b"imm.foo", True), b"foo")

    def test_unknownnode(self):
        lit_uri = one_uri
@ -1697,58 +1732,58 @@ class Dirnode2(testutil.ReallyEqualMixin, testutil.ShouldFailMixin, unittest.Tes
                      ]
        unknown_rw = [# These are errors because we're only given a rw_uri, and we can't
                      # diminish it.
                      ( 2, UnknownNode("foo", None)),
                      ( 3, UnknownNode("foo", None, deep_immutable=True)),
                      ( 4, UnknownNode("ro.foo", None, deep_immutable=True)),
                      ( 5, UnknownNode("ro." + mut_read_uri, None, deep_immutable=True)),
                      ( 5.1, UnknownNode("ro." + mdmf_read_uri, None, deep_immutable=True)),
                      ( 6, UnknownNode("URI:SSK-RO:foo", None, deep_immutable=True)),
                      ( 7, UnknownNode("URI:SSK:foo", None)),
                      ( 2, UnknownNode(b"foo", None)),
                      ( 3, UnknownNode(b"foo", None, deep_immutable=True)),
                      ( 4, UnknownNode(b"ro.foo", None, deep_immutable=True)),
                      ( 5, UnknownNode(b"ro." + mut_read_uri, None, deep_immutable=True)),
                      ( 5.1, UnknownNode(b"ro." + mdmf_read_uri, None, deep_immutable=True)),
                      ( 6, UnknownNode(b"URI:SSK-RO:foo", None, deep_immutable=True)),
                      ( 7, UnknownNode(b"URI:SSK:foo", None)),
                      ]
        must_be_ro = [# These are errors because a readonly constraint is not met.
                      ( 8, UnknownNode("ro." + mut_write_uri, None)),
                      ( 8.1, UnknownNode("ro." + mdmf_write_uri, None)),
                      ( 9, UnknownNode(None, "ro." + mut_write_uri)),
                      ( 9.1, UnknownNode(None, "ro." + mdmf_write_uri)),
                      ( 8, UnknownNode(b"ro." + mut_write_uri, None)),
                      ( 8.1, UnknownNode(b"ro." + mdmf_write_uri, None)),
                      ( 9, UnknownNode(None, b"ro." + mut_write_uri)),
                      ( 9.1, UnknownNode(None, b"ro." + mdmf_write_uri)),
                      ]
        must_be_imm = [# These are errors because an immutable constraint is not met.
                       (10, UnknownNode(None, "ro.URI:SSK-RO:foo", deep_immutable=True)),
                       (11, UnknownNode(None, "imm.URI:SSK:foo")),
                       (12, UnknownNode(None, "imm.URI:SSK-RO:foo")),
                       (13, UnknownNode("bar", "ro.foo", deep_immutable=True)),
                       (14, UnknownNode("bar", "imm.foo", deep_immutable=True)),
                       (15, UnknownNode("bar", "imm." + lit_uri, deep_immutable=True)),
                       (16, UnknownNode("imm." + mut_write_uri, None)),
                       (16.1, UnknownNode("imm." + mdmf_write_uri, None)),
                       (17, UnknownNode("imm." + mut_read_uri, None)),
                       (17.1, UnknownNode("imm." + mdmf_read_uri, None)),
                       (18, UnknownNode("bar", "imm.foo")),
                       (10, UnknownNode(None, b"ro.URI:SSK-RO:foo", deep_immutable=True)),
                       (11, UnknownNode(None, b"imm.URI:SSK:foo")),
                       (12, UnknownNode(None, b"imm.URI:SSK-RO:foo")),
                       (13, UnknownNode(b"bar", b"ro.foo", deep_immutable=True)),
                       (14, UnknownNode(b"bar", b"imm.foo", deep_immutable=True)),
                       (15, UnknownNode(b"bar", b"imm." + lit_uri, deep_immutable=True)),
                       (16, UnknownNode(b"imm." + mut_write_uri, None)),
                       (16.1, UnknownNode(b"imm." + mdmf_write_uri, None)),
                       (17, UnknownNode(b"imm." + mut_read_uri, None)),
                       (17.1, UnknownNode(b"imm." + mdmf_read_uri, None)),
                       (18, UnknownNode(b"bar", b"imm.foo")),
                       ]
        bad_uri = [# These are errors because the URI is bad once we've stripped the prefix.
                   (19, UnknownNode("ro.URI:SSK-RO:foo", None)),
                   (20, UnknownNode("imm.URI:CHK:foo", None, deep_immutable=True)),
                   (21, UnknownNode(None, "URI:CHK:foo")),
                   (22, UnknownNode(None, "URI:CHK:foo", deep_immutable=True)),
                   (19, UnknownNode(b"ro.URI:SSK-RO:foo", None)),
                   (20, UnknownNode(b"imm.URI:CHK:foo", None, deep_immutable=True)),
                   (21, UnknownNode(None, b"URI:CHK:foo")),
                   (22, UnknownNode(None, b"URI:CHK:foo", deep_immutable=True)),
                   ]
        ro_prefixed = [# These are valid, and the readcap should end up with a ro. prefix.
                       (23, UnknownNode(None, "foo")),
                       (24, UnknownNode(None, "ro.foo")),
                       (25, UnknownNode(None, "ro." + lit_uri)),
                       (26, UnknownNode("bar", "foo")),
                       (27, UnknownNode("bar", "ro.foo")),
                       (28, UnknownNode("bar", "ro." + lit_uri)),
                       (29, UnknownNode("ro.foo", None)),
                       (30, UnknownNode("ro." + lit_uri, None)),
                       (23, UnknownNode(None, b"foo")),
                       (24, UnknownNode(None, b"ro.foo")),
                       (25, UnknownNode(None, b"ro." + lit_uri)),
                       (26, UnknownNode(b"bar", b"foo")),
                       (27, UnknownNode(b"bar", b"ro.foo")),
                       (28, UnknownNode(b"bar", b"ro." + lit_uri)),
                       (29, UnknownNode(b"ro.foo", None)),
                       (30, UnknownNode(b"ro." + lit_uri, None)),
                       ]
        imm_prefixed = [# These are valid, and the readcap should end up with an imm. prefix.
                        (31, UnknownNode(None, "foo", deep_immutable=True)),
                        (32, UnknownNode(None, "ro.foo", deep_immutable=True)),
                        (33, UnknownNode(None, "imm.foo")),
                        (34, UnknownNode(None, "imm.foo", deep_immutable=True)),
                        (35, UnknownNode("imm." + lit_uri, None)),
                        (36, UnknownNode("imm." + lit_uri, None, deep_immutable=True)),
                        (37, UnknownNode(None, "imm." + lit_uri)),
                        (38, UnknownNode(None, "imm." + lit_uri, deep_immutable=True)),
                        (31, UnknownNode(None, b"foo", deep_immutable=True)),
                        (32, UnknownNode(None, b"ro.foo", deep_immutable=True)),
                        (33, UnknownNode(None, b"imm.foo")),
                        (34, UnknownNode(None, b"imm.foo", deep_immutable=True)),
                        (35, UnknownNode(b"imm." + lit_uri, None)),
                        (36, UnknownNode(b"imm." + lit_uri, None, deep_immutable=True)),
                        (37, UnknownNode(None, b"imm." + lit_uri)),
                        (38, UnknownNode(None, b"imm." + lit_uri, deep_immutable=True)),
                        ]
        error = unknown_rw + must_be_ro + must_be_imm + bad_uri
        ok = ro_prefixed + imm_prefixed
@ -1780,10 +1815,10 @@ class Dirnode2(testutil.ReallyEqualMixin, testutil.ShouldFailMixin, unittest.Tes
            self.failIf(n.get_readonly_uri() is None, i)

        for (i, n) in ro_prefixed:
            self.failUnless(n.get_readonly_uri().startswith("ro."), i)
            self.failUnless(n.get_readonly_uri().startswith(b"ro."), i)

        for (i, n) in imm_prefixed:
            self.failUnless(n.get_readonly_uri().startswith("imm."), i)
            self.failUnless(n.get_readonly_uri().startswith(b"imm."), i)


@ -1867,7 +1902,7 @@ class Deleter(GridTestMixin, testutil.ReallyEqualMixin, unittest.TestCase):
        self.set_up_grid(oneshare=True)
        c0 = self.g.clients[0]
        d = c0.create_dirnode()
        small = upload.Data("Small enough for a LIT", None)
        small = upload.Data(b"Small enough for a LIT", None)
        def _created_dir(dn):
            self.root = dn
            self.root_uri = dn.get_uri()
@ -1909,10 +1944,10 @@ class Adder(GridTestMixin, unittest.TestCase, testutil.ShouldFailMixin):
        # root/file1
        # root/file2
        # root/dir1
        d = root_node.add_file(u'file1', upload.Data("Important Things",
        d = root_node.add_file(u'file1', upload.Data(b"Important Things",
                                                     None))
        d.addCallback(lambda res:
                      root_node.add_file(u'file2', upload.Data("Sekrit Codes", None)))
                      root_node.add_file(u'file2', upload.Data(b"Sekrit Codes", None)))
        d.addCallback(lambda res:
                      root_node.create_subdirectory(u"dir1"))
        d.addCallback(lambda res: root_node)
@ -1,5 +1,7 @@
"""
Tests for ``allmydata.test.eliotutil``.
Tests for ``allmydata.util.eliotutil``.

Ported to Python 3.
"""

from __future__ import (
@ -9,6 +11,10 @@ from __future__ import (
    division,
)

from future.utils import PY2
if PY2:
    from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401

from sys import stdout
import logging

@ -51,11 +57,14 @@ from ..util.eliotutil import (
    _parse_destination_description,
    _EliotLogging,
)
from ..util.jsonbytes import BytesJSONEncoder

from .common import (
    SyncTestCase,
    AsyncTestCase,
)


class EliotLoggedTestTests(AsyncTestCase):
    def test_returns_none(self):
        Message.log(hello="world")
@ -88,7 +97,7 @@ class ParseDestinationDescriptionTests(SyncTestCase):
        reactor = object()
        self.assertThat(
            _parse_destination_description("file:-")(reactor),
            Equals(FileDestination(stdout)),
            Equals(FileDestination(stdout, encoder=BytesJSONEncoder)),
        )
@ -102,9 +102,35 @@ class HashUtilTests(unittest.TestCase):
        got_a = base32.b2a(got)
        self.failUnlessEqual(got_a, expected_a)

    def test_known_answers(self):
        # assert backwards compatibility
    def test_storage_index_hash_known_answers(self):
        """
        Verify backwards compatibility by comparing ``storage_index_hash`` outputs
        for some well-known (to us) inputs.
        """
        # This is a marginal case. b"" is not a valid aes 128 key. The
        # implementation does nothing to avoid producing a result for it,
        # though.
        self._testknown(hashutil.storage_index_hash, b"qb5igbhcc5esa6lwqorsy7e6am", b"")

        # This is a little bit more realistic though clearly this is a poor key choice.
        self._testknown(hashutil.storage_index_hash, b"wvggbrnrezdpa5yayrgiw5nzja", b"x" * 16)

        # Here's a much more realistic key that I generated by reading some
        # bytes from /dev/urandom. I computed the expected hash value twice.
        # First using hashlib.sha256 and then with sha256sum(1). The input
        # string given to the hash function was "43:<storage index tag>,<key>"
        # in each case.
        self._testknown(
            hashutil.storage_index_hash,
            b"aarbseqqrpsfowduchcjbonscq",
            base32.a2b(b"2ckv3dfzh6rgjis6ogfqhyxnzy"),
        )

    def test_known_answers(self):
        """
        Verify backwards compatibility by comparing hash outputs for some
        well-known (to us) inputs.
        """
        self._testknown(hashutil.block_hash, b"msjr5bh4evuh7fa3zw7uovixfbvlnstr5b65mrerwfnvjxig2jvq", b"")
        self._testknown(hashutil.uri_extension_hash, b"wthsu45q7zewac2mnivoaa4ulh5xvbzdmsbuyztq2a5fzxdrnkka", b"")
        self._testknown(hashutil.plaintext_hash, b"5lz5hwz3qj3af7n6e3arblw7xzutvnd3p3fjsngqjcb7utf3x3da", b"")
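
The known answers above all follow the tagged-hash construction described in
the comment: the SHA-256 input is a netstring-style prefix naming the tag,
followed by the key. A minimal sketch of that construction (the tag name here
is a placeholder for illustration, not the real tag from
``allmydata.util.hashutil``)::

    import hashlib

    def tagged_hash_sha256(tag, data, truncate_to=32):
        # "<len(tag)>:<tag>,<data>" fed to SHA-256, matching the
        # "43:<storage index tag>,<key>" shape quoted in the comment above.
        prefix = b"%d:%s," % (len(tag), tag)
        return hashlib.sha256(prefix + data).digest()[:truncate_to]

    # Storage indexes are 16 bytes, so a 26-character base32 known answer
    # corresponds to a digest truncated to 16 bytes.
    digest = tagged_hash_sha256(b"example_storage_index_tag", b"x" * 16,
                                truncate_to=16)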
@ -277,6 +277,20 @@ class Provider(unittest.TestCase):
            i2p.local_i2p.assert_called_with("configdir")
        self.assertIs(h, handler)

    def test_handler_launch_executable(self):
        i2p = mock.Mock()
        handler = object()
        i2p.launch = mock.Mock(return_value=handler)
        reactor = object()

        with mock_i2p(i2p):
            p = i2p_provider.create(reactor,
                                    FakeConfig(launch=True,
                                               **{"i2p.executable": "myi2p"}))
        h = p.get_i2p_handler()
        self.assertIs(h, handler)
        i2p.launch.assert_called_with(i2p_configdir=None, i2p_binary="myi2p")

    def test_handler_default(self):
        i2p = mock.Mock()
        handler = object()
@ -15,7 +15,12 @@ from six import ensure_binary, ensure_text
import os, re, itertools
from base64 import b32decode
import json
from mock import Mock, patch
from operator import (
    setitem,
)
from functools import (
    partial,
)

from testtools.matchers import (
    Is,
@ -84,7 +89,8 @@ class Node(testutil.SignalMixin, testutil.ReallyEqualMixin, AsyncTestCase):

    def test_introducer_clients_unloadable(self):
        """
        Error if introducers.yaml exists but we can't read it
        ``create_introducer_clients`` raises ``EnvironmentError`` if
        ``introducers.yaml`` exists but we can't read it.
        """
        basedir = u"introducer.IntroducerNode.test_introducer_clients_unloadable"
        os.mkdir(basedir)
@ -94,17 +100,10 @@ class Node(testutil.SignalMixin, testutil.ReallyEqualMixin, AsyncTestCase):
            f.write(u'---\n')
        os.chmod(yaml_fname, 0o000)
        self.addCleanup(lambda: os.chmod(yaml_fname, 0o700))
        # just mocking the yaml failure, as "yamlutil.safe_load" only
        # returns None on some platforms for unreadable files

        with patch("allmydata.client.yamlutil") as p:
            p.safe_load = Mock(return_value=None)

            fake_tub = Mock()
            config = read_config(basedir, "portnum")

            with self.assertRaises(EnvironmentError):
                create_introducer_clients(config, fake_tub)
        config = read_config(basedir, "portnum")
        with self.assertRaises(EnvironmentError):
            create_introducer_clients(config, Tub())

    @defer.inlineCallbacks
    def test_furl(self):
@ -1037,23 +1036,53 @@ class Signatures(SyncTestCase):
                            unsign_from_foolscap, (bad_msg, sig, b"v999-key"))

    def test_unsigned_announcement(self):
        ed25519.verifying_key_from_string(b"pub-v0-wodst6ly4f7i7akt2nxizsmmy2rlmer6apltl56zctn67wfyu5tq")
        mock_tub = Mock()
        """
        An incorrectly signed announcement is not delivered to subscribers.
        """
        private_key, public_key = ed25519.create_signing_keypair()
        public_key_str = ed25519.string_from_verifying_key(public_key)

        ic = IntroducerClient(
            mock_tub,
            Tub(),
            "pb://",
            u"fake_nick",
            "0.0.0",
            "1.2.3",
            (0, u"i am a nonce"),
            "invalid",
            FilePath(self.mktemp()),
        )
        received = {}
        ic.subscribe_to("good-stuff", partial(setitem, received))

        # Deliver a good message to prove our test code is valid.
        ann = {"service-name": "good-stuff", "payload": "hello"}
        ann_t = sign_to_foolscap(ann, private_key)
        ic.got_announcements([ann_t])

        self.assertEqual(
            {public_key_str[len("pub-"):]: ann},
            received,
        )
        received.clear()

        # Now deliver one without a valid signature and observe that it isn't
        # delivered to the subscriber.
        ann = {"service-name": "good-stuff", "payload": "bad stuff"}
        (msg, sig, key) = sign_to_foolscap(ann, private_key)
        # Drop a base32 word from the middle of the key to invalidate the
        # signature.
        sig_a = bytearray(sig)
        sig_a[20:22] = []
        sig = bytes(sig_a)
        ann_t = (msg, sig, key)
        ic.got_announcements([ann_t])

        # The received announcements dict should remain empty because we
        # should not receive the announcement with the invalid signature.
        self.assertEqual(
            {},
            received,
        )
        self.assertEqual(0, ic._debug_counts["inbound_announcement"])
        ic.got_announcements([
            (b"message", b"v0-aaaaaaa", b"v0-wodst6ly4f7i7akt2nxizsmmy2rlmer6apltl56zctn67wfyu5tq")
        ])
        # we should have rejected this announcement due to a bad signature
        self.assertEqual(0, ic._debug_counts["inbound_announcement"])


# add tests of StorageFarmBroker: if it receives duplicate announcements, it
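
The rewritten test exercises the reject-bad-signatures property end to end.
The underlying cryptographic behavior it relies on can be seen in isolation
with any Ed25519 implementation; a sketch using the ``cryptography`` package
(not the project's own ``ed25519`` wrapper used above, and flipping one bit
rather than dropping two bytes so the signature length stays valid)::

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    message = b'{"service-name": "good-stuff", "payload": "hello"}'
    signature = private_key.sign(message)
    public_key.verify(signature, message)  # returns None; raises on failure

    # Corrupt the signature and observe that verification now fails, which
    # is what keeps the bad announcement out of `received` above.
    mangled = bytearray(signature)
    mangled[20] ^= 0x01
    try:
        public_key.verify(bytes(mangled), message)
    except InvalidSignature:
        print("rejected, as got_announcements() refuses to deliver it")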
@ -1,3 +1,14 @@
"""
Ported to Python 3.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

from future.utils import PY2
if PY2:
    from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401

from twisted.trial.unittest import TestCase
@ -6,7 +6,7 @@ from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

from future.utils import PY2
from future.utils import PY2, native_str
if PY2:
    from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401

@ -15,7 +15,6 @@ import os
import stat
import sys
import time
import mock
from textwrap import dedent
import configparser

@ -39,9 +38,13 @@ import foolscap.logging.log

from twisted.application import service
from allmydata.node import (
    PortAssignmentRequired,
    PrivacyError,
    tub_listen_on,
    create_tub_options,
    create_main_tub,
    create_node_dir,
    create_default_connection_handlers,
    create_connection_handlers,
    config_from_string,
    read_config,
@ -64,6 +67,9 @@ from allmydata.util.i2p_provider import create as create_i2p_provider
from allmydata.util.tor_provider import create as create_tor_provider
import allmydata.test.common_util as testutil

from .common import (
    ConstantAddresses,
)

def port_numbers():
    return integers(min_value=1, max_value=2 ** 16 - 1)
@ -85,7 +91,7 @@ def testing_tub(config_data=''):

    i2p_provider = create_i2p_provider(reactor, config)
    tor_provider = create_tor_provider(reactor, config)
    handlers = create_connection_handlers(reactor, config, i2p_provider, tor_provider)
    handlers = create_connection_handlers(config, i2p_provider, tor_provider)
    default_connection_handlers, foolscap_connection_handlers = handlers
    tub_options = create_tub_options(config)

@ -511,27 +517,63 @@ class TestCase(testutil.SignalMixin, unittest.TestCase):
            new_config.get_config("foo", "bar")


def _stub_get_local_addresses_sync():
    """
    A function like ``allmydata.util.iputil.get_local_addresses_sync``.
    """
    return ["LOCAL"]


def _stub_allocate_tcp_port():
    """
    A function like ``allmydata.util.iputil.allocate_tcp_port``.
    """
    return 999


class TestMissingPorts(unittest.TestCase):
    """
    Test certain error-cases for ports setup
    Test certain ``_tub_portlocation`` error cases for ports setup.
    """

    def setUp(self):
        self.basedir = self.mktemp()
        create_node_dir(self.basedir, "testing")

    def test_listen_on_zero(self):
        """
        ``_tub_portlocation`` raises ``PortAssignmentRequired`` when called with a
        listen address including port 0 and no interface.
        """
        config_data = (
            "[node]\n"
            "tub.port = tcp:0\n"
        )
        config = config_from_string(self.basedir, "portnum", config_data)
        with self.assertRaises(PortAssignmentRequired):
            _tub_portlocation(config, None, None)

    def test_listen_on_zero_with_host(self):
        """
        ``_tub_portlocation`` raises ``PortAssignmentRequired`` when called with a
        listen address including port 0 and an interface.
        """
        config_data = (
            "[node]\n"
            "tub.port = tcp:0:interface=127.0.0.1\n"
        )
        config = config_from_string(self.basedir, "portnum", config_data)
        with self.assertRaises(PortAssignmentRequired):
            _tub_portlocation(config, None, None)
    test_listen_on_zero_with_host.todo = native_str(
        "https://tahoe-lafs.org/trac/tahoe-lafs/ticket/3563"
    )

    def test_parsing_tcp(self):
        """
        parse explicit tub.port with explicitly-default tub.location
        When ``tub.port`` is given and ``tub.location`` is **AUTO** the port
        number from ``tub.port`` is used as the port number for the value
        constructed for ``tub.location``.
        """
        get_addr = mock.patch(
            "allmydata.util.iputil.get_local_addresses_sync",
            return_value=["LOCAL"],
        )
        alloc_port = mock.patch(
            "allmydata.util.iputil.allocate_tcp_port",
            return_value=999,
        )
        config_data = (
            "[node]\n"
            "tub.port = tcp:777\n"
@ -539,8 +581,11 @@ class TestMissingPorts(unittest.TestCase):
        )
        config = config_from_string(self.basedir, "portnum", config_data)

        with get_addr, alloc_port:
            tubport, tublocation = _tub_portlocation(config)
        tubport, tublocation = _tub_portlocation(
            config,
            _stub_get_local_addresses_sync,
            _stub_allocate_tcp_port,
        )
        self.assertEqual(tubport, "tcp:777")
        self.assertEqual(tublocation, b"tcp:LOCAL:777")

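These hunks replace ``mock.patch`` of module globals with plain stub functions
passed into ``_tub_portlocation``. A minimal illustration of the same
dependency-injection pattern, using a toy function rather than the real
``_tub_portlocation``::

    def portlocation(configured_port, get_local_addresses_sync,
                     allocate_tcp_port):
        # The collaborators arrive as arguments, so a test passes
        # deterministic stubs instead of patching allmydata.util.iputil.
        port = configured_port or "tcp:%d" % (allocate_tcp_port(),)
        portnum = int(port.rpartition(":")[2])
        location = ",".join(
            "tcp:%s:%d" % (addr, portnum)
            for addr in get_local_addresses_sync()
        )
        return port, location

    assert portlocation("tcp:777", lambda: ["LOCAL"], lambda: 999) == \
        ("tcp:777", "tcp:LOCAL:777")
    assert portlocation(None, lambda: ["LOCAL"], lambda: 999) == \
        ("tcp:999", "tcp:LOCAL:999")
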
@ -548,21 +593,16 @@ class TestMissingPorts(unittest.TestCase):
        """
        parse empty config, check defaults
        """
        get_addr = mock.patch(
            "allmydata.util.iputil.get_local_addresses_sync",
            return_value=["LOCAL"],
        )
        alloc_port = mock.patch(
            "allmydata.util.iputil.allocate_tcp_port",
            return_value=999,
        )
        config_data = (
            "[node]\n"
        )
        config = config_from_string(self.basedir, "portnum", config_data)

        with get_addr, alloc_port:
            tubport, tublocation = _tub_portlocation(config)
        tubport, tublocation = _tub_portlocation(
            config,
            _stub_get_local_addresses_sync,
            _stub_allocate_tcp_port,
        )
        self.assertEqual(tubport, "tcp:999")
        self.assertEqual(tublocation, b"tcp:LOCAL:999")

@ -570,22 +610,17 @@ class TestMissingPorts(unittest.TestCase):
        """
        location with two options (including defaults)
        """
        get_addr = mock.patch(
            "allmydata.util.iputil.get_local_addresses_sync",
            return_value=["LOCAL"],
        )
        alloc_port = mock.patch(
            "allmydata.util.iputil.allocate_tcp_port",
            return_value=999,
        )
        config_data = (
            "[node]\n"
            "tub.location = tcp:HOST:888,AUTO\n"
        )
        config = config_from_string(self.basedir, "portnum", config_data)

        with get_addr, alloc_port:
            tubport, tublocation = _tub_portlocation(config)
        tubport, tublocation = _tub_portlocation(
            config,
            _stub_get_local_addresses_sync,
            _stub_allocate_tcp_port,
        )
        self.assertEqual(tubport, "tcp:999")
        self.assertEqual(tublocation, b"tcp:HOST:888,tcp:LOCAL:999")

@ -593,14 +628,6 @@ class TestMissingPorts(unittest.TestCase):
        """
        parse config with both port + location disabled
        """
        get_addr = mock.patch(
            "allmydata.util.iputil.get_local_addresses_sync",
            return_value=["LOCAL"],
        )
        alloc_port = mock.patch(
            "allmydata.util.iputil.allocate_tcp_port",
            return_value=999,
        )
        config_data = (
            "[node]\n"
            "tub.port = disabled\n"
@ -608,8 +635,11 @@ class TestMissingPorts(unittest.TestCase):
        )
        config = config_from_string(self.basedir, "portnum", config_data)

        with get_addr, alloc_port:
            res = _tub_portlocation(config)
        res = _tub_portlocation(
            config,
            _stub_get_local_addresses_sync,
            _stub_allocate_tcp_port,
        )
        self.assertTrue(res is None)

    def test_empty_tub_port(self):
@ -623,7 +653,11 @@ class TestMissingPorts(unittest.TestCase):
        config = config_from_string(self.basedir, "portnum", config_data)

        with self.assertRaises(ValueError) as ctx:
            _tub_portlocation(config)
            _tub_portlocation(
                config,
                _stub_get_local_addresses_sync,
                _stub_allocate_tcp_port,
            )
        self.assertIn(
            "tub.port must not be empty",
            str(ctx.exception)
@ -640,7 +674,11 @@ class TestMissingPorts(unittest.TestCase):
        config = config_from_string(self.basedir, "portnum", config_data)

        with self.assertRaises(ValueError) as ctx:
            _tub_portlocation(config)
            _tub_portlocation(
                config,
                _stub_get_local_addresses_sync,
                _stub_allocate_tcp_port,
            )
        self.assertIn(
            "tub.location must not be empty",
            str(ctx.exception)
@ -658,7 +696,11 @@ class TestMissingPorts(unittest.TestCase):
        config = config_from_string(self.basedir, "portnum", config_data)

        with self.assertRaises(ValueError) as ctx:
            _tub_portlocation(config)
            _tub_portlocation(
                config,
                _stub_get_local_addresses_sync,
                _stub_allocate_tcp_port,
            )
        self.assertIn(
            "tub.port is disabled, but not tub.location",
            str(ctx.exception)
@ -676,12 +718,62 @@ class TestMissingPorts(unittest.TestCase):
        config = config_from_string(self.basedir, "portnum", config_data)

        with self.assertRaises(ValueError) as ctx:
            _tub_portlocation(config)
            _tub_portlocation(
                config,
                _stub_get_local_addresses_sync,
                _stub_allocate_tcp_port,
            )
        self.assertIn(
            "tub.location is disabled, but not tub.port",
            str(ctx.exception)
        )

    def test_tub_location_tcp(self):
        """
        If ``reveal-IP-address`` is set to false and ``tub.location`` includes a
        **tcp** hint then ``_tub_portlocation`` raises ``PrivacyError`` because
        TCP leaks IP addresses.
        """
        config = config_from_string(
            "fake.port",
            "no-basedir",
            "[node]\nreveal-IP-address = false\ntub.location=tcp:hostname:1234\n",
        )
        with self.assertRaises(PrivacyError) as ctx:
            _tub_portlocation(
                config,
                _stub_get_local_addresses_sync,
                _stub_allocate_tcp_port,
            )
        self.assertEqual(
            str(ctx.exception),
            "tub.location includes tcp: hint",
        )

    def test_tub_location_legacy_tcp(self):
        """
        If ``reveal-IP-address`` is set to false and ``tub.location`` includes a
        "legacy" hint with no explicit type (which means it is a **tcp** hint)
        then the behavior is the same as for an explicit **tcp** hint.
        """
        config = config_from_string(
            "fake.port",
            "no-basedir",
            "[node]\nreveal-IP-address = false\ntub.location=hostname:1234\n",
        )

        with self.assertRaises(PrivacyError) as ctx:
            _tub_portlocation(
                config,
                _stub_get_local_addresses_sync,
                _stub_allocate_tcp_port,
            )

        self.assertEqual(
            str(ctx.exception),
            "tub.location includes tcp: hint",
        )


BASE_CONFIG = """
[tor]
@ -725,33 +817,6 @@ class FakeTub(object):

class Listeners(unittest.TestCase):

    def test_listen_on_zero(self):
        """
        Trying to listen on port 0 should be an error
        """
        basedir = self.mktemp()
        create_node_dir(basedir, "testing")
        with open(os.path.join(basedir, "tahoe.cfg"), "w") as f:
            f.write(BASE_CONFIG)
            f.write("[node]\n")
            f.write("tub.port = tcp:0\n")
            f.write("tub.location = AUTO\n")

        config = client.read_config(basedir, "client.port")
        i2p_provider = mock.Mock()
        tor_provider = mock.Mock()
        dfh, fch = create_connection_handlers(None, config, i2p_provider, tor_provider)
        tub_options = create_tub_options(config)
        t = FakeTub()

        with mock.patch("allmydata.node.Tub", return_value=t):
            with self.assertRaises(ValueError) as ctx:
                create_main_tub(config, tub_options, dfh, fch, i2p_provider, tor_provider)
            self.assertIn(
                "you must choose",
                str(ctx.exception),
            )

    # Randomly allocate a couple distinct port numbers to try out. The test
    # never actually binds these port numbers so we don't care if they're "in
    # use" on the system or not. We just want a couple distinct values we can
@ -763,62 +828,39 @@ class Listeners(unittest.TestCase):
        ``tub.location`` configuration, the node's *main* port listens on all
        of them.
        """
        basedir = self.mktemp()
        config_fname = os.path.join(basedir, "tahoe.cfg")
        os.mkdir(basedir)
        os.mkdir(os.path.join(basedir, "private"))
        port1, port2 = iter(ports)
        port = ("tcp:%d:interface=127.0.0.1,tcp:%d:interface=127.0.0.1" %
                (port1, port2))
        location = "tcp:localhost:%d,tcp:localhost:%d" % (port1, port2)
        with open(config_fname, "w") as f:
            f.write(BASE_CONFIG)
            f.write("[node]\n")
            f.write("tub.port = %s\n" % port)
            f.write("tub.location = %s\n" % location)

        config = client.read_config(basedir, "client.port")
        i2p_provider = mock.Mock()
        tor_provider = mock.Mock()
        dfh, fch = create_connection_handlers(None, config, i2p_provider, tor_provider)
        tub_options = create_tub_options(config)
        t = FakeTub()

        with mock.patch("allmydata.node.Tub", return_value=t):
            create_main_tub(config, tub_options, dfh, fch, i2p_provider, tor_provider)
        tub_listen_on(None, None, t, port, location)
        self.assertEqual(t.listening_ports,
                         ["tcp:%d:interface=127.0.0.1" % port1,
                          "tcp:%d:interface=127.0.0.1" % port2])

    def test_tor_i2p_listeners(self):
        basedir = self.mktemp()
        config_fname = os.path.join(basedir, "tahoe.cfg")
        os.mkdir(basedir)
        os.mkdir(os.path.join(basedir, "private"))
        with open(config_fname, "w") as f:
            f.write(BASE_CONFIG)
            f.write("[node]\n")
            f.write("tub.port = listen:i2p,listen:tor\n")
            f.write("tub.location = tcp:example.org:1234\n")
        config = client.read_config(basedir, "client.port")
        tub_options = create_tub_options(config)
        """
        When configured to listen on an "i2p" or "tor" address, ``tub_listen_on``
        tells the Tub to listen on endpoints supplied by the given Tor and I2P
        providers.
        """
        t = FakeTub()

        i2p_provider = mock.Mock()
        tor_provider = mock.Mock()
        dfh, fch = create_connection_handlers(None, config, i2p_provider, tor_provider)
        i2p_listener = object()
        i2p_provider = ConstantAddresses(i2p_listener)
        tor_listener = object()
        tor_provider = ConstantAddresses(tor_listener)

        with mock.patch("allmydata.node.Tub", return_value=t):
            create_main_tub(config, tub_options, dfh, fch, i2p_provider, tor_provider)

        self.assertEqual(i2p_provider.get_listener.mock_calls, [mock.call()])
        self.assertEqual(tor_provider.get_listener.mock_calls, [mock.call()])
        tub_listen_on(
            i2p_provider,
            tor_provider,
            t,
            "listen:i2p,listen:tor",
            "tcp:example.org:1234",
        )
        self.assertEqual(
            t.listening_ports,
            [
                i2p_provider.get_listener(),
                tor_provider.get_listener(),
            ]
            [i2p_listener, tor_listener],
        )


@ -926,19 +968,9 @@ class Configuration(unittest.TestCase):



class FakeProvider(object):
    """Emulate Tor and I2P providers."""

    def get_tor_handler(self):
        return "TORHANDLER!"

    def get_i2p_handler(self):
        return "I2PHANDLER!"


class CreateConnectionHandlers(unittest.TestCase):
class CreateDefaultConnectionHandlersTests(unittest.TestCase):
    """
    Tests for create_connection_handlers().
    Tests for create_default_connection_handlers().
    """

    def test_tcp_disabled(self):
@ -949,9 +981,8 @@ class CreateConnectionHandlers(unittest.TestCase):
        [connections]
        tcp = disabled
        """))
        reactor = object()  # it's not actually used?!
        provider = FakeProvider()
        default_handlers, _ = create_connection_handlers(
            reactor, config, provider, provider
        default_handlers = create_default_connection_handlers(
            config,
            {},
        )
        self.assertIs(default_handlers["tcp"], None)
@ -101,3 +101,56 @@ class Observer(unittest.TestCase):
        d.addCallback(_step2)
        d.addCallback(_check2)
        return d

    def test_observer_list_reentrant(self):
        """
        ``ObserverList`` is reentrant.
        """
        observed = []

        def observer_one():
            obs.unsubscribe(observer_one)

        def observer_two():
            observed.append(None)

        obs = observer.ObserverList()
        obs.subscribe(observer_one)
        obs.subscribe(observer_two)
        obs.notify()

        self.assertEqual([None], observed)

    def test_observer_list_observer_errors(self):
        """
        An error in an earlier observer does not prevent notification from being
        delivered to a later observer.
        """
        observed = []

        def observer_one():
            raise Exception("Some problem here")

        def observer_two():
            observed.append(None)

        obs = observer.ObserverList()
        obs.subscribe(observer_one)
        obs.subscribe(observer_two)
        obs.notify()

        self.assertEqual([None], observed)
        self.assertEqual(1, len(self.flushLoggedErrors(Exception)))

    def test_observer_list_propagate_keyboardinterrupt(self):
        """
        ``KeyboardInterrupt`` escapes ``ObserverList.notify``.
        """
        def observer_one():
            raise KeyboardInterrupt()

        obs = observer.ObserverList()
        obs.subscribe(observer_one)

        with self.assertRaises(KeyboardInterrupt):
            obs.notify()
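
The three new tests pin down notification semantics: re-entrant
unsubscription is safe, one observer's exception is logged rather than fatal,
and ``KeyboardInterrupt`` still propagates. A minimal sketch with those
semantics (not allmydata's actual ``ObserverList``)::

    import logging

    class SketchObserverList(object):
        def __init__(self):
            self._watchers = []

        def subscribe(self, observer):
            self._watchers.append(observer)

        def unsubscribe(self, observer):
            self._watchers.remove(observer)

        def notify(self, *args, **kwargs):
            # Iterate over a copy so observers may unsubscribe re-entrantly.
            for watcher in list(self._watchers):
                try:
                    watcher(*args, **kwargs)
                except KeyboardInterrupt:
                    raise
                except Exception:
                    # Log and keep going so later observers still run.
                    logging.exception("observer %r failed", watcher)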
@ -34,6 +34,7 @@ from ._twisted_9607 import (
)
from ..util.eliotutil import (
    inline_callbacks,
    log_call_deferred,
)

def get_root_from_file(src):
@ -54,6 +55,7 @@ rootdir = get_root_from_file(srcfile)


class RunBinTahoeMixin(object):
    @log_call_deferred(action_type="run-bin-tahoe")
    def run_bintahoe(self, args, stdin=None, python_options=[], env=None):
        command = sys.executable
        argv = python_options + ["-m", "allmydata.scripts.runner"] + args
@ -142,8 +144,8 @@ class BinTahoe(common_util.SignalMixin, unittest.TestCase, RunBinTahoeMixin):


class CreateNode(unittest.TestCase):
    # exercise "tahoe create-node", create-introducer, and
    # create-key-generator by calling the corresponding code as a subroutine.
    # exercise "tahoe create-node" and "tahoe create-introducer" by calling
    # the corresponding code as a subroutine.

    def workdir(self, name):
        basedir = os.path.join("test_runner", "CreateNode", name)
@ -251,16 +253,11 @@ class CreateNode(unittest.TestCase):
class RunNode(common_util.SignalMixin, unittest.TestCase, pollmixin.PollMixin,
              RunBinTahoeMixin):
    """
    exercise "tahoe run" for both introducer, client node, and key-generator,
    by spawning "tahoe run" (or "tahoe start") as a subprocess. This doesn't
    get us line-level coverage, but it does a better job of confirming that
    the user can actually run "./bin/tahoe run" and expect it to work. This
    verifies that bin/tahoe sets up PYTHONPATH and the like correctly.

    This doesn't work on cygwin (it hangs forever), so we skip this test
    when we're on cygwin. It is likely that "tahoe start" itself doesn't
    work on cygwin: twisted seems unable to provide a version of
    spawnProcess which really works there.
    exercise "tahoe run" for both introducer and client node, by spawning
    "tahoe run" as a subprocess. This doesn't get us line-level coverage, but
    it does a better job of confirming that the user can actually run
    "./bin/tahoe run" and expect it to work. This verifies that bin/tahoe sets
    up PYTHONPATH and the like correctly.
    """

    def workdir(self, name):
@ -340,7 +337,7 @@ class RunNode(common_util.SignalMixin, unittest.TestCase, pollmixin.PollMixin,
    @inline_callbacks
    def test_client(self):
        """
        Test many things.
        Test too many things.

        0) Verify that "tahoe create-node" takes a --webport option and writes
           the value to the configuration file.
@ -348,9 +345,9 @@ class RunNode(common_util.SignalMixin, unittest.TestCase, pollmixin.PollMixin,
        1) Verify that "tahoe run" writes a pid file and a node url file (on POSIX).

        2) Verify that the storage furl file has a stable value across a
           "tahoe run" / "tahoe stop" / "tahoe run" sequence.
           "tahoe run" / stop / "tahoe run" sequence.

        3) Verify that the pid file is removed after "tahoe stop" succeeds (on POSIX).
        3) Verify that the pid file is removed after SIGTERM (on POSIX).
        """
        basedir = self.workdir("test_client")
        c1 = os.path.join(basedir, "c1")
@ -454,18 +451,6 @@ class RunNode(common_util.SignalMixin, unittest.TestCase, pollmixin.PollMixin,
            "does not look like a directory at all"
        )

    def test_stop_bad_directory(self):
        """
        If ``tahoe run`` is pointed at a directory where no node is running, it
        reports an error and exits.
        """
        return self._bad_directory_test(
            u"test_stop_bad_directory",
            "tahoe stop",
            lambda tahoe, p: tahoe.stop(p),
            "does not look like a running node directory",
        )

    @inline_callbacks
    def _bad_directory_test(self, workdir, description, operation, expected_message):
        """
@ -457,7 +457,8 @@ class StoragePluginWebPresence(AsyncTestCase):
        self.storage_plugin = u"tahoe-lafs-dummy-v1"

        from twisted.internet import reactor
        _, port_endpoint = self.port_assigner.assign(reactor)
        _, webport_endpoint = self.port_assigner.assign(reactor)
        tubport_location, tubport_endpoint = self.port_assigner.assign(reactor)

        tempdir = TempDir()
        self.useFixture(tempdir)
@ -468,8 +469,12 @@ class StoragePluginWebPresence(AsyncTestCase):
                "web": "1",
            },
            node_config={
                "tub.location": "127.0.0.1:1",
                "web.port": ensure_text(port_endpoint),
                # We don't really need the main Tub listening but if we
                # disable it then we also have to disable storage (because
                # config validation policy).
                "tub.port": tubport_endpoint,
                "tub.location": tubport_location,
                "web.port": ensure_text(webport_endpoint),
            },
            storage_plugin=self.storage_plugin,
            basedir=self.basedir,
@ -70,7 +70,7 @@ def renderJSON(resource):
    """
    Render a JSON from the given resource.
    """
    return render(resource, {"t": ["json"]})
    return render(resource, {b"t": [b"json"]})

class MyBucketCountingCrawler(BucketCountingCrawler):
    def finished_prefix(self, cycle, prefix):
@ -1,7 +1,22 @@
"""
Ported to Python 3, partially: test_filesystem* will be done in a future round.
"""
from __future__ import print_function
from __future__ import absolute_import
from __future__ import division
from __future__ import unicode_literals

from future.utils import PY2, PY3
if PY2:
    # Don't import bytes since it causes issues on (so far unported) modules on Python 2.
    from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, dict, list, object, range, max, min, str  # noqa: F401

from past.builtins import chr as byteschr, long
from six import ensure_text, ensure_str

import os, re, sys, time, json
from functools import partial
from unittest import skipIf

from bs4 import BeautifulSoup

@ -40,7 +55,7 @@ from .common import (
    TEST_RSA_KEY_SIZE,
    SameProcessStreamEndpointAssigner,
)
from .common_web import do_http, Error
from .common_web import do_http as do_http_bytes, Error
from .web.common import (
    assert_soup_has_tag_with_attributes
)
@ -48,12 +63,34 @@ from .web.common import (
# TODO: move this to common or common_util
from allmydata.test.test_runner import RunBinTahoeMixin
from . import common_util as testutil
from .common_util import run_cli
from .common_util import run_cli_unicode
from ..scripts.common import (
    write_introducer,
)

LARGE_DATA = """
def run_cli(*args, **kwargs):
    """
    Run a Tahoe-LAFS CLI utility, but inline.

    Version of run_cli_unicode() that takes any kind of string, and the
    command-line args inline instead of as verb + list.

    Backwards compatible version so we don't have to change all the tests that
    expected this API.
    """
    nodeargs = [ensure_text(a) for a in kwargs.pop("nodeargs", [])]
    kwargs["nodeargs"] = nodeargs
    return run_cli_unicode(
        ensure_text(args[0]), [ensure_text(a) for a in args[1:]], **kwargs)


def do_http(*args, **kwargs):
    """Wrapper for do_http() that returns Unicode."""
    return do_http_bytes(*args, **kwargs).addCallback(
        lambda b: str(b, "utf-8"))


LARGE_DATA = b"""
This is some data to publish to the remote grid.., which needs to be large
enough to not fit inside a LIT uri.
"""
@ -627,9 +664,9 @@ def flush_but_dont_ignore(res):

def _render_config(config):
    """
    Convert a ``dict`` of ``dict`` of ``bytes`` to an ini-format string.
    Convert a ``dict`` of ``dict`` of ``unicode`` to an ini-format string.
    """
    return "\n\n".join(list(
    return u"\n\n".join(list(
        _render_config_section(k, v)
        for (k, v)
        in config.items()
@ -637,20 +674,20 @@ def _render_config(config):

def _render_config_section(heading, values):
    """
    Convert a ``bytes`` heading and a ``dict`` of ``bytes`` to an ini-format
    section as ``bytes``.
    Convert a ``unicode`` heading and a ``dict`` of ``unicode`` to an ini-format
    section as ``unicode``.
    """
    return "[{}]\n{}\n".format(
    return u"[{}]\n{}\n".format(
        heading, _render_section_values(values)
    )

def _render_section_values(values):
    """
    Convert a ``dict`` of ``bytes`` to the body of an ini-format section as
    ``bytes``.
    Convert a ``dict`` of ``unicode`` to the body of an ini-format section as
    ``unicode``.
    """
    return "\n".join(list(
    return u"\n".join(list(
        "{} = {}".format(k, v)
        u"{} = {}".format(k, v)
        for (k, v)
        in sorted(values.items())
    ))
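
For reference, the rewritten helpers turn a nested ``dict`` into ini-format
text like this (hypothetical input, shown only to illustrate the expected
output shape)::

    print(_render_config({
        u"client": {u"helper.furl": u"pb://example"},
        u"node": {u"nickname": u"demo"},
    }))
    # [client]
    # helper.furl = pb://example
    #
    #
    # [node]
    # nickname = demo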
@ -753,7 +790,7 @@ class SystemTestMixin(pollmixin.PollMixin, testutil.StallMixin):

        self.helper_furl = helper_furl
        if self.numclients >= 4:
            with open(os.path.join(basedirs[3], 'tahoe.cfg'), 'ab+') as f:
            with open(os.path.join(basedirs[3], 'tahoe.cfg'), 'a+') as f:
                f.write(
                    "[client]\n"
                    "helper.furl = {}\n".format(helper_furl)
@ -796,8 +833,6 @@ class SystemTestMixin(pollmixin.PollMixin, testutil.StallMixin):

        def setconf(config, which, section, feature, value):
            if which in feature_matrix.get((section, feature), {which}):
                if isinstance(value, unicode):
                    value = value.encode("utf-8")
                config.setdefault(section, {})[feature] = value

        setnode = partial(setconf, config, which, "node")
@ -870,7 +905,7 @@ class SystemTestMixin(pollmixin.PollMixin, testutil.StallMixin):
            config = "[client]\n"
            if helper_furl:
                config += "helper.furl = %s\n" % helper_furl
            basedir.child("tahoe.cfg").setContent(config)
            basedir.child("tahoe.cfg").setContent(config.encode("utf-8"))
            private = basedir.child("private")
            private.makedirs()
            write_introducer(
@ -980,12 +1015,12 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):

    def test_upload_and_download_convergent(self):
        self.basedir = "system/SystemTest/test_upload_and_download_convergent"
        return self._test_upload_and_download(convergence="some convergence string")
        return self._test_upload_and_download(convergence=b"some convergence string")

    def _test_upload_and_download(self, convergence):
        # we use 4000 bytes of data, which will result in about 400k written
        # to disk among all our simulated nodes
        DATA = "Some data to upload\n" * 200
        DATA = b"Some data to upload\n" * 200
        d = self.set_up_nodes()
        def _check_connections(res):
            for c in self.clients:
@ -993,7 +1028,7 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
                all_peerids = c.get_storage_broker().get_all_serverids()
                self.failUnlessEqual(len(all_peerids), self.numclients)
                sb = c.storage_broker
                permuted_peers = sb.get_servers_for_psi("a")
                permuted_peers = sb.get_servers_for_psi(b"a")
                self.failUnlessEqual(len(permuted_peers), self.numclients)
        d.addCallback(_check_connections)

@ -1016,7 +1051,7 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
            theuri = results.get_uri()
            log.msg("upload finished: uri is %s" % (theuri,))
            self.uri = theuri
            assert isinstance(self.uri, str), self.uri
            assert isinstance(self.uri, bytes), self.uri
            self.cap = uri.from_string(self.uri)
            self.n = self.clients[1].create_node_from_uri(self.uri)
        d.addCallback(_upload_done)
@ -1050,17 +1085,17 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
        d.addCallback(lambda ign:
                      n.read(MemoryConsumer(), offset=1, size=4))
        def _read_portion_done(mc):
            self.failUnlessEqual("".join(mc.chunks), DATA[1:1+4])
            self.failUnlessEqual(b"".join(mc.chunks), DATA[1:1+4])
        d.addCallback(_read_portion_done)
        d.addCallback(lambda ign:
                      n.read(MemoryConsumer(), offset=2, size=None))
        def _read_tail_done(mc):
            self.failUnlessEqual("".join(mc.chunks), DATA[2:])
            self.failUnlessEqual(b"".join(mc.chunks), DATA[2:])
        d.addCallback(_read_tail_done)
        d.addCallback(lambda ign:
                      n.read(MemoryConsumer(), size=len(DATA)+1000))
        def _read_too_much(mc):
            self.failUnlessEqual("".join(mc.chunks), DATA)
            self.failUnlessEqual(b"".join(mc.chunks), DATA)
        d.addCallback(_read_too_much)

        return d
@ -1110,7 +1145,7 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
            return connected
        d.addCallback(lambda ign: self.poll(_has_helper))

        HELPER_DATA = "Data that needs help to upload" * 1000
        HELPER_DATA = b"Data that needs help to upload" * 1000
        def _upload_with_helper(res):
            u = upload.Data(HELPER_DATA, convergence=convergence)
            d = self.extra_node.upload(u)
@ -1144,7 +1179,7 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
        d.addCallback(fireEventually)

        def _upload_resumable(res):
            DATA = "Data that needs help to upload and gets interrupted" * 1000
            DATA = b"Data that needs help to upload and gets interrupted" * 1000
            u1 = CountingDataUploadable(DATA, convergence=convergence)
            u2 = CountingDataUploadable(DATA, convergence=convergence)

@ -1266,7 +1301,9 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
            s = stats["stats"]
            self.failUnlessEqual(s["storage_server.accepting_immutable_shares"], 1)
            c = stats["counters"]
            self.failUnless("storage_server.allocate" in c)
            # Probably this should be Unicode eventually? But we haven't ported
            # stats code yet.
            self.failUnless(b"storage_server.allocate" in c)
        d.addCallback(_grab_stats)

        return d
@ -1287,7 +1324,7 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
                assert pieces[-5].startswith("client")
                client_num = int(pieces[-5][-1])
                storage_index_s = pieces[-1]
                storage_index = si_a2b(storage_index_s)
                storage_index = si_a2b(storage_index_s.encode("ascii"))
                for sharename in filenames:
                    shnum = int(sharename)
                    filename = os.path.join(dirpath, sharename)
@ -1320,7 +1357,7 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
        elif which == "signature":
            signature = self.flip_bit(signature)
        elif which == "share_hash_chain":
            nodenum = share_hash_chain.keys()[0]
            nodenum = list(share_hash_chain.keys())[0]
            share_hash_chain[nodenum] = self.flip_bit(share_hash_chain[nodenum])
        elif which == "block_hash_tree":
            block_hash_tree[-1] = self.flip_bit(block_hash_tree[-1])
@ -1343,11 +1380,11 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):

    def test_mutable(self):
        self.basedir = "system/SystemTest/test_mutable"
        DATA = "initial contents go here."  # 25 bytes % 3 != 0
        DATA = b"initial contents go here."  # 25 bytes % 3 != 0
        DATA_uploadable = MutableData(DATA)
        NEWDATA = "new contents yay"
        NEWDATA = b"new contents yay"
        NEWDATA_uploadable = MutableData(NEWDATA)
        NEWERDATA = "this is getting old"
        NEWERDATA = b"this is getting old"
        NEWERDATA_uploadable = MutableData(NEWERDATA)

        d = self.set_up_nodes()
@ -1396,7 +1433,7 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
                self.failUnless(" share_hash_chain: " in output)
                self.failUnless(" block_hash_tree: 1 nodes\n" in output)
                expected = ("  verify-cap: URI:SSK-Verifier:%s:" %
                            base32.b2a(storage_index))
                            str(base32.b2a(storage_index), "ascii"))
                self.failUnless(expected in output)
            except unittest.FailTest:
                print()
@ -1475,7 +1512,7 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
                          for (client_num, storage_index, filename, shnum)
                          in shares ])
        assert len(where) == 10  # this test is designed for 3-of-10
        for shnum, filename in where.items():
        for shnum, filename in list(where.items()):
            # shares 7,8,9 are left alone. read will check
            # (share_hash_chain, block_hash_tree, share_data). New
            # seqnum+R pairs will trigger a check of (seqnum, R, IV,
@ -1525,9 +1562,9 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
        def _check_empty_file(res):
            # make sure we can create empty files, this usually screws up the
            # segsize math
            d1 = self.clients[2].create_mutable_file(MutableData(""))
            d1 = self.clients[2].create_mutable_file(MutableData(b""))
            d1.addCallback(lambda newnode: newnode.download_best_version())
            d1.addCallback(lambda res: self.failUnlessEqual("", res))
            d1.addCallback(lambda res: self.failUnlessEqual(b"", res))
            return d1
        d.addCallback(_check_empty_file)

@ -1550,7 +1587,7 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
        return d

    def flip_bit(self, good):
        return good[:-1] + chr(ord(good[-1]) ^ 0x01)
        return good[:-1] + byteschr(ord(good[-1:]) ^ 0x01)

    def mangle_uri(self, gooduri):
        # change the key, which changes the storage index, which means we'll
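The ``flip_bit`` change is a classic Python 3 porting detail: indexing
``bytes`` yields an ``int``, while a one-element slice yields ``bytes``.
For example::

    good = b"\x00\x01\x02"
    assert good[-1] == 2           # int on Python 3
    assert good[-1:] == b"\x02"    # bytes, which byteschr-style code wants
    flipped = good[:-1] + bytes([good[-1] ^ 0x01])
    assert flipped == b"\x00\x01\x03"
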
@@ -1571,6 +1608,7 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
         # the key, which should cause the download to fail the post-download
         # plaintext_hash check.

+    @skipIf(PY3, "Python 3 web support hasn't happened yet.")
     def test_filesystem(self):
         self.basedir = "system/SystemTest/test_filesystem"
         self.data = LARGE_DATA
@@ -1632,7 +1670,7 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
             d1.addCallback(self.log, "publish finished")
             def _stash_uri(filenode):
                 self.uri = filenode.get_uri()
-                assert isinstance(self.uri, str), (self.uri, filenode)
+                assert isinstance(self.uri, bytes), (self.uri, filenode)
             d1.addCallback(_stash_uri)
             return d1
         d.addCallback(_made_subdir1)
@@ -1650,7 +1688,7 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
         return res

     def _do_publish_private(self, res):
-        self.smalldata = "sssh, very secret stuff"
+        self.smalldata = b"sssh, very secret stuff"
         ut = upload.Data(self.smalldata, convergence=None)
         d = self.clients[0].create_dirnode()
         d.addCallback(self.log, "GOT private directory")
@@ -1737,7 +1775,7 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
            d1.addCallback(lambda res: self.shouldFail2(NotWriteableError, "mkdir(nope)", None, dirnode.create_subdirectory, u"nope"))

            d1.addCallback(self.log, "doing add_file(ro)")
-           ut = upload.Data("I will disappear, unrecorded and unobserved. The tragedy of my demise is made more poignant by its silence, but this beauty is not for you to ever know.", convergence="99i-p1x4-xd4-18yc-ywt-87uu-msu-zo -- completely and totally unguessable string (unless you read this)")
+           ut = upload.Data(b"I will disappear, unrecorded and unobserved. The tragedy of my demise is made more poignant by its silence, but this beauty is not for you to ever know.", convergence=b"99i-p1x4-xd4-18yc-ywt-87uu-msu-zo -- completely and totally unguessable string (unless you read this)")
            d1.addCallback(lambda res: self.shouldFail2(NotWriteableError, "add_file(nope)", None, dirnode.add_file, u"hope", ut))

            d1.addCallback(self.log, "doing get(ro)")
@@ -1801,7 +1839,7 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
                 "largest-directory-children": 3,
                 "largest-immutable-file": 112,
                 }
-            for k,v in expected.iteritems():
+            for k,v in list(expected.items()):
                 self.failUnlessEqual(stats[k], v,
                                      "stats[%s] was %s, not %s" %
                                      (k, stats[k], v))
@@ -1850,33 +1888,33 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
         return do_http("get", self.webish_url + urlpath)

     def POST(self, urlpath, use_helper=False, **fields):
-        sepbase = "boogabooga"
-        sep = "--" + sepbase
+        sepbase = b"boogabooga"
+        sep = b"--" + sepbase
         form = []
         form.append(sep)
-        form.append('Content-Disposition: form-data; name="_charset"')
-        form.append('')
-        form.append('UTF-8')
+        form.append(b'Content-Disposition: form-data; name="_charset"')
+        form.append(b'')
+        form.append(b'UTF-8')
         form.append(sep)
-        for name, value in fields.iteritems():
+        for name, value in fields.items():
             if isinstance(value, tuple):
                 filename, value = value
-                form.append('Content-Disposition: form-data; name="%s"; '
-                            'filename="%s"' % (name, filename.encode("utf-8")))
+                form.append(b'Content-Disposition: form-data; name="%s"; '
+                            b'filename="%s"' % (name, filename.encode("utf-8")))
             else:
-                form.append('Content-Disposition: form-data; name="%s"' % name)
-            form.append('')
-            form.append(str(value))
+                form.append(b'Content-Disposition: form-data; name="%s"' % name)
+            form.append(b'')
+            form.append(b"%s" % (value,))
             form.append(sep)
-        form[-1] += "--"
-        body = ""
+        form[-1] += b"--"
+        body = b""
         headers = {}
         if fields:
-            body = "\r\n".join(form) + "\r\n"
-            headers["content-type"] = "multipart/form-data; boundary=%s" % sepbase
+            body = b"\r\n".join(form) + b"\r\n"
+            headers["content-type"] = "multipart/form-data; boundary=%s" % str(sepbase, "ascii")
         return self.POST2(urlpath, body, headers, use_helper)

-    def POST2(self, urlpath, body="", headers={}, use_helper=False):
+    def POST2(self, urlpath, body=b"", headers={}, use_helper=False):
         if use_helper:
             url = self.helper_webish_url + urlpath
         else:
@@ -1884,7 +1922,7 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
         return do_http("post", url, data=body, headers=headers)
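Porting note: on Python 3 every fragment of a multipart/form-data body must be bytes
before joining, while the content-type header stays text. A standalone sketch of the
same construction the `POST` helper performs (hypothetical field values, not part of
the patch):

    sepbase = b"boogabooga"
    sep = b"--" + sepbase
    parts = [sep,
             b'Content-Disposition: form-data; name="t"',
             b'',
             b'upload',
             sep + b"--"]
    body = b"\r\n".join(parts) + b"\r\n"                    # all-bytes body
    content_type = "multipart/form-data; boundary=%s" % str(sepbase, "ascii")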
     def _test_web(self, res):
-        public = "uri/" + self._root_directory_uri
+        public = "uri/" + str(self._root_directory_uri, "ascii")
         d = self.GET("")
         def _got_welcome(page):
             html = page.replace('\n', ' ')
@@ -1893,7 +1931,7 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
                             "I didn't see the right '%s' message in:\n%s" % (connected_re, page))
             # nodeids/tubids don't have any regexp-special characters
             nodeid_re = r'<th>Node ID:</th>\s*<td title="TubID: %s">%s</td>' % (
-                self.clients[0].get_long_tubid(), self.clients[0].get_long_nodeid())
+                self.clients[0].get_long_tubid(), str(self.clients[0].get_long_nodeid(), "ascii"))
             self.failUnless(re.search(nodeid_re, html),
                             "I didn't see the right '%s' message in:\n%s" % (nodeid_re, page))
             self.failUnless("Helper: 0 active uploads" in page)
@@ -1954,7 +1992,7 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
         # upload a file with PUT
         d.addCallback(self.log, "about to try PUT")
         d.addCallback(lambda res: self.PUT(public + "/subdir3/new.txt",
-                                           "new.txt contents"))
+                                           b"new.txt contents"))
         d.addCallback(lambda res: self.GET(public + "/subdir3/new.txt"))
         d.addCallback(self.failUnlessEqual, "new.txt contents")
         # and again with something large enough to use multiple segments,
@@ -1965,23 +2003,23 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
             c.encoding_params['happy'] = 1
         d.addCallback(_new_happy_semantics)
         d.addCallback(lambda res: self.PUT(public + "/subdir3/big.txt",
-                                           "big" * 500000)) # 1.5MB
+                                           b"big" * 500000)) # 1.5MB
         d.addCallback(lambda res: self.GET(public + "/subdir3/big.txt"))
         d.addCallback(lambda res: self.failUnlessEqual(len(res), 1500000))

         # can we replace files in place?
         d.addCallback(lambda res: self.PUT(public + "/subdir3/new.txt",
-                                           "NEWER contents"))
+                                           b"NEWER contents"))
         d.addCallback(lambda res: self.GET(public + "/subdir3/new.txt"))
         d.addCallback(self.failUnlessEqual, "NEWER contents")

         # test unlinked POST
-        d.addCallback(lambda res: self.POST("uri", t="upload",
-                                            file=("new.txt", "data" * 10000)))
+        d.addCallback(lambda res: self.POST("uri", t=b"upload",
+                                            file=("new.txt", b"data" * 10000)))
         # and again using the helper, which exercises different upload-status
         # display code
-        d.addCallback(lambda res: self.POST("uri", use_helper=True, t="upload",
-                                            file=("foo.txt", "data2" * 10000)))
+        d.addCallback(lambda res: self.POST("uri", use_helper=True, t=b"upload",
+                                            file=("foo.txt", b"data2" * 10000)))

         # check that the status page exists
         d.addCallback(lambda res: self.GET("status"))
@@ -2105,7 +2143,7 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
         # exercise some of the diagnostic tools in runner.py

         # find a share
-        for (dirpath, dirnames, filenames) in os.walk(unicode(self.basedir)):
+        for (dirpath, dirnames, filenames) in os.walk(ensure_text(self.basedir)):
             if "storage" not in dirpath:
                 continue
             if not filenames:
@@ -2119,7 +2157,7 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
             filename = os.path.join(dirpath, filenames[0])
             # peek at the magic to see if it is a chk share
             magic = open(filename, "rb").read(4)
-            if magic == '\x00\x00\x00\x01':
+            if magic == b'\x00\x00\x00\x01':
                 break
         else:
             self.fail("unable to find any uri_extension files in %r"
@@ -2152,7 +2190,6 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
         # 'find-shares' tool
         sharedir, shnum = os.path.split(filename)
         storagedir, storage_index_s = os.path.split(sharedir)
-        storage_index_s = str(storage_index_s)
         nodedirs = [self.getdir("client%d" % i) for i in range(self.numclients)]
         rc,out,err = yield run_cli("debug", "find-shares", storage_index_s,
                                    *nodedirs)
@@ -2176,7 +2213,7 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
         # allmydata.control (mostly used for performance tests)
         c0 = self.clients[0]
         control_furl_file = c0.config.get_private_path("control.furl")
-        control_furl = open(control_furl_file, "r").read().strip()
+        control_furl = ensure_str(open(control_furl_file, "r").read().strip())
         # it doesn't really matter which Tub we use to connect to the client,
         # so let's just use our IntroducerNode's
         d = self.introducer.tub.getReference(control_furl)
@@ -2208,7 +2245,7 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
         # sure that works, before we add other aliases.

         root_file = os.path.join(client0_basedir, "private", "root_dir.cap")
-        f = open(root_file, "w")
+        f = open(root_file, "wb")
         f.write(private_uri)
         f.close()

@@ -2290,7 +2327,8 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
             files.append(fn)
             data = "data to be uploaded: file%d\n" % i
             datas.append(data)
-            open(fn,"wb").write(data)
+            with open(fn, "wb") as f:
+                f.write(data)

         def _check_stdout_against(out_and_err, filenum=None, data=None):
             (out, err) = out_and_err
@@ -2468,13 +2506,18 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
         # recursive copy: setup
         dn = os.path.join(self.basedir, "dir1")
         os.makedirs(dn)
-        open(os.path.join(dn, "rfile1"), "wb").write("rfile1")
-        open(os.path.join(dn, "rfile2"), "wb").write("rfile2")
-        open(os.path.join(dn, "rfile3"), "wb").write("rfile3")
+        with open(os.path.join(dn, "rfile1"), "wb") as f:
+            f.write("rfile1")
+        with open(os.path.join(dn, "rfile2"), "wb") as f:
+            f.write("rfile2")
+        with open(os.path.join(dn, "rfile3"), "wb") as f:
+            f.write("rfile3")
         sdn2 = os.path.join(dn, "subdir2")
         os.makedirs(sdn2)
-        open(os.path.join(sdn2, "rfile4"), "wb").write("rfile4")
-        open(os.path.join(sdn2, "rfile5"), "wb").write("rfile5")
+        with open(os.path.join(sdn2, "rfile4"), "wb") as f:
+            f.write("rfile4")
+        with open(os.path.join(sdn2, "rfile5"), "wb") as f:
+            f.write("rfile5")

         # from disk into tahoe
         d.addCallback(run, "cp", "-r", dn, "tahoe:")
@@ -2551,6 +2594,7 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):

         return d

+    @skipIf(PY3, "Python 3 CLI support hasn't happened yet.")
     def test_filesystem_with_cli_in_subprocess(self):
         # We do this in a separate test so that test_filesystem doesn't skip if we can't run bin/tahoe.

@@ -2574,12 +2618,12 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
             out, err, rc_or_sig = res
             self.failUnlessEqual(rc_or_sig, 0, str(res))
             if check_stderr:
-                self.failUnlessEqual(err, "")
+                self.failUnlessEqual(err, b"")

         d.addCallback(_run_in_subprocess, "create-alias", "newalias")
         d.addCallback(_check_succeeded)

-        STDIN_DATA = "This is the file to upload from stdin."
+        STDIN_DATA = b"This is the file to upload from stdin."
         d.addCallback(_run_in_subprocess, "put", "-", "newalias:tahoe-file", stdin=STDIN_DATA)
         d.addCallback(_check_succeeded, check_stderr=False)

@@ -2601,7 +2645,7 @@ class SystemTest(SystemTestMixin, RunBinTahoeMixin, unittest.TestCase):
         return d

     def _test_checker(self, res):
-        ut = upload.Data("too big to be literal" * 200, convergence=None)
+        ut = upload.Data(b"too big to be literal" * 200, convergence=None)
         d = self._personal_node.add_file(u"big file", ut)

         d.addCallback(lambda res: self._personal_node.check(Monitor()))
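Porting note: the share probe above keys off a 4-byte magic that identifies immutable
(CHK) share files. A standalone sketch of the same check, lifted out of the test
harness (hypothetical path argument):

    def looks_like_chk_share(filename):
        # immutable share files begin with the 4-byte magic 00 00 00 01;
        # comparing against a bytes literal works on both Python 2 and 3
        with open(filename, "rb") as f:
            return f.read(4) == b"\x00\x00\x00\x01"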
@@ -349,6 +349,10 @@ class Provider(unittest.TestCase):
         cfs2.assert_called_with(reactor, ep_desc)

     def test_handler_socks_endpoint(self):
+        """
+        If not configured otherwise, the Tor provider returns a Socks-based
+        handler.
+        """
         tor = mock.Mock()
         handler = object()
         tor.socks_endpoint = mock.Mock(return_value=handler)
@@ -365,6 +369,46 @@ class Provider(unittest.TestCase):
         tor.socks_endpoint.assert_called_with(ep)
         self.assertIs(h, handler)

+    def test_handler_socks_unix_endpoint(self):
+        """
+        ``socks.port`` can be configured as a UNIX client endpoint.
+        """
+        tor = mock.Mock()
+        handler = object()
+        tor.socks_endpoint = mock.Mock(return_value=handler)
+        ep = object()
+        cfs = mock.Mock(return_value=ep)
+        reactor = object()
+
+        with mock_tor(tor):
+            p = tor_provider.create(reactor,
+                                    FakeConfig(**{"socks.port": "unix:path"}))
+        with mock.patch("allmydata.util.tor_provider.clientFromString", cfs):
+            h = p.get_tor_handler()
+
+        cfs.assert_called_with(reactor, "unix:path")
+        tor.socks_endpoint.assert_called_with(ep)
+        self.assertIs(h, handler)
+
+    def test_handler_socks_tcp_endpoint(self):
+        """
+        ``socks.port`` can be configured as a TCP client endpoint.
"""
|
||||
tor = mock.Mock()
|
||||
handler = object()
|
||||
tor.socks_endpoint = mock.Mock(return_value=handler)
|
||||
ep = object()
|
||||
cfs = mock.Mock(return_value=ep)
|
||||
reactor = object()
|
||||
|
||||
with mock_tor(tor):
|
||||
p = tor_provider.create(reactor,
|
||||
FakeConfig(**{"socks.port": "tcp:127.0.0.1:1234"}))
|
||||
with mock.patch("allmydata.util.tor_provider.clientFromString", cfs):
|
||||
h = p.get_tor_handler()
|
||||
cfs.assert_called_with(reactor, "tcp:127.0.0.1:1234")
|
||||
tor.socks_endpoint.assert_called_with(ep)
|
||||
self.assertIs(h, handler)
|
||||
|
||||
def test_handler_control_endpoint(self):
|
||||
tor = mock.Mock()
|
||||
handler = object()
|
||||
|
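Both new tests lean on Twisted's endpoint-description strings. A quick illustration of
what `clientFromString` parses, outside the mocked test harness (standard Twisted API):

    from twisted.internet import reactor
    from twisted.internet.endpoints import clientFromString

    # "tcp:host:port" and "unix:path" are two of the standard client
    # endpoint description formats understood by Twisted
    tcp_ep = clientFromString(reactor, "tcp:127.0.0.1:1234")
    unix_ep = clientFromString(reactor, "unix:path")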
@@ -33,7 +33,9 @@ if six.PY3:

 class IDLib(unittest.TestCase):
     def test_nodeid_b2a(self):
-        self.failUnlessEqual(idlib.nodeid_b2a(b"\x00"*20), "a"*32)
+        result = idlib.nodeid_b2a(b"\x00"*20)
+        self.assertEqual(result, "a"*32)
+        self.assertIsInstance(result, str)


 class MyList(list):
@@ -25,7 +25,8 @@ def assert_soup_has_tag_with_attributes(testcase, soup, tag_name, attrs):
     tags = soup.find_all(tag_name)
     for tag in tags:
         if all(v in tag.attrs.get(k, []) for k, v in attrs.items()):
-            return # we found every attr in this tag; done
+            # we found every attr in this tag; done
+            return tag
     testcase.fail(
         u"No <{}> tags contain attributes: {}".format(tag_name, attrs)
    )
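Returning the matched tag (instead of plain `return`) lets callers assert further on the
element that satisfied the match; this is the pattern the root-page tests below rely on.
A hedged usage sketch (hypothetical response body):

    soup = BeautifulSoup(response_body, 'html5lib')
    tag = assert_soup_has_tag_with_attributes(
        self, soup, u"meta", {u"http-equiv": "refresh"})
    # the matched Tag is now available for follow-up checks
    self.assertIn("URL=", tag.attrs.get(u"content"))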
@@ -1,6 +1,16 @@
 """
 Tests for ``allmydata.web.common``.
+
+Ported to Python 3.
 """
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+from __future__ import unicode_literals
+
+from future.utils import PY2
+if PY2:
+    from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401

 import gc

@@ -160,10 +170,10 @@ class RenderExceptionTests(SyncTestCase):
             MatchesPredicate(
                 lambda value: assert_soup_has_tag_with_attributes(
                     self,
-                    BeautifulSoup(value),
+                    BeautifulSoup(value, 'html5lib'),
                     "meta",
                     {"http-equiv": "refresh",
-                     "content": "0;URL={}".format(loc.encode("ascii")),
+                     "content": "0;URL={}".format(loc),
                     },
                 )
                 # The assertion will raise if it has a problem, otherwise
@@ -1,6 +1,17 @@
+"""
+Ported to Python 3.
+"""
 from __future__ import print_function
 from __future__ import absolute_import
 from __future__ import division
 from __future__ import unicode_literals

-import os.path, re, urllib
+from future.utils import PY2
+if PY2:
+    from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401
+
+import os.path, re
+from urllib.parse import quote as url_quote
 import json
 from six.moves import StringIO

@@ -37,7 +48,7 @@ DIR_HTML_TAG = '<html lang="en">'
 class CompletelyUnhandledError(Exception):
     pass

-class ErrorBoom(object, resource.Resource):
+class ErrorBoom(resource.Resource, object):
     @render_exception
     def render(self, req):
         raise CompletelyUnhandledError("whoops")
@@ -47,32 +58,38 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
     def CHECK(self, ign, which, args, clientnum=0):
         fileurl = self.fileurls[which]
         url = fileurl + "?" + args
-        return self.GET(url, method="POST", clientnum=clientnum)
+        return self.GET_unicode(url, method="POST", clientnum=clientnum)
+
+    def GET_unicode(self, *args, **kwargs):
+        """Send an HTTP request, but convert result to Unicode string."""
+        d = GridTestMixin.GET(self, *args, **kwargs)
+        d.addCallback(str, "utf-8")
+        return d

     def test_filecheck(self):
         self.basedir = "web/Grid/filecheck"
         self.set_up_grid()
         c0 = self.g.clients[0]
         self.uris = {}
-        DATA = "data" * 100
-        d = c0.upload(upload.Data(DATA, convergence=""))
+        DATA = b"data" * 100
+        d = c0.upload(upload.Data(DATA, convergence=b""))
         def _stash_uri(ur, which):
             self.uris[which] = ur.get_uri()
         d.addCallback(_stash_uri, "good")
         d.addCallback(lambda ign:
-                      c0.upload(upload.Data(DATA+"1", convergence="")))
+                      c0.upload(upload.Data(DATA+b"1", convergence=b"")))
         d.addCallback(_stash_uri, "sick")
         d.addCallback(lambda ign:
-                      c0.upload(upload.Data(DATA+"2", convergence="")))
+                      c0.upload(upload.Data(DATA+b"2", convergence=b"")))
         d.addCallback(_stash_uri, "dead")
         def _stash_mutable_uri(n, which):
             self.uris[which] = n.get_uri()
-            assert isinstance(self.uris[which], str)
+            assert isinstance(self.uris[which], bytes)
         d.addCallback(lambda ign:
-                      c0.create_mutable_file(publish.MutableData(DATA+"3")))
+                      c0.create_mutable_file(publish.MutableData(DATA+b"3")))
         d.addCallback(_stash_mutable_uri, "corrupt")
         d.addCallback(lambda ign:
-                      c0.upload(upload.Data("literal", convergence="")))
+                      c0.upload(upload.Data(b"literal", convergence=b"")))
         d.addCallback(_stash_uri, "small")
         d.addCallback(lambda ign: c0.create_immutable_dirnode({}))
         d.addCallback(_stash_mutable_uri, "smalldir")
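Porting note: `d.addCallback(str, "utf-8")` in the new `GET_unicode` helper works because
the Deferred's byte result becomes the first positional argument to `str`, i.e. the
callback runs `str(body, "utf-8")`. A tiny illustration (standard Twisted API):

    from twisted.internet.defer import succeed

    d = succeed(b"<html>ok</html>")
    d.addCallback(str, "utf-8")          # equivalent to str(b"<html>ok</html>", "utf-8")
    d.addCallback(lambda text: isinstance(text, str))  # fires with True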
@@ -80,7 +97,7 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
         def _compute_fileurls(ignored):
             self.fileurls = {}
             for which in self.uris:
-                self.fileurls[which] = "uri/" + urllib.quote(self.uris[which])
+                self.fileurls[which] = "uri/" + url_quote(self.uris[which])
         d.addCallback(_compute_fileurls)

         def _clobber_shares(ignored):
@@ -203,28 +220,28 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
         self.set_up_grid()
         c0 = self.g.clients[0]
         self.uris = {}
-        DATA = "data" * 100
-        d = c0.upload(upload.Data(DATA, convergence=""))
+        DATA = b"data" * 100
+        d = c0.upload(upload.Data(DATA, convergence=b""))
         def _stash_uri(ur, which):
             self.uris[which] = ur.get_uri()
         d.addCallback(_stash_uri, "good")
         d.addCallback(lambda ign:
-                      c0.upload(upload.Data(DATA+"1", convergence="")))
+                      c0.upload(upload.Data(DATA+b"1", convergence=b"")))
         d.addCallback(_stash_uri, "sick")
         d.addCallback(lambda ign:
-                      c0.upload(upload.Data(DATA+"2", convergence="")))
+                      c0.upload(upload.Data(DATA+b"2", convergence=b"")))
         d.addCallback(_stash_uri, "dead")
         def _stash_mutable_uri(n, which):
             self.uris[which] = n.get_uri()
-            assert isinstance(self.uris[which], str)
+            assert isinstance(self.uris[which], bytes)
         d.addCallback(lambda ign:
-                      c0.create_mutable_file(publish.MutableData(DATA+"3")))
+                      c0.create_mutable_file(publish.MutableData(DATA+b"3")))
         d.addCallback(_stash_mutable_uri, "corrupt")

         def _compute_fileurls(ignored):
             self.fileurls = {}
             for which in self.uris:
-                self.fileurls[which] = "uri/" + urllib.quote(self.uris[which])
+                self.fileurls[which] = "uri/" + url_quote(self.uris[which])
         d.addCallback(_compute_fileurls)

         def _clobber_shares(ignored):
@@ -286,8 +303,8 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
         self.set_up_grid()
         c0 = self.g.clients[0]
         self.uris = {}
-        DATA = "data" * 100
-        d = c0.upload(upload.Data(DATA+"1", convergence=""))
+        DATA = b"data" * 100
+        d = c0.upload(upload.Data(DATA+b"1", convergence=b""))
         def _stash_uri(ur, which):
             self.uris[which] = ur.get_uri()
         d.addCallback(_stash_uri, "sick")
@@ -295,7 +312,7 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
         def _compute_fileurls(ignored):
             self.fileurls = {}
             for which in self.uris:
-                self.fileurls[which] = "uri/" + urllib.quote(self.uris[which])
+                self.fileurls[which] = "uri/" + url_quote(self.uris[which])
         d.addCallback(_compute_fileurls)

         def _clobber_shares(ignored):
@@ -329,7 +346,7 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
         self.fileurls = {}

         # the future cap format may contain slashes, which must be tolerated
-        expected_info_url = "uri/%s?t=info" % urllib.quote(unknown_rwcap,
+        expected_info_url = "uri/%s?t=info" % url_quote(unknown_rwcap,
                                                            safe="")

         if immutable:
@@ -343,8 +360,8 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):

         def _stash_root_and_create_file(n):
             self.rootnode = n
-            self.rooturl = "uri/" + urllib.quote(n.get_uri())
-            self.rourl = "uri/" + urllib.quote(n.get_readonly_uri())
+            self.rooturl = "uri/" + url_quote(n.get_uri())
+            self.rourl = "uri/" + url_quote(n.get_readonly_uri())
             if not immutable:
                 return self.rootnode.set_node(name, future_node)
         d.addCallback(_stash_root_and_create_file)
@@ -352,18 +369,19 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
         # make sure directory listing tolerates unknown nodes
         d.addCallback(lambda ign: self.GET(self.rooturl))
         def _check_directory_html(res, expected_type_suffix):
-            pattern = re.compile(r'<td>\?%s</td>[ \t\n\r]*'
-                                 '<td>%s</td>' % (expected_type_suffix, str(name)),
+            pattern = re.compile(br'<td>\?%s</td>[ \t\n\r]*'
+                                 b'<td>%s</td>' % (
+                                     expected_type_suffix, name.encode("ascii")),
                                  re.DOTALL)
             self.failUnless(re.search(pattern, res), res)
             # find the More Info link for name, should be relative
-            mo = re.search(r'<a href="([^"]+)">More Info</a>', res)
+            mo = re.search(br'<a href="([^"]+)">More Info</a>', res)
             info_url = mo.group(1)
-            self.failUnlessReallyEqual(info_url, "%s?t=info" % (str(name),))
+            self.failUnlessReallyEqual(info_url, b"%s?t=info" % (name.encode("ascii"),))
         if immutable:
-            d.addCallback(_check_directory_html, "-IMM")
+            d.addCallback(_check_directory_html, b"-IMM")
         else:
-            d.addCallback(_check_directory_html, "")
+            d.addCallback(_check_directory_html, b"")

         d.addCallback(lambda ign: self.GET(self.rooturl+"?t=json"))
         def _check_directory_json(res, expect_rw_uri):
@@ -383,7 +401,6 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
         d.addCallback(_check_directory_json, expect_rw_uri=not immutable)

         def _check_info(res, expect_rw_uri, expect_ro_uri):
-            self.failUnlessIn("Object Type: <span>unknown</span>", res)
             if expect_rw_uri:
                 self.failUnlessIn(unknown_rwcap, res)
             if expect_ro_uri:
@@ -393,6 +410,8 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
                     self.failUnlessIn(unknown_rocap, res)
             else:
                 self.failIfIn(unknown_rocap, res)
+            res = str(res, "utf-8")
+            self.failUnlessIn("Object Type: <span>unknown</span>", res)
             self.failIfIn("Raw data as", res)
             self.failIfIn("Directory writecap", res)
             self.failIfIn("Checker Operations", res)
@@ -404,7 +423,7 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):

         d.addCallback(lambda ign: self.GET(expected_info_url))
         d.addCallback(_check_info, expect_rw_uri=False, expect_ro_uri=False)
-        d.addCallback(lambda ign: self.GET("%s/%s?t=info" % (self.rooturl, str(name))))
+        d.addCallback(lambda ign: self.GET("%s/%s?t=info" % (self.rooturl, name)))
         d.addCallback(_check_info, expect_rw_uri=False, expect_ro_uri=True)

         def _check_json(res, expect_rw_uri):
@@ -436,9 +455,9 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
         # or not future_node was immutable.
         d.addCallback(lambda ign: self.GET(self.rourl))
         if immutable:
-            d.addCallback(_check_directory_html, "-IMM")
+            d.addCallback(_check_directory_html, b"-IMM")
         else:
-            d.addCallback(_check_directory_html, "-RO")
+            d.addCallback(_check_directory_html, b"-RO")

         d.addCallback(lambda ign: self.GET(self.rourl+"?t=json"))
         d.addCallback(_check_directory_json, expect_rw_uri=False)
@@ -462,9 +481,9 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
         self.uris = {}
         self.fileurls = {}

-        lonely_uri = "URI:LIT:n5xgk" # LIT for "one"
-        mut_write_uri = "URI:SSK:vfvcbdfbszyrsaxchgevhmmlii:euw4iw7bbnkrrwpzuburbhppuxhc3gwxv26f6imekhz7zyw2ojnq"
-        mut_read_uri = "URI:SSK-RO:e3mdrzfwhoq42hy5ubcz6rp3o4:ybyibhnp3vvwuq2vaw2ckjmesgkklfs6ghxleztqidihjyofgw7q"
+        lonely_uri = b"URI:LIT:n5xgk" # LIT for "one"
+        mut_write_uri = b"URI:SSK:vfvcbdfbszyrsaxchgevhmmlii:euw4iw7bbnkrrwpzuburbhppuxhc3gwxv26f6imekhz7zyw2ojnq"
+        mut_read_uri = b"URI:SSK-RO:e3mdrzfwhoq42hy5ubcz6rp3o4:ybyibhnp3vvwuq2vaw2ckjmesgkklfs6ghxleztqidihjyofgw7q"

         # This method tests mainly dirnode, but we'd have to duplicate code in order to
         # test the dirnode and web layers separately.
@@ -507,10 +526,10 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
             rep = str(dn)
             self.failUnlessIn("RO-IMM", rep)
             cap = dn.get_cap()
-            self.failUnlessIn("CHK", cap.to_string())
+            self.failUnlessIn(b"CHK", cap.to_string())
             self.cap = cap
             self.rootnode = dn
-            self.rooturl = "uri/" + urllib.quote(dn.get_uri())
+            self.rooturl = "uri/" + url_quote(dn.get_uri())
             return download_to_data(dn._node)
         d.addCallback(_created)

@@ -526,7 +545,7 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
             entry = entries[0]
             (name_utf8, ro_uri, rwcapdata, metadata_s), subpos = split_netstring(entry, 4)
             name = name_utf8.decode("utf-8")
-            self.failUnlessEqual(rwcapdata, "")
+            self.failUnlessEqual(rwcapdata, b"")
             self.failUnlessIn(name, kids)
             (expected_child, ign) = kids[name]
             self.failUnlessReallyEqual(ro_uri, expected_child.get_readonly_uri())
@@ -553,13 +572,13 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
         d.addCallback(lambda ign: self.GET(self.rooturl))
         def _check_html(res):
             soup = BeautifulSoup(res, 'html5lib')
-            self.failIfIn("URI:SSK", res)
+            self.failIfIn(b"URI:SSK", res)
             found = False
             for td in soup.find_all(u"td"):
                 if td.text != u"FILE":
                     continue
                 a = td.findNextSibling()(u"a")[0]
-                self.assertIn(urllib.quote(lonely_uri), a[u"href"])
+                self.assertIn(url_quote(lonely_uri), a[u"href"])
                 self.assertEqual(u"lonely", a.text)
                 self.assertEqual(a[u"rel"], [u"noreferrer"])
                 self.assertEqual(u"{}".format(len("one")), td.findNextSibling().findNextSibling().text)
@@ -573,7 +592,7 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
                 if a.text == u"More Info"
             )
             self.assertEqual(1, len(infos))
-            self.assertTrue(infos[0].endswith(urllib.quote(lonely_uri) + "?t=info"))
+            self.assertTrue(infos[0].endswith(url_quote(lonely_uri) + "?t=info"))
         d.addCallback(_check_html)

         # ... and in JSON.
@@ -596,12 +615,12 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
         c0 = self.g.clients[0]
         self.uris = {}
         self.fileurls = {}
-        DATA = "data" * 100
+        DATA = b"data" * 100
         d = c0.create_dirnode()
         def _stash_root_and_create_file(n):
             self.rootnode = n
-            self.fileurls["root"] = "uri/" + urllib.quote(n.get_uri())
-            return n.add_file(u"good", upload.Data(DATA, convergence=""))
+            self.fileurls["root"] = "uri/" + url_quote(n.get_uri())
+            return n.add_file(u"good", upload.Data(DATA, convergence=b""))
         d.addCallback(_stash_root_and_create_file)
         def _stash_uri(fn, which):
             self.uris[which] = fn.get_uri()
@@ -609,13 +628,13 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
         d.addCallback(_stash_uri, "good")
         d.addCallback(lambda ign:
                       self.rootnode.add_file(u"small",
-                                             upload.Data("literal",
-                                                         convergence="")))
+                                             upload.Data(b"literal",
+                                                         convergence=b"")))
         d.addCallback(_stash_uri, "small")
         d.addCallback(lambda ign:
                       self.rootnode.add_file(u"sick",
-                                             upload.Data(DATA+"1",
-                                                         convergence="")))
+                                             upload.Data(DATA+b"1",
+                                                         convergence=b"")))
         d.addCallback(_stash_uri, "sick")

         # this tests that deep-check and stream-manifest will ignore
@@ -695,13 +714,13 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
         d.addCallback(_stash_uri, "subdir")
         d.addCallback(lambda subdir_node:
                       subdir_node.add_file(u"grandchild",
-                                           upload.Data(DATA+"2",
-                                                       convergence="")))
+                                           upload.Data(DATA+b"2",
+                                                       convergence=b"")))
         d.addCallback(_stash_uri, "grandchild")

         d.addCallback(lambda ign:
                       self.delete_shares_numbered(self.uris["subdir"],
-                                                  range(1, 10)))
+                                                  list(range(1, 10))))

         # root
         # root/good
@@ -770,30 +789,30 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
         c0 = self.g.clients[0]
         self.uris = {}
         self.fileurls = {}
-        DATA = "data" * 100
+        DATA = b"data" * 100
         d = c0.create_dirnode()
         def _stash_root_and_create_file(n):
             self.rootnode = n
-            self.fileurls["root"] = "uri/" + urllib.quote(n.get_uri())
-            return n.add_file(u"good", upload.Data(DATA, convergence=""))
+            self.fileurls["root"] = "uri/" + url_quote(n.get_uri())
+            return n.add_file(u"good", upload.Data(DATA, convergence=b""))
         d.addCallback(_stash_root_and_create_file)
         def _stash_uri(fn, which):
             self.uris[which] = fn.get_uri()
         d.addCallback(_stash_uri, "good")
         d.addCallback(lambda ign:
                       self.rootnode.add_file(u"small",
-                                             upload.Data("literal",
-                                                         convergence="")))
+                                             upload.Data(b"literal",
+                                                         convergence=b"")))
         d.addCallback(_stash_uri, "small")
         d.addCallback(lambda ign:
                       self.rootnode.add_file(u"sick",
-                                             upload.Data(DATA+"1",
-                                                         convergence="")))
+                                             upload.Data(DATA+b"1",
+                                                         convergence=b"")))
         d.addCallback(_stash_uri, "sick")
         #d.addCallback(lambda ign:
         #              self.rootnode.add_file(u"dead",
-        #                                     upload.Data(DATA+"2",
-        #                                                 convergence="")))
+        #                                     upload.Data(DATA+b"2",
+        #                                                 convergence=b"")))
         #d.addCallback(_stash_uri, "dead")

         #d.addCallback(lambda ign: c0.create_mutable_file("mutable"))
@@ -888,25 +907,25 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
         self.set_up_grid(num_clients=2, oneshare=True)
         c0 = self.g.clients[0]
         self.uris = {}
-        DATA = "data" * 100
-        d = c0.upload(upload.Data(DATA, convergence=""))
+        DATA = b"data" * 100
+        d = c0.upload(upload.Data(DATA, convergence=b""))
         def _stash_uri(ur, which):
             self.uris[which] = ur.get_uri()
         d.addCallback(_stash_uri, "one")
         d.addCallback(lambda ign:
-                      c0.upload(upload.Data(DATA+"1", convergence="")))
+                      c0.upload(upload.Data(DATA+b"1", convergence=b"")))
         d.addCallback(_stash_uri, "two")
         def _stash_mutable_uri(n, which):
             self.uris[which] = n.get_uri()
-            assert isinstance(self.uris[which], str)
+            assert isinstance(self.uris[which], bytes)
         d.addCallback(lambda ign:
-                      c0.create_mutable_file(publish.MutableData(DATA+"2")))
+                      c0.create_mutable_file(publish.MutableData(DATA+b"2")))
         d.addCallback(_stash_mutable_uri, "mutable")

         def _compute_fileurls(ignored):
             self.fileurls = {}
             for which in self.uris:
-                self.fileurls[which] = "uri/" + urllib.quote(self.uris[which])
+                self.fileurls[which] = "uri/" + url_quote(self.uris[which])
         d.addCallback(_compute_fileurls)

         d.addCallback(self._count_leases, "one")
@@ -982,25 +1001,25 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
         c0 = self.g.clients[0]
         self.uris = {}
         self.fileurls = {}
-        DATA = "data" * 100
+        DATA = b"data" * 100
         d = c0.create_dirnode()
         def _stash_root_and_create_file(n):
             self.rootnode = n
             self.uris["root"] = n.get_uri()
-            self.fileurls["root"] = "uri/" + urllib.quote(n.get_uri())
-            return n.add_file(u"one", upload.Data(DATA, convergence=""))
+            self.fileurls["root"] = "uri/" + url_quote(n.get_uri())
+            return n.add_file(u"one", upload.Data(DATA, convergence=b""))
         d.addCallback(_stash_root_and_create_file)
         def _stash_uri(fn, which):
             self.uris[which] = fn.get_uri()
         d.addCallback(_stash_uri, "one")
         d.addCallback(lambda ign:
                       self.rootnode.add_file(u"small",
-                                             upload.Data("literal",
-                                                         convergence="")))
+                                             upload.Data(b"literal",
+                                                         convergence=b"")))
         d.addCallback(_stash_uri, "small")

         d.addCallback(lambda ign:
-                      c0.create_mutable_file(publish.MutableData("mutable")))
+                      c0.create_mutable_file(publish.MutableData(b"mutable")))
         d.addCallback(lambda fn: self.rootnode.set_node(u"mutable", fn))
         d.addCallback(_stash_uri, "mutable")

@@ -1051,36 +1070,36 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
         c0 = self.g.clients[0]
         c0.encoding_params['happy'] = 2
         self.fileurls = {}
-        DATA = "data" * 100
+        DATA = b"data" * 100
         d = c0.create_dirnode()
         def _stash_root(n):
-            self.fileurls["root"] = "uri/" + urllib.quote(n.get_uri())
+            self.fileurls["root"] = "uri/" + url_quote(n.get_uri())
             self.fileurls["imaginary"] = self.fileurls["root"] + "/imaginary"
             return n
         d.addCallback(_stash_root)
-        d.addCallback(lambda ign: c0.upload(upload.Data(DATA, convergence="")))
+        d.addCallback(lambda ign: c0.upload(upload.Data(DATA, convergence=b"")))
         def _stash_bad(ur):
-            self.fileurls["1share"] = "uri/" + urllib.quote(ur.get_uri())
-            self.delete_shares_numbered(ur.get_uri(), range(1,10))
+            self.fileurls["1share"] = "uri/" + url_quote(ur.get_uri())
+            self.delete_shares_numbered(ur.get_uri(), list(range(1,10)))

             u = uri.from_string(ur.get_uri())
             u.key = testutil.flip_bit(u.key, 0)
             baduri = u.to_string()
-            self.fileurls["0shares"] = "uri/" + urllib.quote(baduri)
+            self.fileurls["0shares"] = "uri/" + url_quote(baduri)
         d.addCallback(_stash_bad)
         d.addCallback(lambda ign: c0.create_dirnode())
         def _mangle_dirnode_1share(n):
             u = n.get_uri()
-            url = self.fileurls["dir-1share"] = "uri/" + urllib.quote(u)
+            url = self.fileurls["dir-1share"] = "uri/" + url_quote(u)
             self.fileurls["dir-1share-json"] = url + "?t=json"
-            self.delete_shares_numbered(u, range(1,10))
+            self.delete_shares_numbered(u, list(range(1,10)))
         d.addCallback(_mangle_dirnode_1share)
         d.addCallback(lambda ign: c0.create_dirnode())
         def _mangle_dirnode_0share(n):
             u = n.get_uri()
-            url = self.fileurls["dir-0share"] = "uri/" + urllib.quote(u)
+            url = self.fileurls["dir-0share"] = "uri/" + url_quote(u)
             self.fileurls["dir-0share-json"] = url + "?t=json"
-            self.delete_shares_numbered(u, range(0,10))
+            self.delete_shares_numbered(u, list(range(0,10)))
         d.addCallback(_mangle_dirnode_0share)

         # NotEnoughSharesError should be reported sensibly, with a
@@ -1092,6 +1111,7 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
                                    410, "Gone", "NoSharesError",
                                    self.GET, self.fileurls["0shares"]))
         def _check_zero_shares(body):
+            body = str(body, "utf-8")
             self.failIfIn("<html>", body)
             body = " ".join(body.strip().split())
             exp = ("NoSharesError: no shares could be found. "
@@ -1100,7 +1120,7 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
                    "severe corruption. You should perform a filecheck on "
                    "this object to learn more. The full error message is: "
                    "no shares (need 3). Last failure: None")
-            self.failUnlessReallyEqual(exp, body)
+            self.assertEqual(exp, body)
         d.addCallback(_check_zero_shares)


@@ -1109,6 +1129,7 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
                                    410, "Gone", "NotEnoughSharesError",
                                    self.GET, self.fileurls["1share"]))
         def _check_one_share(body):
+            body = str(body, "utf-8")
             self.failIfIn("<html>", body)
             body = " ".join(body.strip().split())
             msgbase = ("NotEnoughSharesError: This indicates that some "
@@ -1133,10 +1154,11 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
                                    404, "Not Found", None,
                                    self.GET, self.fileurls["imaginary"]))
         def _missing_child(body):
+            body = str(body, "utf-8")
             self.failUnlessIn("No such child: imaginary", body)
         d.addCallback(_missing_child)

-        d.addCallback(lambda ignored: self.GET(self.fileurls["dir-0share"]))
+        d.addCallback(lambda ignored: self.GET_unicode(self.fileurls["dir-0share"]))
         def _check_0shares_dir_html(body):
             self.failUnlessIn(DIR_HTML_TAG, body)
             # we should see the regular page, but without the child table or
@@ -1155,7 +1177,7 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
             self.failUnlessIn("No upload forms: directory is unreadable", body)
         d.addCallback(_check_0shares_dir_html)

-        d.addCallback(lambda ignored: self.GET(self.fileurls["dir-1share"]))
+        d.addCallback(lambda ignored: self.GET_unicode(self.fileurls["dir-1share"]))
         def _check_1shares_dir_html(body):
             # at some point, we'll split UnrecoverableFileError into 0-shares
             # and some-shares like we did for immutable files (since there
@@ -1182,6 +1204,7 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
                                    self.GET,
                                    self.fileurls["dir-0share-json"]))
         def _check_unrecoverable_file(body):
+            body = str(body, "utf-8")
             self.failIfIn("<html>", body)
             body = " ".join(body.strip().split())
             exp = ("UnrecoverableFileError: the directory (or mutable file) "
@@ -1209,7 +1232,7 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
         # attach a webapi child that throws a random error, to test how it
         # gets rendered.
         w = c0.getServiceNamed("webish")
-        w.root.putChild("ERRORBOOM", ErrorBoom())
+        w.root.putChild(b"ERRORBOOM", ErrorBoom())

        # "Accept: */*" : should get a text/html stack trace
        # "Accept: text/plain" : should get a text/plain stack trace
@@ -1222,6 +1245,7 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
                                    self.GET, "ERRORBOOM",
                                    headers={"accept": "*/*"}))
         def _internal_error_html1(body):
+            body = str(body, "utf-8")
             self.failUnlessIn("<html>", "expected HTML, not '%s'" % body)
         d.addCallback(_internal_error_html1)

@@ -1231,6 +1255,7 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
                                    self.GET, "ERRORBOOM",
                                    headers={"accept": "text/plain"}))
         def _internal_error_text2(body):
+            body = str(body, "utf-8")
             self.failIfIn("<html>", body)
             self.failUnless(body.startswith("Traceback "), body)
         d.addCallback(_internal_error_text2)
@@ -1242,6 +1267,7 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
                                    self.GET, "ERRORBOOM",
                                    headers={"accept": CLI_accepts}))
         def _internal_error_text3(body):
+            body = str(body, "utf-8")
             self.failIfIn("<html>", body)
             self.failUnless(body.startswith("Traceback "), body)
         d.addCallback(_internal_error_text3)
@@ -1251,7 +1277,7 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
                                    500, "Internal Server Error", None,
                                    self.GET, "ERRORBOOM"))
         def _internal_error_html4(body):
-            self.failUnlessIn("<html>", body)
+            self.failUnlessIn(b"<html>", body)
         d.addCallback(_internal_error_html4)

         def _flush_errors(res):
@@ -1269,12 +1295,12 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
         c0 = self.g.clients[0]
         fn = c0.config.get_config_path("access.blacklist")
         self.uris = {}
-        DATA = "off-limits " * 50
+        DATA = b"off-limits " * 50

-        d = c0.upload(upload.Data(DATA, convergence=""))
+        d = c0.upload(upload.Data(DATA, convergence=b""))
         def _stash_uri_and_create_dir(ur):
             self.uri = ur.get_uri()
-            self.url = "uri/"+self.uri
+            self.url = b"uri/"+self.uri
             u = uri.from_string_filenode(self.uri)
             self.si = u.get_storage_index()
             childnode = c0.create_node_from_uri(self.uri, None)
@@ -1283,9 +1309,9 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
         def _stash_dir(node):
             self.dir_node = node
             self.dir_uri = node.get_uri()
-            self.dir_url = "uri/"+self.dir_uri
+            self.dir_url = b"uri/"+self.dir_uri
         d.addCallback(_stash_dir)
-        d.addCallback(lambda ign: self.GET(self.dir_url, followRedirect=True))
+        d.addCallback(lambda ign: self.GET_unicode(self.dir_url, followRedirect=True))
         def _check_dir_html(body):
             self.failUnlessIn(DIR_HTML_TAG, body)
             self.failUnlessIn("blacklisted.txt</a>", body)
@@ -1298,7 +1324,7 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
             f.write(" # this is a comment\n")
             f.write(" \n")
             f.write("\n") # also exercise blank lines
-            f.write("%s %s\n" % (base32.b2a(self.si), "off-limits to you"))
+            f.write("%s off-limits to you\n" % (str(base32.b2a(self.si), "ascii"),))
             f.close()
             # clients should be checking the blacklist each time, so we don't
             # need to restart the client
@@ -1309,14 +1335,14 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
                                    self.GET, self.url))

         # We should still be able to list the parent directory, in HTML...
-        d.addCallback(lambda ign: self.GET(self.dir_url, followRedirect=True))
+        d.addCallback(lambda ign: self.GET_unicode(self.dir_url, followRedirect=True))
         def _check_dir_html2(body):
             self.failUnlessIn(DIR_HTML_TAG, body)
             self.failUnlessIn("blacklisted.txt</strike>", body)
         d.addCallback(_check_dir_html2)

         # ... and in JSON (used by CLI).
-        d.addCallback(lambda ign: self.GET(self.dir_url+"?t=json", followRedirect=True))
+        d.addCallback(lambda ign: self.GET(self.dir_url+b"?t=json", followRedirect=True))
         def _check_dir_json(res):
             data = json.loads(res)
             self.failUnless(isinstance(data, list), data)
@@ -1355,14 +1381,14 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
         d.addCallback(_add_dir)
         def _get_dircap(dn):
             self.dir_si_b32 = base32.b2a(dn.get_storage_index())
-            self.dir_url_base = "uri/"+dn.get_write_uri()
-            self.dir_url_json1 = "uri/"+dn.get_write_uri()+"?t=json"
-            self.dir_url_json2 = "uri/"+dn.get_write_uri()+"?t=json"
-            self.dir_url_json_ro = "uri/"+dn.get_readonly_uri()+"?t=json"
-            self.child_url = "uri/"+dn.get_readonly_uri()+"/child"
+            self.dir_url_base = b"uri/"+dn.get_write_uri()
+            self.dir_url_json1 = b"uri/"+dn.get_write_uri()+b"?t=json"
+            self.dir_url_json2 = b"uri/"+dn.get_write_uri()+b"?t=json"
+            self.dir_url_json_ro = b"uri/"+dn.get_readonly_uri()+b"?t=json"
+            self.child_url = b"uri/"+dn.get_readonly_uri()+b"/child"
         d.addCallback(_get_dircap)
         d.addCallback(lambda ign: self.GET(self.dir_url_base, followRedirect=True))
-        d.addCallback(lambda body: self.failUnlessIn(DIR_HTML_TAG, body))
+        d.addCallback(lambda body: self.failUnlessIn(DIR_HTML_TAG, str(body, "utf-8")))
         d.addCallback(lambda ign: self.GET(self.dir_url_json1))
         d.addCallback(lambda res: json.loads(res))  # just check it decodes
         d.addCallback(lambda ign: self.GET(self.dir_url_json2))
@@ -1373,8 +1399,8 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
         d.addCallback(lambda body: self.failUnlessEqual(DATA, body))

         def _block_dir(ign):
-            f = open(fn, "w")
-            f.write("%s %s\n" % (self.dir_si_b32, "dir-off-limits to you"))
+            f = open(fn, "wb")
+            f.write(b"%s %s\n" % (self.dir_si_b32, b"dir-off-limits to you"))
             f.close()
             self.g.clients[0].blacklist.last_mtime -= 2.0
         d.addCallback(_block_dir)
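The test above exercises the `access.blacklist` format: one base32 storage index per
line, followed by a free-form reason, with blank lines and `#` comments ignored. A
hedged sketch of writing such a file (hypothetical storage-index value):

    from allmydata.util import base32

    si = b"\x00" * 16                       # hypothetical 16-byte storage index
    with open("access.blacklist", "w") as f:
        f.write("# lines starting with '#' are comments\n")
        f.write("%s off-limits to you\n" % (str(base32.b2a(si), "ascii"),))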
@@ -1,7 +1,13 @@
 from mock import Mock

 import time

+from urllib import (
+    quote,
+)
+
+from bs4 import (
+    BeautifulSoup,
+)
+
 from twisted.trial import unittest
 from twisted.web.template import Tag
 from twisted.web.test.requesthelper import DummyRequest
@@ -16,6 +22,9 @@ from ...util.connection_status import ConnectionStatus
 from allmydata.web.root import URIHandler
 from allmydata.client import _Client

+from .common import (
+    assert_soup_has_tag_with_attributes,
+)
 from ..common_web import (
     render,
 )
@@ -30,28 +39,37 @@ class RenderSlashUri(unittest.TestCase):
     """

     def setUp(self):
-        self.client = Mock()
+        self.client = object()
         self.res = URIHandler(self.client)

-    def test_valid(self):
+    def test_valid_query_redirect(self):
         """
-        A valid capbility does not result in error
+        A syntactically valid capability given in the ``uri`` query argument
+        results in a redirect.
         """
-        query_args = {b"uri": [
+        cap = (
             b"URI:CHK:nt2xxmrccp7sursd6yh2thhcky:"
             b"mukesarwdjxiyqsjinbfiiro6q7kgmmekocxfjcngh23oxwyxtzq:2:5:5874882"
-        ]}
+        )
+        query_args = {b"uri": [cap]}
         response_body = self.successResultOf(
             render(self.res, query_args),
         )
-        self.assertNotEqual(
-            response_body,
-            "Invalid capability",
+        soup = BeautifulSoup(response_body, 'html5lib')
+        tag = assert_soup_has_tag_with_attributes(
+            self,
+            soup,
+            u"meta",
+            {u"http-equiv": "refresh"},
         )
+        self.assertIn(
+            quote(cap, safe=""),
+            tag.attrs.get(u"content"),
+        )

     def test_invalid(self):
         """
-        A (trivially) invalid capbility is an error
+        A syntactically invalid capability results in an error.
         """
         query_args = {b"uri": [b"not a capability"]}
         response_body = self.successResultOf(
@@ -1,6 +1,16 @@
 """
 Tests for ``allmydata.web.status``.
+
+Ported to Python 3.
 """
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+from __future__ import unicode_literals
+
+from future.utils import PY2
+if PY2:
+    from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401

 from bs4 import BeautifulSoup
 from twisted.web.template import flattenString
@@ -143,12 +153,12 @@ class DownloadStatusElementTests(TrialTestCase):
         See if we can render the page almost fully.
         """
         status = FakeDownloadStatus(
-            "si-1", 123,
-            ["s-1", "s-2", "s-3"],
-            {"s-1": "unknown problem"},
-            {"s-1": [1], "s-2": [1,2], "s-3": [2,3]},
+            b"si-1", 123,
+            [b"s-1", b"s-2", b"s-3"],
+            {b"s-1": "unknown problem"},
+            {b"s-1": [1], b"s-2": [1,2], b"s-3": [2,3]},
             {"fetch_per_server":
-             {"s-1": [1], "s-2": [2,3], "s-3": [3,2]}}
+             {b"s-1": [1], b"s-2": [2,3], b"s-3": [3,2]}}
         )

         result = self._render_download_status_element(status)
@@ -1,3 +1,15 @@
+"""
+Ported to Python 3.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+from __future__ import unicode_literals
+
+from future.utils import PY2
+if PY2:
+    from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401
+
 from twisted.trial import unittest
 from allmydata.web import status, common
 from ..common import ShouldFailMixin
@@ -90,7 +90,7 @@ class FakeNodeMaker(NodeMaker):
         return FakeMutableFileNode(None, None,
                                    self.encoding_params, None,
                                    self.all_contents).init_from_cap(cap)
-    def create_mutable_file(self, contents="", keysize=None,
+    def create_mutable_file(self, contents=b"", keysize=None,
                             version=SDMF_VERSION):
         n = FakeMutableFileNode(None, None, self.encoding_params, None,
                                 self.all_contents)
@@ -105,7 +105,7 @@ class FakeUploader(service.Service):
         d = uploadable.get_size()
         d.addCallback(lambda size: uploadable.read(size))
         def _got_data(datav):
-            data = "".join(datav)
+            data = b"".join(datav)
             n = create_chk_filenode(data, self.all_contents)
             ur = upload.UploadResults(file_size=len(data),
                                       ciphertext_fetched=0,
@@ -127,12 +127,12 @@ class FakeUploader(service.Service):


 def build_one_ds():
-    ds = DownloadStatus("storage_index", 1234)
+    ds = DownloadStatus(b"storage_index", 1234)
     now = time.time()

-    serverA = StubServer(hashutil.tagged_hash("foo", "serverid_a")[:20])
-    serverB = StubServer(hashutil.tagged_hash("foo", "serverid_b")[:20])
-    storage_index = hashutil.storage_index_hash("SI")
+    serverA = StubServer(hashutil.tagged_hash(b"foo", b"serverid_a")[:20])
+    serverB = StubServer(hashutil.tagged_hash(b"foo", b"serverid_b")[:20])
+    storage_index = hashutil.storage_index_hash(b"SI")
     e0 = ds.add_segment_request(0, now)
     e0.activate(now+0.5)
     e0.deliver(now+1, 0, 100, 0.5) # when, start,len, decodetime
@@ -261,7 +261,7 @@ class FakeClient(_Client):
         # minimal subset
         service.MultiService.__init__(self)
         self.all_contents = {}
-        self.nodeid = "fake_nodeid"
+        self.nodeid = b"fake_nodeid"
         self.nickname = u"fake_nickname \u263A"
         self.introducer_furls = []
         self.introducer_clients = []
@@ -277,7 +277,7 @@ class FakeClient(_Client):
         # fake knowledge of another server
         self.storage_broker.test_add_server("other_nodeid",
             FakeDisplayableServer(
-                serverid="other_nodeid", nickname=u"other_nickname \u263B", connected = True,
+                serverid=b"other_nodeid", nickname=u"other_nickname \u263B", connected = True,
                 last_connect_time = 10, last_loss_time = 20, last_rx_time = 30))
         self.storage_broker.test_add_server("disconnected_nodeid",
             FakeDisplayableServer(
@@ -746,7 +746,10 @@ class MultiFormatResourceTests(TrialTestCase):
             "<title>400 - Bad Format</title>", response_body,
         )
         self.assertIn(
-            "Unknown t value: 'foo'", response_body,
+            "Unknown t value:", response_body,
+        )
+        self.assertIn(
+            "'foo'", response_body,
         )

@@ -31,8 +31,8 @@ class UnknownNode(object):

     def __init__(self, given_rw_uri, given_ro_uri, deep_immutable=False,
                  name=u"<unknown name>"):
-        assert given_rw_uri is None or isinstance(given_rw_uri, str)
-        assert given_ro_uri is None or isinstance(given_ro_uri, str)
+        assert given_rw_uri is None or isinstance(given_rw_uri, bytes)
+        assert given_ro_uri is None or isinstance(given_ro_uri, bytes)
         given_rw_uri = given_rw_uri or None
         given_ro_uri = given_ro_uri or None

@@ -182,3 +182,11 @@ class UnknownNode(object):

     def check_and_repair(self, monitor, verify, add_lease):
         return defer.succeed(None)
+
+    def __eq__(self, other):
+        if not isinstance(other, UnknownNode):
+            return False
+        return other.ro_uri == self.ro_uri and other.rw_uri == self.rw_uri
+
+    def __ne__(self, other):
+        return not (self == other)
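The explicit `__ne__` matters for code that still runs on Python 2, where `!=` is not
derived from `__eq__`; Python 3 alone would infer it. A minimal illustration of the
pattern (hypothetical class, not part of the patch):

    class P2Style(object):
        def __init__(self, v):
            self.v = v
        def __eq__(self, other):
            return isinstance(other, P2Style) and other.v == self.v
        def __ne__(self, other):
            # needed on Python 2; Python 3 would derive it from __eq__
            return not (self == other)

    assert P2Style(1) == P2Style(1)
    assert P2Style(1) != P2Style(2)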
@@ -34,6 +34,7 @@ PORTED_MODULES = [
     "allmydata.crypto.error",
     "allmydata.crypto.rsa",
     "allmydata.crypto.util",
+    "allmydata.dirnode",
     "allmydata.hashtree",
     "allmydata.immutable.checker",
     "allmydata.immutable.downloader",
@@ -67,6 +68,7 @@ PORTED_MODULES = [
     "allmydata.mutable.retrieve",
     "allmydata.mutable.servermap",
     "allmydata.node",
+    "allmydata.nodemaker",
     "allmydata.storage_client",
     "allmydata.storage.common",
     "allmydata.storage.crawler",
@@ -88,12 +90,14 @@ PORTED_MODULES = [
     "allmydata.util.connection_status",
     "allmydata.util.deferredutil",
     "allmydata.util.dictutil",
+    "allmydata.util.eliotutil",
     "allmydata.util.encodingutil",
     "allmydata.util.fileutil",
     "allmydata.util.gcutil",
     "allmydata.util.happinessutil",
     "allmydata.util.hashutil",
     "allmydata.util.humanreadable",
     "allmydata.util.idlib",
     "allmydata.util.iputil",
+    "allmydata.util.jsonbytes",
     "allmydata.util.log",
@@ -136,7 +140,9 @@ PORTED_TEST_MODULES = [
     "allmydata.test.test_crypto",
     "allmydata.test.test_deferredutil",
     "allmydata.test.test_dictutil",
+    "allmydata.test.test_dirnode",
+    "allmydata.test.test_download",
     "allmydata.test.test_eliotutil",
     "allmydata.test.test_encode",
     "allmydata.test.test_encodingutil",
     "allmydata.test.test_filenode",
@@ -148,6 +154,7 @@ PORTED_TEST_MODULES = [
     "allmydata.test.test_immutable",
     "allmydata.test.test_introducer",
     "allmydata.test.test_iputil",
+    "allmydata.test.test_json_metadata",
     "allmydata.test.test_log",
     "allmydata.test.test_monitor",
     "allmydata.test.test_netstring",
@@ -162,8 +169,18 @@ PORTED_TEST_MODULES = [
     "allmydata.test.test_storage",
     "allmydata.test.test_storage_client",
     "allmydata.test.test_storage_web",
+
+    # Only partially ported, test_filesystem_with_cli_in_subprocess and
+    # test_filesystem methods aren't ported yet, should be done once CLI and
+    # web are ported respectively.
+    "allmydata.test.test_system",
+
     "allmydata.test.test_time_format",
     "allmydata.test.test_upload",
     "allmydata.test.test_uri",
     "allmydata.test.test_util",
+    "allmydata.test.web.test_common",
     "allmydata.test.web.test_grid",
     "allmydata.test.web.test_util",
+    "allmydata.test.web.test_status",
 ]
@@ -133,6 +133,8 @@ def a2b(cs):
    """
    @param cs the base-32 encoded data (as bytes)
    """
    # Workaround Future newbytes issues by converting to real bytes on Python 2:
    cs = backwardscompat_bytes(cs)
    precondition(could_be_base32_encoded(cs), "cs is required to be possibly base32 encoded data.", cs=cs)
    precondition(isinstance(cs, bytes), cs)

@@ -140,7 +142,9 @@ def a2b(cs):
    # Add padding back, to make Python's base64 module happy:
    while (len(cs) * 5) % 8 != 0:
        cs += b"="
    return base64.b32decode(cs)
    # Let newbytes come through and still work on Python 2, where the base64
    # module gets confused by them.
    return base64.b32decode(backwardscompat_bytes(cs))


__all__ = ["b2a", "a2b", "b2a_or_none", "BASE32CHAR_3bits", "BASE32CHAR_1bits", "BASE32CHAR", "BASE32STR_anybytes", "could_be_base32_encoded"]
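The padding loop above restores the trailing "=" characters that Tahoe strips from its base32 strings, since the stdlib decoder insists on them. A minimal sketch of just that step, using the stdlib's uppercase alphabet rather than Tahoe's own (`pad_b32` is an illustrative name, not part of the codebase):

    import base64

    def pad_b32(cs):
        # each base32 character carries 5 bits; pad with "=" until the
        # total is a whole number of 8-bit bytes, as b32decode expects
        while (len(cs) * 5) % 8 != 0:
            cs += b"="
        return cs

    assert base64.b32decode(pad_b32(b"MFRGG")) == b"abc"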
@@ -1,6 +1,12 @@
"""
Tools aimed at the interaction between Tahoe-LAFS implementation and Eliot.

Ported to Python 3.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

from __future__ import (
    unicode_literals,
@@ -18,6 +24,11 @@ __all__ = [
    "validateSetMembership",
]

from future.utils import PY2
if PY2:
    from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401
from six import ensure_text

from sys import (
    stdout,
)
@@ -75,6 +86,9 @@ from twisted.internet.defer import (
)
from twisted.application.service import Service

from .jsonbytes import BytesJSONEncoder


def validateInstanceOf(t):
    """
    Return an Eliot validator that requires values to be instances of ``t``.
@@ -228,7 +242,7 @@ def _stdlib_logging_to_eliot_configuration(stdlib_logger, eliot_logger=None):

class _DestinationParser(object):
    def parse(self, description):
        description = description.decode(u"ascii")
        description = ensure_text(description)

        try:
            kind, args = description.split(u":", 1)
@@ -291,7 +305,7 @@ class _DestinationParser(object):
            rotateLength=rotate_length,
            maxRotatedFiles=max_rotated_files,
        )
        return lambda reactor: FileDestination(get_file())
        return lambda reactor: FileDestination(get_file(), BytesJSONEncoder)


_parse_destination_description = _DestinationParser().parse
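Replacing the bare `description.decode(u"ascii")` with `ensure_text()` matters because on Python 3 the destination description may already be text, and `str` has no usable `decode()`. A short illustration with `six`, which the hunk itself imports:

    from six import ensure_text

    # ensure_text() accepts bytes or text and always returns text,
    # so the parser no longer cares which type the caller hands it
    assert ensure_text(b"file:eliot.log") == u"file:eliot.log"
    assert ensure_text(u"file:eliot.log") == u"file:eliot.log"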
@@ -2,11 +2,18 @@
from __future__ import absolute_import, print_function, with_statement
import os

from zope.interface import (
    implementer,
)

from twisted.internet.defer import inlineCallbacks, returnValue
from twisted.internet.endpoints import clientFromString
from twisted.internet.error import ConnectionRefusedError, ConnectError
from twisted.application import service

from ..interfaces import (
    IAddressFamily,
)

def create(reactor, config):
    """
@@ -135,6 +142,7 @@ def create_config(reactor, cli_config):
    returnValue((tahoe_config_i2p, i2p_port, i2p_location))


@implementer(IAddressFamily)
class _Provider(service.MultiService):
    def __init__(self, config, reactor):
        service.MultiService.__init__(self)
@@ -160,7 +168,14 @@ class _Provider(service.MultiService):
                     (privkeyfile, external_port, escaped_sam_port)
        return i2p_port

    def get_i2p_handler(self):
    def get_client_endpoint(self):
        """
        Get an ``IStreamClientEndpoint`` which will set up a connection to an I2P
        address.

        If I2P is not enabled or the dependencies are not available, return
        ``None`` instead.
        """
        enabled = self._get_i2p_config("enabled", True, boolean=True)
        if not enabled:
            return None
@@ -188,6 +203,9 @@ class _Provider(service.MultiService):

        return self._i2p.default(self._reactor, keyfile=keyfile)

    # Backwards compatibility alias
    get_i2p_handler = get_client_endpoint

    def check_dest_config(self):
        if self._get_i2p_config("dest", False, boolean=True):
            if not self._txi2p:
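The class-attribute alias at the end of the hunk renames the method to the interface's spelling while keeping the old name importable. A minimal sketch of the pattern (`Provider` here is a stand-in, not the real `_Provider`):

    class Provider(object):
        def get_client_endpoint(self):
            # new, interface-mandated name; None means "not enabled"
            return None

        # legacy spelling kept as a class-level alias so existing
        # callers of provider.get_i2p_handler() work unchanged
        get_i2p_handler = get_client_endpoint

    assert Provider().get_i2p_handler() is None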
@@ -1,9 +1,29 @@
"""
Ported to Python 3.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

from future.utils import PY2
if PY2:
    from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401

from six import ensure_text
from foolscap import base32


def nodeid_b2a(nodeid):
    # we display nodeids using the same base32 alphabet that Foolscap uses
    return base32.encode(nodeid)
    """
    We display nodeids using the same base32 alphabet that Foolscap uses.

    Returns a Unicode string.
    """
    return ensure_text(base32.encode(nodeid))

def shortnodeid_b2a(nodeid):
    """
    Short version of nodeid_b2a() output, Unicode string.
    """
    return nodeid_b2a(nodeid)[:8]
@@ -16,6 +16,9 @@ if PY2:
import weakref
from twisted.internet import defer
from foolscap.api import eventually
from twisted.logger import (
    Logger,
)

"""The idiom we use is for the observed object to offer a method named
'when_something', which returns a deferred. That deferred will be fired when
@@ -97,7 +100,10 @@ class LazyOneShotObserverList(OneShotObserverList):
            self._fire(self._get_result())

class ObserverList(object):
    """A simple class to distribute events to a number of subscribers."""
    """
    Immediately distribute events to a number of subscribers.
    """
    _logger = Logger()

    def __init__(self):
        self._watchers = []
@@ -109,8 +115,11 @@ class ObserverList(object):
        self._watchers.remove(observer)

    def notify(self, *args, **kwargs):
        for o in self._watchers:
            eventually(o, *args, **kwargs)
        for o in self._watchers[:]:
            try:
                o(*args, **kwargs)
            except Exception:
                self._logger.failure("While notifying {o!r}", o=o)

class EventStreamObserver(object):
    """A simple class to distribute multiple events to a single subscriber.
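The rewritten `notify()` trades Foolscap's `eventually()` (which queues each call on the reactor) for immediate, synchronous delivery: it iterates over a copy of the watcher list and logs observer failures instead of letting them propagate. A self-contained sketch of those semantics (`ObserverListSketch` is illustrative; the real class logs through `twisted.logger`):

    class ObserverListSketch(object):
        def __init__(self):
            self._watchers = []

        def subscribe(self, observer):
            self._watchers.append(observer)

        def notify(self, *args, **kwargs):
            # iterate over a copy so an observer may unsubscribe itself
            # during delivery; one failing observer must not keep the
            # rest from being notified
            for o in self._watchers[:]:
                try:
                    o(*args, **kwargs)
                except Exception:
                    pass  # the real code calls self._logger.failure(...)

    seen = []
    olist = ObserverListSketch()
    olist.subscribe(seen.append)
    olist.notify("event")
    assert seen == ["event"]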
@@ -2,6 +2,10 @@
from __future__ import absolute_import, print_function, with_statement
import os

from zope.interface import (
    implementer,
)

from twisted.internet.defer import inlineCallbacks, returnValue
from twisted.internet.endpoints import clientFromString, TCP4ServerEndpoint
from twisted.internet.error import ConnectionRefusedError, ConnectError
@@ -9,25 +13,11 @@ from twisted.application import service

from .observer import OneShotObserverList
from .iputil import allocate_tcp_port


def create(reactor, config):
    """
    Create a new _Provider service (this is an IService so must be
    hooked up to a parent or otherwise started).

    If foolscap.connections.tor or txtorcon are not installed, then
    Provider.get_tor_handler() will return None. If tahoe.cfg wants
    to start an onion service too, then this `create()` method will
    throw a nice error (and startService will throw an ugly error).
    """
    provider = _Provider(config, reactor)
    provider.check_onion_config()
    return provider

from ..interfaces import (
    IAddressFamily,
)

def _import_tor():
    # this exists to be overridden by unit tests
    try:
        from foolscap.connections import tor
        return tor
@@ -41,6 +31,25 @@ def _import_txtorcon():
    except ImportError: # pragma: no cover
        return None

def create(reactor, config, import_tor=None, import_txtorcon=None):
    """
    Create a new _Provider service (this is an IService so must be
    hooked up to a parent or otherwise started).

    If foolscap.connections.tor or txtorcon are not installed, then
    Provider.get_tor_handler() will return None. If tahoe.cfg wants
    to start an onion service too, then this `create()` method will
    throw a nice error (and startService will throw an ugly error).
    """
    if import_tor is None:
        import_tor = _import_tor
    if import_txtorcon is None:
        import_txtorcon = _import_txtorcon
    provider = _Provider(config, reactor, import_tor(), import_txtorcon())
    provider.check_onion_config()
    return provider


def data_directory(private_dir):
    return os.path.join(private_dir, "tor-statedir")

@@ -209,15 +218,16 @@ def create_config(reactor, cli_config):
    returnValue((tahoe_config_tor, tor_port, tor_location))


@implementer(IAddressFamily)
class _Provider(service.MultiService):
    def __init__(self, config, reactor):
    def __init__(self, config, reactor, tor, txtorcon):
        service.MultiService.__init__(self)
        self._config = config
        self._tor_launched = None
        self._onion_ehs = None
        self._onion_tor_control_proto = None
        self._tor = _import_tor()
        self._txtorcon = _import_txtorcon()
        self._tor = tor
        self._txtorcon = txtorcon
        self._reactor = reactor

    def _get_tor_config(self, *args, **kwargs):
@@ -228,7 +238,13 @@ class _Provider(service.MultiService):
        ep = TCP4ServerEndpoint(self._reactor, local_port, interface="127.0.0.1")
        return ep

    def get_tor_handler(self):
    def get_client_endpoint(self):
        """
        Get an ``IStreamClientEndpoint`` which will set up a connection using Tor.

        If Tor is not enabled or the dependencies are not available, return
        ``None`` instead.
        """
        enabled = self._get_tor_config("enabled", True, boolean=True)
        if not enabled:
            return None
@@ -253,6 +269,9 @@ class _Provider(service.MultiService):

        return self._tor.default_socks()

    # Backwards compatibility alias
    get_tor_handler = get_client_endpoint

    @inlineCallbacks
    def _make_control_endpoint(self, reactor, update_status):
        # this will only be called when tahoe.cfg has "[tor] launch = true"
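The reworked `create()` threads the importer callables through as parameters, so tests can inject fakes instead of monkey-patching `_import_tor`/`_import_txtorcon` at module level. A stripped-down, runnable sketch of that dependency-injection pattern (signatures simplified from the diff):

    def _import_tor():
        # the real importer; returns None when the optional
        # foolscap tor support is not installed
        try:
            from foolscap.connections import tor
            return tor
        except ImportError:
            return None

    def create(import_tor=None):
        # default to the real importer, but let a test inject a stub
        if import_tor is None:
            import_tor = _import_tor
        return import_tor()

    class FakeTor(object):
        pass

    assert isinstance(create(import_tor=lambda: FakeTor()), FakeTor)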
@@ -1,4 +1,5 @@
from past.builtins import unicode
from six import ensure_text, ensure_str

import time
import json
@@ -99,17 +100,19 @@ def get_filenode_metadata(filenode):

def boolean_of_arg(arg):
    # TODO: ""
    arg = ensure_text(arg)
    if arg.lower() not in ("true", "t", "1", "false", "f", "0", "on", "off"):
        raise WebError("invalid boolean argument: %r" % (arg,), http.BAD_REQUEST)
    return arg.lower() in ("true", "t", "1", "on")

def parse_replace_arg(replace):
    replace = ensure_text(replace)
    if replace.lower() == "only-files":
        return replace
    try:
        return boolean_of_arg(replace)
    except WebError:
        raise WebError("invalid replace= argument: %r" % (replace,), http.BAD_REQUEST)
        raise WebError("invalid replace= argument: %r" % (ensure_str(replace),), http.BAD_REQUEST)


def get_format(req, default="CHK"):
@@ -118,11 +121,11 @@ def get_format(req, default="CHK"):
        if boolean_of_arg(get_arg(req, "mutable", "false")):
            return "SDMF"
        return default
    if arg.upper() == "CHK":
    if arg.upper() == b"CHK":
        return "CHK"
    elif arg.upper() == "SDMF":
    elif arg.upper() == b"SDMF":
        return "SDMF"
    elif arg.upper() == "MDMF":
    elif arg.upper() == b"MDMF":
        return "MDMF"
    else:
        raise WebError("Unknown format: %s, I know CHK, SDMF, MDMF" % arg,
@@ -208,28 +211,44 @@ def compute_rate(bytes, seconds):
    return 1.0 * bytes / seconds

def abbreviate_rate(data):
    # 21.8kBps, 554.4kBps 4.37MBps
    """
    Convert number of bytes/second into human readable strings (unicode).

    Uses metric measures, so 1000 not 1024, e.g. 21.8kBps, 554.4kBps, 4.37MBps.

    :param data: Either ``None`` or integer.

    :return: Unicode string.
    """
    if data is None:
        return ""
        return u""
    r = float(data)
    if r > 1000000:
        return "%1.2fMBps" % (r/1000000)
        return u"%1.2fMBps" % (r/1000000)
    if r > 1000:
        return "%.1fkBps" % (r/1000)
    return "%.0fBps" % r
        return u"%.1fkBps" % (r/1000)
    return u"%.0fBps" % r

def abbreviate_size(data):
    # 21.8kB, 554.4kB 4.37MB
    """
    Convert number of bytes into human readable strings (unicode).

    Uses metric measures, so 1000 not 1024, e.g. 21.8kB, 554.4kB, 4.37MB.

    :param data: Either ``None`` or integer.

    :return: Unicode string.
    """
    if data is None:
        return ""
        return u""
    r = float(data)
    if r > 1000000000:
        return "%1.2fGB" % (r/1000000000)
        return u"%1.2fGB" % (r/1000000000)
    if r > 1000000:
        return "%1.2fMB" % (r/1000000)
        return u"%1.2fMB" % (r/1000000)
    if r > 1000:
        return "%.1fkB" % (r/1000)
    return "%.0fB" % r
        return u"%.1fkB" % (r/1000)
    return u"%.0fB" % r

def plural(sequence_or_length):
    if isinstance(sequence_or_length, int):
@@ -562,7 +581,7 @@ def _finish(result, render, request):
        Message.log(
            message_type=u"allmydata:web:common-render:DecodedURL",
        )
        _finish(redirectTo(str(result), request), render, request)
        _finish(redirectTo(result.to_text().encode("utf-8"), request), render, request)
    elif result is None:
        Message.log(
            message_type=u"allmydata:web:common-render:None",
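The switch to bytes literals in `get_format()` is not cosmetic: on Python 3, `bytes` and `str` never compare equal, so the old text-literal comparisons silently failed once request arguments started arriving as bytes. For example:

    arg = b"chk"
    assert arg.upper() != "CHK"   # bytes vs. str is always False on py3
    assert arg.upper() == b"CHK"  # bytes vs. bytes works as intended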
@@ -4,6 +4,8 @@ Common utilities that are available from Python 3.
Can eventually be merged back into allmydata.web.common.
"""

from past.builtins import unicode

from twisted.web import resource, http

from allmydata.util import abbreviate
@@ -23,7 +25,13 @@ def get_arg(req, argname, default=None, multiple=False):
    empty), starting with all those in the query args.

    :param TahoeLAFSRequest req: The request to consider.

    :return: Either bytes or tuple of bytes.
    """
    if isinstance(argname, unicode):
        argname = argname.encode("utf-8")
    if isinstance(default, unicode):
        default = default.encode("utf-8")
    results = []
    if argname in req.args:
        results.extend(req.args[argname])
@@ -62,6 +70,9 @@ class MultiFormatResource(resource.Resource, object):
        :return: The result of the selected renderer.
        """
        t = get_arg(req, self.formatArgument, self.formatDefault)
        # It's either bytes or None.
        if isinstance(t, bytes):
            t = unicode(t, "ascii")
        renderer = self._get_renderer(t)
        return renderer(req)

@@ -95,16 +106,23 @@ class MultiFormatResource(resource.Resource, object):


def abbreviate_time(data):
    """
    Convert number of seconds into human readable string.

    :param data: Either ``None`` or integer or float, seconds.

    :return: Unicode string.
    """
    # 1.23s, 790ms, 132us
    if data is None:
        return ""
        return u""
    s = float(data)
    if s >= 10:
        return abbreviate.abbreviate_time(data)
    if s >= 1.0:
        return "%.2fs" % s
        return u"%.2fs" % s
    if s >= 0.01:
        return "%.0fms" % (1000*s)
        return u"%.0fms" % (1000*s)
    if s >= 0.001:
        return "%.1fms" % (1000*s)
    return "%.0fus" % (1000000*s)
        return u"%.1fms" % (1000*s)
    return u"%.0fus" % (1000000*s)
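`get_arg()` now coerces text argument names and defaults to bytes, because Twisted keys `req.args` with bytes. A minimal sketch of just that coercion, assuming the `future`/`past` compatibility packages the codebase already depends on (`normalize_arg` is an illustrative helper, not the real signature):

    from past.builtins import unicode  # plain str on Python 3

    def normalize_arg(argname, default=None):
        # mirror the hunk above: encode text inputs so lookups
        # against the bytes-keyed req.args dictionary succeed
        if isinstance(argname, unicode):
            argname = argname.encode("utf-8")
        if isinstance(default, unicode):
            default = default.encode("utf-8")
        return argname, default

    assert normalize_arg(u"t", u"json") == (b"t", b"json")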
@@ -1,6 +1,6 @@
from past.builtins import unicode

import json
import urllib
from urllib.parse import quote as url_quote
from datetime import timedelta

from zope.interface import implementer
@@ -20,7 +20,7 @@ from twisted.web.template import (
from hyperlink import URL
from twisted.python.filepath import FilePath

from allmydata.util import base32
from allmydata.util import base32, jsonbytes as json
from allmydata.util.encodingutil import (
    to_bytes,
    quote_output,
@@ -109,7 +109,7 @@ class DirectoryNodeHandler(ReplaceMeMixin, Resource, object):
        # or no further children) renders "this" page. We also need
        # to reject "/uri/URI:DIR2:..//", so we look at postpath.
        name = name.decode('utf8')
        if not name and req.postpath != ['']:
        if not name and req.postpath != [b'']:
            return self

        # Rejecting URIs that contain empty path pieces (for example:
@@ -135,7 +135,7 @@ class DirectoryNodeHandler(ReplaceMeMixin, Resource, object):
        terminal = (req.prepath + req.postpath)[-1].decode('utf8') == name
        nonterminal = not terminal #len(req.postpath) > 0

        t = get_arg(req, "t", "").strip()
        t = get_arg(req, b"t", b"").strip()
        if isinstance(node_or_failure, Failure):
            f = node_or_failure
            f.trap(NoSuchChildError)
@@ -217,7 +217,7 @@ class DirectoryNodeHandler(ReplaceMeMixin, Resource, object):
    @render_exception
    def render_GET(self, req):
        # This is where all of the directory-related ?t=* code goes.
        t = get_arg(req, "t", "").strip()
        t = unicode(get_arg(req, b"t", b"").strip(), "ascii")

        # t=info contains variable ophandles, t=rename-form contains the name
        # of the child being renamed. Neither is allowed an ETag.
@@ -225,7 +225,7 @@ class DirectoryNodeHandler(ReplaceMeMixin, Resource, object):
        if not self.node.is_mutable() and t in FIXED_OUTPUT_TYPES:
            si = self.node.get_storage_index()
            if si and req.setETag('DIR:%s-%s' % (base32.b2a(si), t or "")):
                return ""
                return b""

        if not t:
            # render the directory as HTML
@@ -255,7 +255,7 @@ class DirectoryNodeHandler(ReplaceMeMixin, Resource, object):

    @render_exception
    def render_PUT(self, req):
        t = get_arg(req, "t", "").strip()
        t = get_arg(req, b"t", b"").strip()
        replace = parse_replace_arg(get_arg(req, "replace", "true"))

        if t == "mkdir":
@@ -275,7 +275,7 @@ class DirectoryNodeHandler(ReplaceMeMixin, Resource, object):

    @render_exception
    def render_POST(self, req):
        t = get_arg(req, "t", "").strip()
        t = unicode(get_arg(req, b"t", b"").strip(), "ascii")

        if t == "mkdir":
            d = self._POST_mkdir(req)
@@ -732,7 +732,7 @@ class DirectoryAsHTML(Element):
            return ""
        rocap = self.node.get_readonly_uri()
        root = get_root(req)
        uri_link = "%s/uri/%s/" % (root, urllib.quote(rocap))
        uri_link = "%s/uri/%s/" % (root, url_quote(rocap))
        return tag(tags.a("Read-Only Version", href=uri_link))

    @renderer
@@ -754,10 +754,10 @@ class DirectoryAsHTML(Element):
        called by the 'children' renderer)
        """
        name = name.encode("utf-8")
        nameurl = urllib.quote(name, safe="") # encode any slashes too
        nameurl = url_quote(name, safe="") # encode any slashes too

        root = get_root(req)
        here = "{}/uri/{}/".format(root, urllib.quote(self.node.get_uri()))
        here = "{}/uri/{}/".format(root, url_quote(self.node.get_uri()))
        if self.node.is_unknown() or self.node.is_readonly():
            unlink = "-"
            rename = "-"
@@ -814,7 +814,7 @@ class DirectoryAsHTML(Element):

        assert IFilesystemNode.providedBy(target), target
        target_uri = target.get_uri() or ""
        quoted_uri = urllib.quote(target_uri, safe="") # escape slashes too
        quoted_uri = url_quote(target_uri, safe="") # escape slashes too

        if IMutableFileNode.providedBy(target):
            # to prevent javascript in displayed .html files from stealing a
@@ -835,7 +835,7 @@ class DirectoryAsHTML(Element):

        elif IDirectoryNode.providedBy(target):
            # directory
            uri_link = "%s/uri/%s/" % (root, urllib.quote(target_uri))
            uri_link = "%s/uri/%s/" % (root, url_quote(target_uri))
            slots["filename"] = tags.a(name, href=uri_link)
            if not target.is_mutable():
                dirtype = "DIR-IMM"
@@ -871,7 +871,7 @@ class DirectoryAsHTML(Element):
            slots["size"] = "-"
        # use a directory-relative info link, so we can extract both the
        # writecap and the readcap
        info_link = "%s?t=info" % urllib.quote(name)
        info_link = "%s?t=info" % url_quote(name)

        if info_link:
            slots["info"] = tags.a("More Info", href=info_link)
@@ -888,7 +888,7 @@ class DirectoryAsHTML(Element):
        # because action="." doesn't get us back to the dir page (but
        # instead /uri itself)
        root = get_root(req)
        here = "{}/uri/{}/".format(root, urllib.quote(self.node.get_uri()))
        here = "{}/uri/{}/".format(root, url_quote(self.node.get_uri()))

        if self.node.is_readonly():
            return tags.div("No upload forms: directory is read-only")
@@ -1005,7 +1005,7 @@ def _directory_json_metadata(req, dirnode):
    d = dirnode.list()
    def _got(children):
        kids = {}
        for name, (childnode, metadata) in children.iteritems():
        for name, (childnode, metadata) in children.items():
            assert IFilesystemNode.providedBy(childnode), childnode
            rw_uri = childnode.get_write_uri()
            ro_uri = childnode.get_readonly_uri()
@@ -1166,13 +1166,13 @@ def _cap_to_link(root, path, cap):
        if isinstance(cap_obj, (CHKFileURI, WriteableSSKFileURI, ReadonlySSKFileURI)):
            uri_link = root_url.child(
                u"file",
                u"{}".format(urllib.quote(cap)),
                u"{}".format(urllib.quote(path[-1])),
                u"{}".format(url_quote(cap)),
                u"{}".format(url_quote(path[-1])),
            )
        else:
            uri_link = root_url.child(
                u"uri",
                u"{}".format(urllib.quote(cap, safe="")),
                u"{}".format(url_quote(cap, safe="")),
            )
        return tags.a(cap, href=uri_link.to_text())
    else:
@@ -1363,7 +1363,7 @@ class ManifestStreamer(dirnode.DeepStats):

        j = json.dumps(d, ensure_ascii=True)
        assert "\n" not in j
        self.req.write(j+"\n")
        self.req.write(j.encode("utf-8")+b"\n")

    def finish(self):
        stats = dirnode.DeepStats.get_results(self)
@@ -1372,8 +1372,8 @@ class ManifestStreamer(dirnode.DeepStats):
        }
        j = json.dumps(d, ensure_ascii=True)
        assert "\n" not in j
        self.req.write(j+"\n")
        return ""
        self.req.write(j.encode("utf-8")+b"\n")
        return b""

@implementer(IPushProducer)
class DeepCheckStreamer(dirnode.DeepStats):
@@ -1441,7 +1441,7 @@ class DeepCheckStreamer(dirnode.DeepStats):
    def write_line(self, data):
        j = json.dumps(data, ensure_ascii=True)
        assert "\n" not in j
        self.req.write(j+"\n")
        self.req.write(j.encode("utf-8")+b"\n")

    def finish(self):
        stats = dirnode.DeepStats.get_results(self)
@@ -1450,8 +1450,8 @@ class DeepCheckStreamer(dirnode.DeepStats):
        }
        j = json.dumps(d, ensure_ascii=True)
        assert "\n" not in j
        self.req.write(j+"\n")
        return ""
        self.req.write(j.encode("utf-8")+b"\n")
        return b""


class UnknownNodeHandler(Resource, object):
@@ -1464,7 +1464,7 @@ class UnknownNodeHandler(Resource, object):

    @render_exception
    def render_GET(self, req):
        t = get_arg(req, "t", "").strip()
        t = unicode(get_arg(req, "t", "").strip(), "ascii")
        if t == "info":
            return MoreInfo(self.node)
        if t == "json":
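The mechanical `urllib.quote` → `url_quote` substitutions track Python 3's move of `quote()` into `urllib.parse`; the `safe=""` variants escape "/" as well, so a capability string fits inside a single URL path segment. For example:

    from urllib.parse import quote as url_quote

    cap = "URI:DIR2:abc/def"
    assert url_quote(cap, safe="") == "URI%3ADIR2%3Aabc%2Fdef"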
@@ -1,5 +1,4 @@

import json
from past.builtins import unicode, long

from twisted.web import http, static
from twisted.internet import defer
@@ -41,6 +40,8 @@ from allmydata.web.check_results import (
    LiteralCheckResultsRenderer,
)
from allmydata.web.info import MoreInfo
from allmydata.util import jsonbytes as json


class ReplaceMeMixin(object):
    def replace_me_with_a_child(self, req, client, replace):
@@ -117,7 +118,7 @@ class PlaceHolderNodeHandler(Resource, ReplaceMeMixin):

    @render_exception
    def render_PUT(self, req):
        t = get_arg(req, "t", "").strip()
        t = get_arg(req, b"t", b"").strip()
        replace = parse_replace_arg(get_arg(req, "replace", "true"))

        assert self.parentnode and self.name
@@ -133,9 +134,9 @@ class PlaceHolderNodeHandler(Resource, ReplaceMeMixin):

    @render_exception
    def render_POST(self, req):
        t = get_arg(req, "t", "").strip()
        replace = boolean_of_arg(get_arg(req, "replace", "true"))
        if t == "upload":
        t = get_arg(req, b"t", b"").strip()
        replace = boolean_of_arg(get_arg(req, b"replace", b"true"))
        if t == b"upload":
            # like PUT, but get the file data from an HTML form's input field.
            # We could get here from POST /uri/mutablefilecap?t=upload,
            # or POST /uri/path/file?t=upload, or
@@ -179,7 +180,7 @@ class FileNodeHandler(Resource, ReplaceMeMixin, object):

    @render_exception
    def render_GET(self, req):
        t = get_arg(req, "t", "").strip()
        t = unicode(get_arg(req, b"t", b"").strip(), "ascii")

        # t=info contains variable ophandles, so is not allowed an ETag.
        FIXED_OUTPUT_TYPES = ["", "json", "uri", "readonly-uri"]
@@ -237,19 +238,19 @@ class FileNodeHandler(Resource, ReplaceMeMixin, object):

    @render_exception
    def render_HEAD(self, req):
        t = get_arg(req, "t", "").strip()
        t = get_arg(req, b"t", b"").strip()
        if t:
            raise WebError("HEAD file: bad t=%s" % t)
        filename = get_arg(req, "filename", self.name) or "unknown"
        filename = get_arg(req, b"filename", self.name) or "unknown"
        d = self.node.get_best_readable_version()
        d.addCallback(lambda dn: FileDownloader(dn, filename))
        return d

    @render_exception
    def render_PUT(self, req):
        t = get_arg(req, "t", "").strip()
        replace = parse_replace_arg(get_arg(req, "replace", "true"))
        offset = parse_offset_arg(get_arg(req, "offset", None))
        t = get_arg(req, b"t", b"").strip()
        replace = parse_replace_arg(get_arg(req, b"replace", b"true"))
        offset = parse_offset_arg(get_arg(req, b"offset", None))

        if not t:
            if not replace:
@@ -290,11 +291,11 @@ class FileNodeHandler(Resource, ReplaceMeMixin, object):

    @render_exception
    def render_POST(self, req):
        t = get_arg(req, "t", "").strip()
        replace = boolean_of_arg(get_arg(req, "replace", "true"))
        if t == "check":
        t = get_arg(req, b"t", b"").strip()
        replace = boolean_of_arg(get_arg(req, b"replace", b"true"))
        if t == b"check":
            d = self._POST_check(req)
        elif t == "upload":
        elif t == b"upload":
            # like PUT, but get the file data from an HTML form's input field
            # We could get here from POST /uri/mutablefilecap?t=upload,
            # or POST /uri/path/file?t=upload, or
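Importing `jsonbytes` under the name `json` lets every existing `json.dumps()` call site keep working while gaining tolerance for bytes values. A rough sketch of what such a bytes-tolerant encoder has to do (`BytesJSONEncoderSketch` is illustrative, not the real `BytesJSONEncoder`):

    import json

    class BytesJSONEncoderSketch(json.JSONEncoder):
        # stdlib json raises TypeError on bytes; decode them to
        # text instead, mirroring what a bytes-aware encoder needs
        def default(self, o):
            if isinstance(o, bytes):
                return o.decode("utf-8")
            return json.JSONEncoder.default(self, o)

    out = json.dumps({"cap": b"URI:CHK:example"}, cls=BytesJSONEncoderSketch)
    assert out == '{"cap": "URI:CHK:example"}'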