mirror of https://github.com/tahoe-lafs/tahoe-lafs.git
synced 2024-12-19 13:07:56 +00:00

Merge branch 'master' into 3399.mypy

This commit is contained in: commit 7d468cde19

CREDITS (7 lines changed)
@@ -206,4 +206,9 @@ D: various bug-fixes and features
 N: Viktoriia Savchuk
 W: https://twitter.com/viktoriiasvchk
 D: Developer community focused improvements on the README file.
+
+N: Lukas Pirl
+E: tahoe@lukas-pirl.de
+W: http://lukas-pirl.de
+D: Buildslaves (Debian, Fedora, CentOS; 2016-2021)
@@ -67,12 +67,12 @@ Here's how it works:

 A "storage grid" is made up of a number of storage servers. A storage server
 has direct attached storage (typically one or more hard disks). A "gateway"
 communicates with storage nodes, and uses them to provide access to the
-grid over protocols such as HTTP(S), SFTP or FTP.
+grid over protocols such as HTTP(S) and SFTP.

 Note that you can find "client" used to refer to gateway nodes (which act as
 a client to storage servers), and also to processes or programs connecting to
 a gateway node and performing operations on the grid -- for example, a CLI
-command, Web browser, SFTP client, or FTP client.
+command, Web browser, or SFTP client.

 Users do not rely on storage servers to provide *confidentiality* nor
 *integrity* for their data -- instead all of the data is encrypted and
@@ -81,7 +81,6 @@ Client/server nodes provide one or more of the following services:

 * web-API service
 * SFTP service
-* FTP service
 * helper service
 * storage service.
@@ -708,12 +707,12 @@ CLI
 file store, uploading/downloading files, and creating/running Tahoe
 nodes. See :doc:`frontends/CLI` for details.

-SFTP, FTP
+SFTP

-  Tahoe can also run both SFTP and FTP servers, and map a username/password
+  Tahoe can also run SFTP servers, and map a username/password
   pair to a top-level Tahoe directory. See :doc:`frontends/FTP-and-SFTP`
-  for instructions on configuring these services, and the ``[sftpd]`` and
-  ``[ftpd]`` sections of ``tahoe.cfg``.
+  for instructions on configuring this service, and the ``[sftpd]``
+  section of ``tahoe.cfg``.


 Storage Server Configuration
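For orientation, a minimal ``[sftpd]`` stanza for the surviving frontend might look like the following sketch. The key names match the SFTP frontend documentation and the ``test_ftp_create`` test further down in this diff; the port number and file paths are illustrative assumptions::

  [sftpd]
  enabled = true
  port = tcp:8022:interface=127.0.0.1
  host_pubkey_file = private/ssh_host_rsa_key.pub
  host_privkey_file = private/ssh_host_rsa_key
  accounts.file = private/accounts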
@@ -1,22 +1,21 @@
 .. -*- coding: utf-8-with-signature -*-

-=================================
-Tahoe-LAFS SFTP and FTP Frontends
-=================================
+========================
+Tahoe-LAFS SFTP Frontend
+========================

-1. `SFTP/FTP Background`_
+1. `SFTP Background`_
 2. `Tahoe-LAFS Support`_
 3. `Creating an Account File`_
 4. `Running An Account Server (accounts.url)`_
 5. `Configuring SFTP Access`_
-6. `Configuring FTP Access`_
-7. `Dependencies`_
-8. `Immutable and Mutable Files`_
-9. `Known Issues`_
+6. `Dependencies`_
+7. `Immutable and Mutable Files`_
+8. `Known Issues`_
-SFTP/FTP Background
-===================
+SFTP Background
+===============

 FTP is the venerable internet file-transfer protocol, first developed in
 1971. The FTP server usually listens on port 21. A separate connection is

@@ -33,20 +32,18 @@ Both FTP and SFTP were developed assuming a UNIX-like server, with accounts
 and passwords, octal file modes (user/group/other, read/write/execute), and
 ctime/mtime timestamps.

-We recommend SFTP over FTP, because the protocol is better, and the server
-implementation in Tahoe-LAFS is more complete. See `Known Issues`_, below,
-for details.
+Previous versions of Tahoe-LAFS supported FTP, but now only the superior SFTP
+frontend is supported. See `Known Issues`_, below, for details on the
+limitations of SFTP.

 Tahoe-LAFS Support
 ==================

 All Tahoe-LAFS client nodes can run a frontend SFTP server, allowing regular
 SFTP clients (like ``/usr/bin/sftp``, the ``sshfs`` FUSE plugin, and many
-others) to access the file store. They can also run an FTP server, so FTP
-clients (like ``/usr/bin/ftp``, ``ncftp``, and others) can too. These
-frontends sit at the same level as the web-API interface.
+others) to access the file store.

-Since Tahoe-LAFS does not use user accounts or passwords, the SFTP/FTP
+Since Tahoe-LAFS does not use user accounts or passwords, the SFTP
 servers must be configured with a way to first authenticate a user (confirm
 that a prospective client has a legitimate claim to whatever authorities we
 might grant a particular user), and second to decide what directory cap
@@ -173,39 +170,6 @@ clients and with the sshfs filesystem, see wiki:SftpFrontend_

 .. _wiki:SftpFrontend: https://tahoe-lafs.org/trac/tahoe-lafs/wiki/SftpFrontend

-Configuring FTP Access
-======================
-
-To enable the FTP server with an accounts file, add the following lines to
-the BASEDIR/tahoe.cfg file::
-
-  [ftpd]
-  enabled = true
-  port = tcp:8021:interface=127.0.0.1
-  accounts.file = private/accounts
-
-The FTP server will listen on the given port number and on the loopback
-interface only. The "accounts.file" pathname will be interpreted relative to
-the node's BASEDIR.
-
-To enable the FTP server with an account server instead, provide the URL of
-that server in an "accounts.url" directive::
-
-  [ftpd]
-  enabled = true
-  port = tcp:8021:interface=127.0.0.1
-  accounts.url = https://example.com/login
-
-You can provide both accounts.file and accounts.url, although it probably
-isn't very useful except for testing.
-
-FTP provides no security, and so your password or caps could be eavesdropped
-if you connect to the FTP server remotely. The examples above include
-":interface=127.0.0.1" in the "port" option, which causes the server to only
-accept connections from localhost.
-
-Public key authentication is not supported for FTP.
-
 Dependencies
 ============
@@ -216,7 +180,7 @@ separately: debian puts it in the "python-twisted-conch" package.
 Immutable and Mutable Files
 ===========================

-All files created via SFTP (and FTP) are immutable files. However, files can
+All files created via SFTP are immutable files. However, files can
 only be created in writeable directories, which allows the directory entry to
 be relinked to a different file. Normally, when the path of an immutable file
 is opened for writing by SFTP, the directory entry is relinked to another
@@ -256,18 +220,3 @@ See also wiki:SftpFrontend_.

 .. _ticket #1059: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1059
 .. _ticket #1089: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1089
-
-Known Issues in the FTP Frontend
---------------------------------
-
-Mutable files are not supported by the FTP frontend (`ticket #680`_).
-
-Non-ASCII filenames are not supported by FTP (`ticket #682`_).
-
-The FTP frontend sometimes fails to report errors, for example if an upload
-fails because it does not meet the "servers of happiness" threshold (`ticket
-#1081`_).
-
-.. _ticket #680: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/680
-.. _ticket #682: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/682
-.. _ticket #1081: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1081
@@ -2157,7 +2157,7 @@ When modifying the file, be careful to update it atomically, otherwise a
 request may arrive while the file is only halfway written, and the partial
 file may be incorrectly parsed.

-The blacklist is applied to all access paths (including SFTP, FTP, and CLI
+The blacklist is applied to all access paths (including SFTP and CLI
 operations), not just the web-API. The blacklist also applies to directories.
 If a directory is blacklisted, the gateway will refuse access to both that
 directory and any child files/directories underneath it, when accessed via
@@ -122,7 +122,7 @@ Who should consider using a Helper?

 * clients who experience problems with TCP connection fairness: if other
   programs or machines in the same home are getting less than their fair
   share of upload bandwidth. If the connection is being shared fairly, then
-  a Tahoe upload that is happening at the same time as a single FTP upload
+  a Tahoe upload that is happening at the same time as a single SFTP upload
   should get half the bandwidth.

 * clients who have been given the helper.furl by someone who is running a
   Helper and is willing to let them use it
|
@ -23,7 +23,7 @@ Known Issues in Tahoe-LAFS v1.10.3, released 30-Mar-2016
|
||||
* `Disclosure of file through embedded hyperlinks or JavaScript in that file`_
|
||||
* `Command-line arguments are leaked to other local users`_
|
||||
* `Capabilities may be leaked to web browser phishing filter / "safe browsing" servers`_
|
||||
* `Known issues in the FTP and SFTP frontends`_
|
||||
* `Known issues in the SFTP frontend`_
|
||||
* `Traffic analysis based on sizes of files/directories, storage indices, and timing`_
|
||||
* `Privacy leak via Google Chart API link in map-update timing web page`_
|
||||
|
||||
@@ -213,8 +213,8 @@ To disable the filter in Chrome:

 ----

-Known issues in the FTP and SFTP frontends
-------------------------------------------
+Known issues in the SFTP frontend
+---------------------------------

 These are documented in :doc:`frontends/FTP-and-SFTP` and on `the
 SftpFrontend page`_ on the wiki.
@@ -207,10 +207,10 @@ create a new directory and lose the capability to it, then you cannot
 access that directory ever again.


-The SFTP and FTP frontends
---------------------------
+The SFTP frontend
+-----------------

-You can access your Tahoe-LAFS grid via any SFTP_ or FTP_ client. See
+You can access your Tahoe-LAFS grid via any SFTP_ client. See
 :doc:`frontends/FTP-and-SFTP` for how to set this up. On most Unix
 platforms, you can also use SFTP to plug Tahoe-LAFS into your computer's
 local filesystem via ``sshfs``, but see the `FAQ about performance

@@ -220,7 +220,6 @@ The SftpFrontend_ page on the wiki has more information about using SFTP with
 Tahoe-LAFS.

 .. _SFTP: https://en.wikipedia.org/wiki/SSH_file_transfer_protocol
-.. _FTP: https://en.wikipedia.org/wiki/File_Transfer_Protocol
 .. _FAQ about performance problems: https://tahoe-lafs.org/trac/tahoe-lafs/wiki/FAQ#Q23_FUSE
 .. _SftpFrontend: https://tahoe-lafs.org/trac/tahoe-lafs/wiki/SftpFrontend
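As a usage sketch of the ``sshfs`` route mentioned above (the account name, port, and mount point are illustrative assumptions; the port must match your ``[sftpd]`` configuration)::

  sshfs alice@localhost: ~/tahoe-mount -p 8022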
newsfragments/3384.minor (new file, 0 lines)
newsfragments/3529.minor (new file, 0 lines)
newsfragments/3534.minor (new file, 0 lines)
newsfragments/3566.minor (new file, 0 lines)
newsfragments/3574.minor (new file, 0 lines)
newsfragments/3575.minor (new file, 0 lines)
newsfragments/3576.minor (new file, 0 lines)
newsfragments/3577.minor (new file, 0 lines)
newsfragments/3578.minor (new file, 0 lines)
newsfragments/3582.minor (new file, 0 lines)

newsfragments/3583.removed (new file, 1 line)
@@ -0,0 +1 @@
+FTP is no longer supported by Tahoe-LAFS. Please use the SFTP support instead.

newsfragments/3587.minor (new file, 1 line)
@@ -0,0 +1 @@
+
setup.py (9 lines changed)
@@ -63,12 +63,8 @@ install_requires = [
     # version of cryptography will *really* be installed.
     "cryptography >= 2.6",

-    # * We need Twisted 10.1.0 for the FTP frontend in order for
-    #   Twisted's FTP server to support asynchronous close.
     # * The SFTP frontend depends on Twisted 11.0.0 to fix the SSH server
     #   rekeying bug <https://twistedmatrix.com/trac/ticket/4395>
-    # * The FTP frontend depends on Twisted >= 11.1.0 for
-    #   filepath.Permissions
     # * The SFTP frontend and manhole depend on the conch extra. However, we
     #   can't explicitly declare that without an undesirable dependency on gmpy,
     #   as explained in ticket #2740.

@@ -385,10 +381,7 @@ setup(name="tahoe-lafs", # also set in __init__.py
             # this version from time to time, but we will do it
             # intentionally.
             "pyflakes == 2.2.0",
-            # coverage 5.0 breaks the integration tests in some opaque way.
-            # This probably needs to be addressed in a more permanent way
-            # eventually...
-            "coverage ~= 4.5",
+            "coverage ~= 5.0",
             "mock",
             "tox",
             "pytest",
@@ -1,3 +1,14 @@
+"""
+Ported to Python 3.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+from __future__ import unicode_literals
+
+from future.utils import PY2
+if PY2:
+    from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401

 import os

@@ -34,10 +45,10 @@ class Blacklist(object):
         try:
             if self.last_mtime is None or current_mtime > self.last_mtime:
                 self.entries.clear()
-                with open(self.blacklist_fn, "r") as f:
+                with open(self.blacklist_fn, "rb") as f:
                     for line in f:
                         line = line.strip()
-                        if not line or line.startswith("#"):
+                        if not line or line.startswith(b"#"):
                             continue
                         si_s, reason = line.split(None, 1)
                         si = base32.a2b(si_s)  # must be valid base32
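To make the parsing change above concrete, here is a hedged, standalone sketch of the same loop logic (the sample entry is invented for illustration; real entries carry a valid base32 storage index):

    # The blacklist file is now opened in binary mode, so every literal in
    # the loop must be bytes. Each non-comment line has the form:
    #   <base32 storage index> <whitespace> <free-form reason>
    line = b"2kzp6lrtmtaeyvdjlu36ogzieq takedown request"  # illustrative entry
    line = line.strip()
    if line and not line.startswith(b"#"):
        si_s, reason = line.split(None, 1)  # (b"2kzp...", b"takedown request")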
@@ -86,12 +86,6 @@ _client_config = configutil.ValidConfiguration(
         "shares.total",
         "storage.plugins",
     ),
-    "ftpd": (
-        "accounts.file",
-        "accounts.url",
-        "enabled",
-        "port",
-    ),
     "storage": (
         "debug_discard",
         "enabled",

@@ -656,7 +650,6 @@ class _Client(node.Node, pollmixin.PollMixin):
                 raise ValueError("config error: helper is enabled, but tub "
                                  "is not listening ('tub.port=' is empty)")
             self.init_helper()
-        self.init_ftp_server()
         self.init_sftp_server()

         # If the node sees an exit_trigger file, it will poll every second to see

@@ -1032,18 +1025,6 @@ class _Client(node.Node, pollmixin.PollMixin):
         )
         ws.setServiceParent(self)

-    def init_ftp_server(self):
-        if self.config.get_config("ftpd", "enabled", False, boolean=True):
-            accountfile = self.config.get_config("ftpd", "accounts.file", None)
-            if accountfile:
-                accountfile = self.config.get_config_path(accountfile)
-            accounturl = self.config.get_config("ftpd", "accounts.url", None)
-            ftp_portstr = self.config.get_config("ftpd", "port", "8021")
-
-            from allmydata.frontends import ftpd
-            s = ftpd.FTPServer(self, accountfile, accounturl, ftp_portstr)
-            s.setServiceParent(self)
-
     def init_sftp_server(self):
         if self.config.get_config("sftpd", "enabled", False, boolean=True):
             accountfile = self.config.get_config("sftpd", "accounts.file", None)
@@ -1,4 +1,15 @@
-"""Implementation of the deep stats class."""
+"""Implementation of the deep stats class.
+
+Ported to Python 3.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+from __future__ import unicode_literals
+
+from future.utils import PY2
+if PY2:
+    from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401

 import math

@@ -13,7 +24,7 @@ from allmydata.util import mathutil
 class DeepStats(object):
     """Deep stats object.

-    Holds results of the deep-stats opetation.
+    Holds results of the deep-stats operation.
     Used for json generation in the API."""

     # Json API version.

@@ -121,7 +132,7 @@ class DeepStats(object):
             h[bucket] += 1

     def get_results(self):
-        """Returns deep-stats resutls."""
+        """Returns deep-stats results."""
         stats = self.stats.copy()
         for key in self.histograms:
             h = self.histograms[key]
@@ -18,7 +18,6 @@ import time
 from zope.interface import implementer
 from twisted.internet import defer
 from foolscap.api import fireEventually
-import json

 from allmydata.crypto import aes
 from allmydata.deep_stats import DeepStats

@@ -31,7 +30,7 @@ from allmydata.interfaces import IFilesystemNode, IDirectoryNode, IFileNode, \
 from allmydata.check_results import DeepCheckResults, \
     DeepCheckAndRepairResults
 from allmydata.monitor import Monitor
-from allmydata.util import hashutil, base32, log
+from allmydata.util import hashutil, base32, log, jsonbytes as json
 from allmydata.util.encodingutil import quote_output, normalize
 from allmydata.util.assertutil import precondition
 from allmydata.util.netstring import netstring, split_netstring
@@ -1,338 +0,0 @@
-from six import ensure_str
-
-from zope.interface import implementer
-from twisted.application import service, strports
-from twisted.internet import defer
-from twisted.internet.interfaces import IConsumer
-from twisted.cred import portal
-from twisted.python import filepath
-from twisted.protocols import ftp
-
-from allmydata.interfaces import IDirectoryNode, ExistingChildError, \
-     NoSuchChildError
-from allmydata.immutable.upload import FileHandle
-from allmydata.util.fileutil import EncryptedTemporaryFile
-from allmydata.util.assertutil import precondition
-
-@implementer(ftp.IReadFile)
-class ReadFile(object):
-    def __init__(self, node):
-        self.node = node
-    def send(self, consumer):
-        d = self.node.read(consumer)
-        return d # when consumed
-
-@implementer(IConsumer)
-class FileWriter(object):
-
-    def registerProducer(self, producer, streaming):
-        if not streaming:
-            raise NotImplementedError("Non-streaming producer not supported.")
-        # we write the data to a temporary file, since Tahoe can't do
-        # streaming upload yet.
-        self.f = EncryptedTemporaryFile()
-        return None
-
-    def unregisterProducer(self):
-        # the upload actually happens in WriteFile.close()
-        pass
-
-    def write(self, data):
-        self.f.write(data)
-
-@implementer(ftp.IWriteFile)
-class WriteFile(object):
-
-    def __init__(self, parent, childname, convergence):
-        self.parent = parent
-        self.childname = childname
-        self.convergence = convergence
-
-    def receive(self):
-        self.c = FileWriter()
-        return defer.succeed(self.c)
-
-    def close(self):
-        u = FileHandle(self.c.f, self.convergence)
-        d = self.parent.add_file(self.childname, u)
-        return d
-
-
-class NoParentError(Exception):
-    pass
-
-# filepath.Permissions was added in Twisted-11.1.0, which we require. Twisted
-# <15.0.0 expected an int, and only does '&' on it. Twisted >=15.0.0 expects
-# a filepath.Permissions. This satisfies both.
-
-class IntishPermissions(filepath.Permissions):
-    def __init__(self, statModeInt):
-        self._tahoe_statModeInt = statModeInt
-        filepath.Permissions.__init__(self, statModeInt)
-    def __and__(self, other):
-        return self._tahoe_statModeInt & other
-
-@implementer(ftp.IFTPShell)
-class Handler(object):
-    def __init__(self, client, rootnode, username, convergence):
-        self.client = client
-        self.root = rootnode
-        self.username = username
-        self.convergence = convergence
-
-    def makeDirectory(self, path):
-        d = self._get_root(path)
-        d.addCallback(lambda root_and_path:
-                      self._get_or_create_directories(root_and_path[0], root_and_path[1]))
-        return d
-
-    def _get_or_create_directories(self, node, path):
-        if not IDirectoryNode.providedBy(node):
-            # unfortunately it is too late to provide the name of the
-            # blocking directory in the error message.
-            raise ftp.FileExistsError("cannot create directory because there "
-                                      "is a file in the way")
-        if not path:
-            return defer.succeed(node)
-        d = node.get(path[0])
-        def _maybe_create(f):
-            f.trap(NoSuchChildError)
-            return node.create_subdirectory(path[0])
-        d.addErrback(_maybe_create)
-        d.addCallback(self._get_or_create_directories, path[1:])
-        return d
-
-    def _get_parent(self, path):
-        # fire with (parentnode, childname)
-        path = [unicode(p) for p in path]
-        if not path:
-            raise NoParentError
-        childname = path[-1]
-        d = self._get_root(path)
-        def _got_root(root_and_path):
-            (root, path) = root_and_path
-            if not path:
-                raise NoParentError
-            return root.get_child_at_path(path[:-1])
-        d.addCallback(_got_root)
-        def _got_parent(parent):
-            return (parent, childname)
-        d.addCallback(_got_parent)
-        return d
-
-    def _remove_thing(self, path, must_be_directory=False, must_be_file=False):
-        d = defer.maybeDeferred(self._get_parent, path)
-        def _convert_error(f):
-            f.trap(NoParentError)
-            raise ftp.PermissionDeniedError("cannot delete root directory")
-        d.addErrback(_convert_error)
-        def _got_parent(parent_and_childname):
-            (parent, childname) = parent_and_childname
-            d = parent.get(childname)
-            def _got_child(child):
-                if must_be_directory and not IDirectoryNode.providedBy(child):
-                    raise ftp.IsNotADirectoryError("rmdir called on a file")
-                if must_be_file and IDirectoryNode.providedBy(child):
-                    raise ftp.IsADirectoryError("rmfile called on a directory")
-                return parent.delete(childname)
-            d.addCallback(_got_child)
-            d.addErrback(self._convert_error)
-            return d
-        d.addCallback(_got_parent)
-        return d
-
-    def removeDirectory(self, path):
-        return self._remove_thing(path, must_be_directory=True)
-
-    def removeFile(self, path):
-        return self._remove_thing(path, must_be_file=True)
-
-    def rename(self, fromPath, toPath):
-        # the target directory must already exist
-        d = self._get_parent(fromPath)
-        def _got_from_parent(fromparent_and_childname):
-            (fromparent, childname) = fromparent_and_childname
-            d = self._get_parent(toPath)
-            d.addCallback(lambda toparent_and_tochildname:
-                          fromparent.move_child_to(childname,
-                              toparent_and_tochildname[0], toparent_and_tochildname[1],
-                              overwrite=False))
-            return d
-        d.addCallback(_got_from_parent)
-        d.addErrback(self._convert_error)
-        return d
-
-    def access(self, path):
-        # we allow access to everything that exists. We are required to raise
-        # an error for paths that don't exist: FTP clients (at least ncftp)
-        # uses this to decide whether to mkdir or not.
-        d = self._get_node_and_metadata_for_path(path)
-        d.addErrback(self._convert_error)
-        d.addCallback(lambda res: None)
-        return d
-
-    def _convert_error(self, f):
-        if f.check(NoSuchChildError):
-            childname = f.value.args[0].encode("utf-8")
-            msg = "'%s' doesn't exist" % childname
-            raise ftp.FileNotFoundError(msg)
-        if f.check(ExistingChildError):
-            msg = f.value.args[0].encode("utf-8")
-            raise ftp.FileExistsError(msg)
-        return f
-
-    def _get_root(self, path):
-        # return (root, remaining_path)
-        path = [unicode(p) for p in path]
-        if path and path[0] == "uri":
-            d = defer.maybeDeferred(self.client.create_node_from_uri,
-                                    str(path[1]))
-            d.addCallback(lambda root: (root, path[2:]))
-        else:
-            d = defer.succeed((self.root,path))
-        return d
-
-    def _get_node_and_metadata_for_path(self, path):
-        d = self._get_root(path)
-        def _got_root(root_and_path):
-            (root,path) = root_and_path
-            if path:
-                return root.get_child_and_metadata_at_path(path)
-            else:
-                return (root,{})
-        d.addCallback(_got_root)
-        return d
-
-    def _populate_row(self, keys, childnode_and_metadata):
-        (childnode, metadata) = childnode_and_metadata
-        values = []
-        isdir = bool(IDirectoryNode.providedBy(childnode))
-        for key in keys:
-            if key == "size":
-                if isdir:
-                    value = 0
-                else:
-                    value = childnode.get_size() or 0
-            elif key == "directory":
-                value = isdir
-            elif key == "permissions":
-                # Twisted-14.0.2 (and earlier) expected an int, and used it
-                # in a rendering function that did (mode & NUMBER).
-                # Twisted-15.0.0 expects a
-                # twisted.python.filepath.Permissions , and calls its
-                # .shorthand() method. This provides both.
-                value = IntishPermissions(0o600)
-            elif key == "hardlinks":
-                value = 1
-            elif key == "modified":
-                # follow sftpd convention (i.e. linkmotime in preference to mtime)
-                if "linkmotime" in metadata.get("tahoe", {}):
-                    value = metadata["tahoe"]["linkmotime"]
-                else:
-                    value = metadata.get("mtime", 0)
-            elif key == "owner":
-                value = self.username
-            elif key == "group":
-                value = self.username
-            else:
-                value = "??"
-            values.append(value)
-        return values
-
-    def stat(self, path, keys=()):
-        # for files only, I think
-        d = self._get_node_and_metadata_for_path(path)
-        def _render(node_and_metadata):
-            (node, metadata) = node_and_metadata
-            assert not IDirectoryNode.providedBy(node)
-            return self._populate_row(keys, (node,metadata))
-        d.addCallback(_render)
-        d.addErrback(self._convert_error)
-        return d
-
-    def list(self, path, keys=()):
-        # the interface claims that path is a list of unicodes, but in
-        # practice it is not
-        d = self._get_node_and_metadata_for_path(path)
-        def _list(node_and_metadata):
-            (node, metadata) = node_and_metadata
-            if IDirectoryNode.providedBy(node):
-                return node.list()
-            return { path[-1]: (node, metadata) } # need last-edge metadata
-        d.addCallback(_list)
-        def _render(children):
-            results = []
-            for (name, childnode) in children.iteritems():
-                # the interface claims that the result should have a unicode
-                # object as the name, but it fails unless you give it a
-                # bytestring
-                results.append( (name.encode("utf-8"),
-                                 self._populate_row(keys, childnode) ) )
-            return results
-        d.addCallback(_render)
-        d.addErrback(self._convert_error)
-        return d
-
-    def openForReading(self, path):
-        d = self._get_node_and_metadata_for_path(path)
-        d.addCallback(lambda node_and_metadata: ReadFile(node_and_metadata[0]))
-        d.addErrback(self._convert_error)
-        return d
-
-    def openForWriting(self, path):
-        path = [unicode(p) for p in path]
-        if not path:
-            raise ftp.PermissionDeniedError("cannot STOR to root directory")
-        childname = path[-1]
-        d = self._get_root(path)
-        def _got_root(root_and_path):
-            (root, path) = root_and_path
-            if not path:
-                raise ftp.PermissionDeniedError("cannot STOR to root directory")
-            return root.get_child_at_path(path[:-1])
-        d.addCallback(_got_root)
-        def _got_parent(parent):
-            return WriteFile(parent, childname, self.convergence)
-        d.addCallback(_got_parent)
-        return d
-
-from allmydata.frontends.auth import AccountURLChecker, AccountFileChecker, NeedRootcapLookupScheme
-
-
-@implementer(portal.IRealm)
-class Dispatcher(object):
-    def __init__(self, client):
-        self.client = client
-
-    def requestAvatar(self, avatarID, mind, interface):
-        assert interface == ftp.IFTPShell
-        rootnode = self.client.create_node_from_uri(avatarID.rootcap)
-        convergence = self.client.convergence
-        s = Handler(self.client, rootnode, avatarID.username, convergence)
-        def logout(): pass
-        return (interface, s, None)
-
-
-class FTPServer(service.MultiService):
-    def __init__(self, client, accountfile, accounturl, ftp_portstr):
-        precondition(isinstance(accountfile, (unicode, type(None))), accountfile)
-        service.MultiService.__init__(self)
-
-        r = Dispatcher(client)
-        p = portal.Portal(r)
-
-        if accountfile:
-            c = AccountFileChecker(self, accountfile)
-            p.registerChecker(c)
-        if accounturl:
-            c = AccountURLChecker(self, accounturl)
-            p.registerChecker(c)
-        if not accountfile and not accounturl:
-            # we could leave this anonymous, with just the /uri/CAP form
-            raise NeedRootcapLookupScheme("must provide some translation")
-
-        f = ftp.FTPFactory(p)
-        # strports requires a native string.
-        ftp_portstr = ensure_str(ftp_portstr)
-        s = strports.service(ftp_portstr, f)
-        s.setServiceParent(self)
@@ -255,11 +255,11 @@ class Encoder(object):
             # captures the slot, not the value
             #d.addCallback(lambda res: self.do_segment(i))
             # use this form instead:
-            d.addCallback(lambda res, i=i: self._encode_segment(i))
+            d.addCallback(lambda res, i=i: self._encode_segment(i, is_tail=False))
             d.addCallback(self._send_segment, i)
             d.addCallback(self._turn_barrier)
         last_segnum = self.num_segments - 1
-        d.addCallback(lambda res: self._encode_tail_segment(last_segnum))
+        d.addCallback(lambda res: self._encode_segment(last_segnum, is_tail=True))
         d.addCallback(self._send_segment, last_segnum)
         d.addCallback(self._turn_barrier)

@@ -317,8 +317,24 @@ class Encoder(object):
             dl.append(d)
         return self._gather_responses(dl)

-    def _encode_segment(self, segnum):
-        codec = self._codec
+    def _encode_segment(self, segnum, is_tail):
+        """
+        Encode one segment of input into the configured number of shares.
+
+        :param segnum: Ostensibly, the number of the segment to encode.  In
+            reality, this parameter is ignored and the *next* segment is
+            encoded and returned.
+
+        :param bool is_tail: ``True`` if this is the last segment, ``False``
+            otherwise.
+
+        :return: A ``Deferred`` which fires with a two-tuple.  The first
+            element is a list of string-y objects representing the encoded
+            segment data for one of the shares.  The second element is a list
+            of integers giving the share numbers of the shares in the first
+            element.
+        """
+        codec = self._tail_codec if is_tail else self._codec
         start = time.time()

         # the ICodecEncoder API wants to receive a total of self.segment_size

@@ -350,9 +366,11 @@ class Encoder(object):
         # footprint to 430KiB at the expense of more hash-tree overhead.

         d = self._gather_data(self.required_shares, input_piece_size,
-                              crypttext_segment_hasher)
+                              crypttext_segment_hasher, allow_short=is_tail)
         def _done_gathering(chunks):
             for c in chunks:
+                # If is_tail then a short trailing chunk will have been padded
+                # by _gather_data
                 assert len(c) == input_piece_size
             self._crypttext_hashes.append(crypttext_segment_hasher.digest())
         # during this call, we hit 5*segsize memory

@@ -365,31 +383,6 @@ class Encoder(object):
         d.addCallback(_done)
         return d

-    def _encode_tail_segment(self, segnum):
-
-        start = time.time()
-        codec = self._tail_codec
-        input_piece_size = codec.get_block_size()
-
-        crypttext_segment_hasher = hashutil.crypttext_segment_hasher()
-
-        d = self._gather_data(self.required_shares, input_piece_size,
-                              crypttext_segment_hasher, allow_short=True)
-        def _done_gathering(chunks):
-            for c in chunks:
-                # a short trailing chunk will have been padded by
-                # _gather_data
-                assert len(c) == input_piece_size
-            self._crypttext_hashes.append(crypttext_segment_hasher.digest())
-            return codec.encode(chunks)
-        d.addCallback(_done_gathering)
-        def _done(res):
-            elapsed = time.time() - start
-            self._times["cumulative_encoding"] += elapsed
-            return res
-        d.addCallback(_done)
-        return d
-
     def _gather_data(self, num_chunks, input_chunk_size,
                      crypttext_segment_hasher,
                      allow_short=False):
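The merged method relies on ``_gather_data`` to pad a short tail chunk up to the codec's block size, as the new comment notes. A hedged, self-contained illustration of that padding step (``pad_chunk`` is an invented name, not the real helper):

    def pad_chunk(chunk, block_size):
        # A tail segment is usually shorter than a full segment, so its last
        # chunk is zero-padded up to the tail codec's block size before
        # erasure coding; this is what keeps len(c) == input_piece_size true.
        assert len(chunk) <= block_size
        return chunk + b"\x00" * (block_size - len(chunk))

    assert pad_chunk(b"tail", 8) == b"tail\x00\x00\x00\x00"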
@@ -16,7 +16,7 @@ from six import ensure_text, ensure_str
 import time
 from zope.interface import implementer
 from twisted.application import service
-from foolscap.api import Referenceable, eventually
+from foolscap.api import Referenceable
 from allmydata.interfaces import InsufficientVersionError
 from allmydata.introducer.interfaces import IIntroducerClient, \
     RIIntroducerSubscriberClient_v2

@@ -24,6 +24,9 @@ from allmydata.introducer.common import sign_to_foolscap, unsign_from_foolscap,\
     get_tubid_string_from_ann
 from allmydata.util import log, yamlutil, connection_status
 from allmydata.util.rrefutil import add_version_to_remote_reference
+from allmydata.util.observer import (
+    ObserverList,
+)
 from allmydata.crypto.error import BadSignature
 from allmydata.util.assertutil import precondition

@@ -62,8 +65,7 @@ class IntroducerClient(service.Service, Referenceable):
         self._publisher = None
         self._since = None

-        self._local_subscribers = [] # (servicename,cb,args,kwargs) tuples
-        self._subscribed_service_names = set()
+        self._local_subscribers = {} # {servicename: ObserverList}
         self._subscriptions = set() # requests we've actually sent

         # _inbound_announcements remembers one announcement per

@@ -177,21 +179,21 @@ class IntroducerClient(service.Service, Referenceable):
         return log.msg(*args, **kwargs)

     def subscribe_to(self, service_name, callback, *args, **kwargs):
-        self._local_subscribers.append( (service_name,callback,args,kwargs) )
-        self._subscribed_service_names.add(service_name)
+        obs = self._local_subscribers.setdefault(service_name, ObserverList())
+        obs.subscribe(lambda key_s, ann: callback(key_s, ann, *args, **kwargs))
         self._maybe_subscribe()
         for index,(ann,key_s,when) in list(self._inbound_announcements.items()):
             precondition(isinstance(key_s, bytes), key_s)
             servicename = index[0]
             if servicename == service_name:
-                eventually(callback, key_s, ann, *args, **kwargs)
+                obs.notify(key_s, ann)

     def _maybe_subscribe(self):
         if not self._publisher:
             self.log("want to subscribe, but no introducer yet",
                      level=log.NOISY)
             return
-        for service_name in self._subscribed_service_names:
+        for service_name in self._local_subscribers:
             if service_name in self._subscriptions:
                 continue
             self._subscriptions.add(service_name)

@@ -270,7 +272,7 @@ class IntroducerClient(service.Service, Referenceable):
         precondition(isinstance(key_s, bytes), key_s)
         self._debug_counts["inbound_announcement"] += 1
         service_name = str(ann["service-name"])
-        if service_name not in self._subscribed_service_names:
+        if service_name not in self._local_subscribers:
             self.log("announcement for a service we don't care about [%s]"
                      % (service_name,), level=log.UNUSUAL, umid="dIpGNA")
             self._debug_counts["wrong_service"] += 1

@@ -341,9 +343,9 @@ class IntroducerClient(service.Service, Referenceable):
     def _deliver_announcements(self, key_s, ann):
         precondition(isinstance(key_s, bytes), key_s)
         service_name = str(ann["service-name"])
-        for (service_name2,cb,args,kwargs) in self._local_subscribers:
-            if service_name2 == service_name:
-                eventually(cb, key_s, ann, *args, **kwargs)
+        obs = self._local_subscribers.get(service_name)
+        if obs is not None:
+            obs.notify(key_s, ann)

     def connection_status(self):
         assert self.running # startService builds _introducer_reconnector
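The refactoring above replaces hand-rolled (service_name, callback, args, kwargs) tuples with one ``ObserverList`` per service name. A hedged sketch of the observer pattern being relied on (the real class lives in ``allmydata.util.observer``; this is a simplification, not its actual code):

    class ObserverList(object):
        # Subscribers are plain callables; notify() fans an event out to all
        # of them. subscribe_to() above wraps each callback in a lambda so
        # the extra args/kwargs are bound at subscription time.
        def __init__(self):
            self._observers = []

        def subscribe(self, observer):
            self._observers.append(observer)

        def notify(self, *args, **kwargs):
            for observer in self._observers:
                observer(*args, **kwargs)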
@@ -679,8 +679,8 @@ def create_connection_handlers(config, i2p_provider, tor_provider):
     # create that handler, so hints which want it will be ignored.
     handlers = {
         "tcp": _make_tcp_handler(),
-        "tor": tor_provider.get_tor_handler(),
-        "i2p": i2p_provider.get_i2p_handler(),
+        "tor": tor_provider.get_client_endpoint(),
+        "i2p": i2p_provider.get_client_endpoint(),
     }
     log.msg(
         format="built Foolscap connection handlers for: %(known_handlers)s",
@@ -5,6 +5,8 @@ try:
 except ImportError:
     pass

+from future.utils import bchr
+
 # do not import any allmydata modules at this level. Do that from inside
 # individual functions instead.
 import struct, time, os, sys

@@ -910,7 +912,7 @@ def corrupt_share(options):
         f = open(fn, "rb+")
         f.seek(offset)
         d = f.read(1)
-        d = chr(ord(d) ^ 0x01)
+        d = bchr(ord(d) ^ 0x01)
         f.seek(offset)
         f.write(d)
         f.close()

@@ -925,7 +927,7 @@ def corrupt_share(options):
         f.seek(m.DATA_OFFSET)
         data = f.read(2000)
         # make sure this slot contains an SDMF share
-        assert data[0] == b"\x00", "non-SDMF mutable shares not supported"
+        assert data[0:1] == b"\x00", "non-SDMF mutable shares not supported"
         f.close()

         (version, ig_seqnum, ig_roothash, ig_IV, ig_k, ig_N, ig_segsize,
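Both ``corrupt_share`` fixes are Python 3 bytes-handling corrections. A hedged, standalone illustration of the underlying behavior (not from the diff):

    from future.utils import bchr

    data = b"\x00abc"
    # Indexing bytes on Python 3 yields an int, so `data[0] == b"\x00"` is
    # always False there; a one-byte slice compares correctly on both majors.
    assert data[0:1] == b"\x00"
    # chr() returns text on Python 3; bchr() returns a single byte on both.
    assert bchr(ord(b"a") ^ 0x01) == b"`"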
@@ -1,11 +1,16 @@
+"""
+Ported to Python 3.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+from __future__ import unicode_literals
+
-import time
-
 # Python 2 compatibility
 from future.utils import PY2
 if PY2:
-    from future.builtins import str  # noqa: F401
+    from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401
+
+import time

 from twisted.application import service
 from twisted.application.internet import TimerService
@@ -11,7 +11,7 @@ __all__ = [
     "skipIf",
 ]

-from past.builtins import chr as byteschr
+from past.builtins import chr as byteschr, unicode

 import os, random, struct
 import six

@@ -825,13 +825,18 @@ class WebErrorMixin(object):
                        code=None, substring=None, response_substring=None,
                        callable=None, *args, **kwargs):
         # returns a Deferred with the response body
-        assert substring is None or isinstance(substring, str)
+        if isinstance(substring, bytes):
+            substring = unicode(substring, "ascii")
+        if isinstance(response_substring, unicode):
+            response_substring = response_substring.encode("ascii")
+        assert substring is None or isinstance(substring, unicode)
+        assert response_substring is None or isinstance(response_substring, bytes)
         assert callable
         def _validate(f):
             if code is not None:
-                self.failUnlessEqual(f.value.status, str(code), which)
+                self.failUnlessEqual(f.value.status, b"%d" % code, which)
             if substring:
-                code_string = str(f)
+                code_string = unicode(f)
                 self.failUnless(substring in code_string,
                                 "%s: substring '%s' not in '%s'"
                                 % (which, substring, code_string))
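The ``b"%d" % code`` change works because bytes interpolation is available on Python 3.5+ (and on Python 2); a one-line illustration:

    assert b"%d" % 404 == b"404"  # Twisted delivers HTTP status values as bytes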
@@ -1,7 +1,8 @@
 from __future__ import print_function

-from future.utils import PY2, native_str
+from future.utils import PY2, native_str, bchr, binary_type
 from future.builtins import str as future_str
+from past.builtins import unicode

 import os
 import time

@@ -20,9 +21,6 @@ from twisted.trial import unittest
 from ..util.assertutil import precondition
 from ..scripts import runner
 from allmydata.util.encodingutil import unicode_platform, get_filesystem_encoding, get_io_encoding
-# Imported for backwards compatibility:
-from future.utils import bord, bchr, binary_type
-from past.builtins import unicode


 def skip_if_cannot_represent_filename(u):

@@ -183,13 +181,12 @@ def insecurerandstr(n):
     return b''.join(map(bchr, map(randrange, [0]*n, [256]*n)))

 def flip_bit(good, which):
-    # TODO Probs need to update with bchr/bord as with flip_one_bit, below.
-    # flip the low-order bit of good[which]
+    """Flip the low-order bit of good[which]."""
     if which == -1:
-        pieces = good[:which], good[-1:], ""
+        pieces = good[:which], good[-1:], b""
     else:
         pieces = good[:which], good[which:which+1], good[which+1:]
-    return pieces[0] + chr(ord(pieces[1]) ^ 0x01) + pieces[2]
+    return pieces[0] + bchr(ord(pieces[1]) ^ 0x01) + pieces[2]

 def flip_one_bit(s, offset=0, size=None):
     """ flip one random bit of the string s, in a byte greater than or equal to offset and less

@@ -198,7 +195,7 @@ def flip_one_bit(s, offset=0, size=None):
     if size is None:
         size=len(s)-offset
     i = randrange(offset, offset+size)
-    result = s[:i] + bchr(bord(s[i])^(0x01<<randrange(0, 8))) + s[i+1:]
+    result = s[:i] + bchr(ord(s[i:i+1])^(0x01<<randrange(0, 8))) + s[i+1:]
     assert result != s, "Internal error -- flip_one_bit() produced the same string as its input: %s == %s" % (result, s)
     return result
@@ -24,6 +24,7 @@ from future.utils import PY2
 if PY2:
     from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401
 from past.builtins import unicode
+from six import ensure_text

 import os
 from base64 import b32encode

@@ -617,8 +618,7 @@ class GridTestMixin(object):
                    method="GET", clientnum=0, **kwargs):
         # if return_response=True, this fires with (data, statuscode,
         # respheaders) instead of just data.
-        assert not isinstance(urlpath, unicode)
-        url = self.client_baseurls[clientnum] + urlpath
+        url = self.client_baseurls[clientnum] + ensure_text(urlpath)

         response = yield treq.request(method, url, persistent=False,
                                       allow_redirects=followRedirect,
@@ -173,7 +173,7 @@ class WebResultsRendering(unittest.TestCase):
         return c

     def render_json(self, resource):
-        return self.successResultOf(render(resource, {"output": ["json"]}))
+        return self.successResultOf(render(resource, {b"output": [b"json"]}))

     def render_element(self, element, args=None):
         if args is None:

@@ -186,7 +186,7 @@ class WebResultsRendering(unittest.TestCase):
         html = self.render_element(lcr)
         self.failUnlessIn(b"Literal files are always healthy", html)

-        html = self.render_element(lcr, args={"return_to": ["FOOURL"]})
+        html = self.render_element(lcr, args={b"return_to": [b"FOOURL"]})
         self.failUnlessIn(b"Literal files are always healthy", html)
         self.failUnlessIn(b'<a href="FOOURL">Return to file.</a>', html)

@@ -269,7 +269,7 @@ class WebResultsRendering(unittest.TestCase):
         self.failUnlessIn("File Check Results for SI=2k6avp", s) # abbreviated
         self.failUnlessIn("Not Recoverable! : rather dead", s)

-        html = self.render_element(w, args={"return_to": ["FOOURL"]})
+        html = self.render_element(w, args={b"return_to": [b"FOOURL"]})
         self.failUnlessIn(b'<a href="FOOURL">Return to file/directory.</a>',
                           html)
@@ -51,7 +51,6 @@ from allmydata.nodemaker import (
     NodeMaker,
 )
 from allmydata.node import OldConfigError, UnescapedHashError, create_node_dir
-from allmydata.frontends.auth import NeedRootcapLookupScheme
 from allmydata import client
 from allmydata.storage_client import (
     StorageClientConfig,

@@ -424,88 +423,8 @@ class Basic(testutil.ReallyEqualMixin, unittest.TestCase):
         expected = fileutil.abspath_expanduser_unicode(u"relative", abs_basedir)
         self.failUnlessReallyEqual(w.staticdir, expected)

-    # TODO: also test config options for SFTP.
-
-    @defer.inlineCallbacks
-    def test_ftp_create(self):
-        """
-        configuration for sftpd results in it being started
-        """
-        root = FilePath(self.mktemp())
-        root.makedirs()
-        accounts = root.child(b"sftp-accounts")
-        accounts.touch()
-
-        data = FilePath(__file__).sibling(b"data")
-        privkey = data.child(b"openssh-rsa-2048.txt")
-        pubkey = data.child(b"openssh-rsa-2048.pub.txt")
-
-        basedir = u"client.Basic.test_ftp_create"
-        create_node_dir(basedir, "testing")
-        with open(os.path.join(basedir, "tahoe.cfg"), "w") as f:
-            f.write((
-                '[sftpd]\n'
-                'enabled = true\n'
-                'accounts.file = {}\n'
-                'host_pubkey_file = {}\n'
-                'host_privkey_file = {}\n'
-            ).format(accounts.path, pubkey.path, privkey.path))
-
-        client_node = yield client.create_client(
-            basedir,
-        )
-        sftp = client_node.getServiceNamed("frontend:sftp")
-        self.assertIs(sftp.parent, client_node)
-
-
-    @defer.inlineCallbacks
-    def test_ftp_auth_keyfile(self):
-        """
-        ftpd accounts.file is parsed properly
-        """
-        basedir = u"client.Basic.test_ftp_auth_keyfile"
-        os.mkdir(basedir)
-        fileutil.write(os.path.join(basedir, "tahoe.cfg"),
-                       (BASECONFIG +
-                        "[ftpd]\n"
-                        "enabled = true\n"
-                        "port = tcp:0:interface=127.0.0.1\n"
-                        "accounts.file = private/accounts\n"))
-        os.mkdir(os.path.join(basedir, "private"))
-        fileutil.write(os.path.join(basedir, "private", "accounts"), "\n")
-        c = yield client.create_client(basedir) # just make sure it can be instantiated
-        del c
-
-    @defer.inlineCallbacks
-    def test_ftp_auth_url(self):
-        """
-        ftpd accounts.url is parsed properly
-        """
-        basedir = u"client.Basic.test_ftp_auth_url"
-        os.mkdir(basedir)
-        fileutil.write(os.path.join(basedir, "tahoe.cfg"),
-                       (BASECONFIG +
-                        "[ftpd]\n"
-                        "enabled = true\n"
-                        "port = tcp:0:interface=127.0.0.1\n"
-                        "accounts.url = http://0.0.0.0/\n"))
-        c = yield client.create_client(basedir) # just make sure it can be instantiated
-        del c
-
-    @defer.inlineCallbacks
-    def test_ftp_auth_no_accountfile_or_url(self):
-        """
-        ftpd requires some way to look up accounts
-        """
-        basedir = u"client.Basic.test_ftp_auth_no_accountfile_or_url"
-        os.mkdir(basedir)
-        fileutil.write(os.path.join(basedir, "tahoe.cfg"),
-                       (BASECONFIG +
-                        "[ftpd]\n"
-                        "enabled = true\n"
-                        "port = tcp:0:interface=127.0.0.1\n"))
-        with self.assertRaises(NeedRootcapLookupScheme):
-            yield client.create_client(basedir)
+    # TODO: also test config options for SFTP. See Git history for deleted FTP
+    # tests that could be used as basis for these tests.

     @defer.inlineCallbacks
     def _storage_dir_test(self, basedir, storage_path, expected_path):
@@ -1,149 +1,69 @@
 import os
-import mock

 from twisted.trial import unittest
-from twisted.internet import reactor, endpoints, defer
-from twisted.internet.interfaces import IStreamClientEndpoint
+from twisted.internet import reactor

 from foolscap.connections import tcp

+from testtools.matchers import (
+    MatchesDict,
+    IsInstance,
+    Equals,
+)
+
 from ..node import PrivacyError, config_from_string
 from ..node import create_connection_handlers
 from ..node import create_main_tub
 from ..util.i2p_provider import create as create_i2p_provider
 from ..util.tor_provider import create as create_tor_provider

+from .common import (
+    SyncTestCase,
+    ConstantAddresses,
+)
+

 BASECONFIG = ""


-class TCP(unittest.TestCase):
-
-    def test_default(self):
+class CreateConnectionHandlersTests(SyncTestCase):
+    """
+    Tests for the Foolscap connection handlers returned by
+    ``create_connection_handlers``.
+    """
+    def test_foolscap_handlers(self):
+        """
+        ``create_connection_handlers`` returns a Foolscap connection handlers
+        dictionary mapping ``"tcp"`` to
+        ``foolscap.connections.tcp.DefaultTCP``, ``"tor"`` to the supplied Tor
+        provider's handler, and ``"i2p"`` to the supplied I2P provider's
+        handler.
+        """
         config = config_from_string(
             "fake.port",
             "no-basedir",
             BASECONFIG,
         )
-        _, foolscap_handlers = create_connection_handlers(config, mock.Mock(), mock.Mock())
-        self.assertIsInstance(
-            foolscap_handlers['tcp'],
-            tcp.DefaultTCP,
-        )
+        tor_endpoint = object()
+        tor = ConstantAddresses(handler=tor_endpoint)
+        i2p_endpoint = object()
+        i2p = ConstantAddresses(handler=i2p_endpoint)
+        _, foolscap_handlers = create_connection_handlers(
+            config,
+            i2p,
+            tor,
+        )
+        self.assertThat(
+            foolscap_handlers,
+            MatchesDict({
+                "tcp": IsInstance(tcp.DefaultTCP),
+                "i2p": Equals(i2p_endpoint),
+                "tor": Equals(tor_endpoint),
+            }),
+        )


 class Tor(unittest.TestCase):

     def test_disabled(self):
         config = config_from_string(
             "fake.port",
             "no-basedir",
             BASECONFIG + "[tor]\nenabled = false\n",
         )
         tor_provider = create_tor_provider(reactor, config)
         h = tor_provider.get_tor_handler()
         self.assertEqual(h, None)

-    def test_unimportable(self):
-        with mock.patch("allmydata.util.tor_provider._import_tor",
-                        return_value=None):
-            config = config_from_string("fake.port", "no-basedir", BASECONFIG)
-            tor_provider = create_tor_provider(reactor, config)
-            h = tor_provider.get_tor_handler()
-        self.assertEqual(h, None)
-
-    def test_default(self):
-        h1 = mock.Mock()
-        with mock.patch("foolscap.connections.tor.default_socks",
-                        return_value=h1) as f:
-
-            config = config_from_string("fake.port", "no-basedir", BASECONFIG)
-            tor_provider = create_tor_provider(reactor, config)
-            h = tor_provider.get_tor_handler()
-        self.assertEqual(f.mock_calls, [mock.call()])
-        self.assertIdentical(h, h1)
-
-    def _do_test_launch(self, executable):
-        # the handler is created right away
-        config = BASECONFIG+"[tor]\nlaunch = true\n"
-        if executable:
-            config += "tor.executable = %s\n" % executable
-        h1 = mock.Mock()
-        with mock.patch("foolscap.connections.tor.control_endpoint_maker",
-                        return_value=h1) as f:
-
-            config = config_from_string("fake.port", ".", config)
-            tp = create_tor_provider("reactor", config)
-            h = tp.get_tor_handler()
-
-        private_dir = config.get_config_path("private")
-        exp = mock.call(tp._make_control_endpoint,
-                        takes_status=True)
-        self.assertEqual(f.mock_calls, [exp])
-        self.assertIdentical(h, h1)
-
-        # later, when Foolscap first connects, Tor should be launched
-        reactor = "reactor"
-        tcp = object()
-        tcep = object()
-        launch_tor = mock.Mock(return_value=defer.succeed(("ep_desc", tcp)))
-        cfs = mock.Mock(return_value=tcep)
-        with mock.patch("allmydata.util.tor_provider._launch_tor", launch_tor):
-            with mock.patch("allmydata.util.tor_provider.clientFromString", cfs):
-                d = tp._make_control_endpoint(reactor,
-                                              update_status=lambda status: None)
-                cep = self.successResultOf(d)
-        launch_tor.assert_called_with(reactor, executable,
-                                      os.path.abspath(private_dir),
-                                      tp._txtorcon)
-        cfs.assert_called_with(reactor, "ep_desc")
-        self.assertIs(cep, tcep)
-
-    def test_launch(self):
-        self._do_test_launch(None)
-
-    def test_launch_executable(self):
-        self._do_test_launch("/special/tor")
-
-    def test_socksport_unix_endpoint(self):
-        h1 = mock.Mock()
-        with mock.patch("foolscap.connections.tor.socks_endpoint",
-                        return_value=h1) as f:
-            config = config_from_string(
-                "fake.port",
-                "no-basedir",
-                BASECONFIG + "[tor]\nsocks.port = unix:/var/lib/fw-daemon/tor_socks.socket\n",
-            )
-            tor_provider = create_tor_provider(reactor, config)
-            h = tor_provider.get_tor_handler()
-        self.assertTrue(IStreamClientEndpoint.providedBy(f.mock_calls[0][1][0]))
-        self.assertIdentical(h, h1)
-
-    def test_socksport_endpoint(self):
-        h1 = mock.Mock()
-        with mock.patch("foolscap.connections.tor.socks_endpoint",
-                        return_value=h1) as f:
-            config = config_from_string(
-                "fake.port",
-                "no-basedir",
-                BASECONFIG + "[tor]\nsocks.port = tcp:127.0.0.1:1234\n",
-            )
-            tor_provider = create_tor_provider(reactor, config)
-            h = tor_provider.get_tor_handler()
-        self.assertTrue(IStreamClientEndpoint.providedBy(f.mock_calls[0][1][0]))
-        self.assertIdentical(h, h1)
-
-    def test_socksport_endpoint_otherhost(self):
-        h1 = mock.Mock()
-        with mock.patch("foolscap.connections.tor.socks_endpoint",
-                        return_value=h1) as f:
-            config = config_from_string(
-                "no-basedir",
-                "fake.port",
-                BASECONFIG + "[tor]\nsocks.port = tcp:otherhost:1234\n",
-            )
-            tor_provider = create_tor_provider(reactor, config)
-            h = tor_provider.get_tor_handler()
-        self.assertTrue(IStreamClientEndpoint.providedBy(f.mock_calls[0][1][0]))
-        self.assertIdentical(h, h1)
-
     def test_socksport_bad_endpoint(self):
         config = config_from_string(
             "fake.port",
@@ -176,73 +96,8 @@ class Tor(unittest.TestCase):
             str(ctx.exception)
         )

-    def test_controlport(self):
-        h1 = mock.Mock()
-        with mock.patch("foolscap.connections.tor.control_endpoint",
-                        return_value=h1) as f:
-            config = config_from_string(
-                "fake.port",
-                "no-basedir",
-                BASECONFIG + "[tor]\ncontrol.port = tcp:localhost:1234\n",
-            )
-            tor_provider = create_tor_provider(reactor, config)
-            h = tor_provider.get_tor_handler()
-        self.assertEqual(len(f.mock_calls), 1)
-        ep = f.mock_calls[0][1][0]
-        self.assertIsInstance(ep, endpoints.TCP4ClientEndpoint)
-        self.assertIdentical(h, h1)
-

 class I2P(unittest.TestCase):

     def test_disabled(self):
         config = config_from_string(
             "fake.port",
             "no-basedir",
             BASECONFIG + "[i2p]\nenabled = false\n",
         )
         i2p_provider = create_i2p_provider(None, config)
         h = i2p_provider.get_i2p_handler()
         self.assertEqual(h, None)

-    def test_unimportable(self):
-        config = config_from_string(
-            "fake.port",
-            "no-basedir",
-            BASECONFIG,
-        )
-        with mock.patch("allmydata.util.i2p_provider._import_i2p",
-                        return_value=None):
-            i2p_provider = create_i2p_provider(reactor, config)
-            h = i2p_provider.get_i2p_handler()
-        self.assertEqual(h, None)
-
-    def test_default(self):
-        config = config_from_string("fake.port", "no-basedir", BASECONFIG)
-        h1 = mock.Mock()
-        with mock.patch("foolscap.connections.i2p.default",
-                        return_value=h1) as f:
-            i2p_provider = create_i2p_provider(reactor, config)
-            h = i2p_provider.get_i2p_handler()
-        self.assertEqual(f.mock_calls, [mock.call(reactor, keyfile=None)])
-        self.assertIdentical(h, h1)
-
-    def test_samport(self):
-        config = config_from_string(
-            "fake.port",
-            "no-basedir",
-            BASECONFIG + "[i2p]\nsam.port = tcp:localhost:1234\n",
-        )
-        h1 = mock.Mock()
-        with mock.patch("foolscap.connections.i2p.sam_endpoint",
-                        return_value=h1) as f:
-            i2p_provider = create_i2p_provider(reactor, config)
-            h = i2p_provider.get_i2p_handler()
-
-        self.assertEqual(len(f.mock_calls), 1)
-        ep = f.mock_calls[0][1][0]
-        self.assertIsInstance(ep, endpoints.TCP4ClientEndpoint)
-        self.assertIdentical(h, h1)
-
     def test_samport_and_launch(self):
         config = config_from_string(
             "no-basedir",
|
||||
str(ctx.exception)
|
||||
)
|
||||
|
||||
def test_launch(self):
|
||||
config = config_from_string(
|
||||
"fake.port",
|
||||
"no-basedir",
|
||||
BASECONFIG + "[i2p]\nlaunch = true\n",
|
||||
)
|
||||
h1 = mock.Mock()
|
||||
with mock.patch("foolscap.connections.i2p.launch",
|
||||
return_value=h1) as f:
|
||||
i2p_provider = create_i2p_provider(reactor, config)
|
||||
h = i2p_provider.get_i2p_handler()
|
||||
exp = mock.call(i2p_configdir=None, i2p_binary=None)
|
||||
self.assertEqual(f.mock_calls, [exp])
|
||||
self.assertIdentical(h, h1)
|
||||
|
||||
def test_launch_executable(self):
|
||||
config = config_from_string(
|
||||
"fake.port",
|
||||
"no-basedir",
|
||||
BASECONFIG + "[i2p]\nlaunch = true\n" + "i2p.executable = i2p\n",
|
||||
)
|
||||
h1 = mock.Mock()
|
||||
with mock.patch("foolscap.connections.i2p.launch",
|
||||
return_value=h1) as f:
|
||||
i2p_provider = create_i2p_provider(reactor, config)
|
||||
h = i2p_provider.get_i2p_handler()
|
||||
exp = mock.call(i2p_configdir=None, i2p_binary="i2p")
|
||||
self.assertEqual(f.mock_calls, [exp])
|
||||
self.assertIdentical(h, h1)
|
||||
|
||||
def test_launch_configdir(self):
|
||||
config = config_from_string(
|
||||
"fake.port",
|
||||
"no-basedir",
|
||||
BASECONFIG + "[i2p]\nlaunch = true\n" + "i2p.configdir = cfg\n",
|
||||
)
|
||||
h1 = mock.Mock()
|
||||
with mock.patch("foolscap.connections.i2p.launch",
|
||||
return_value=h1) as f:
|
||||
i2p_provider = create_i2p_provider(reactor, config)
|
||||
h = i2p_provider.get_i2p_handler()
|
||||
exp = mock.call(i2p_configdir="cfg", i2p_binary=None)
|
||||
self.assertEqual(f.mock_calls, [exp])
|
||||
self.assertIdentical(h, h1)
|
||||
|
||||
def test_launch_configdir_and_executable(self):
|
||||
config = config_from_string(
|
||||
"no-basedir",
|
||||
"fake.port",
|
||||
BASECONFIG + "[i2p]\nlaunch = true\n" +
|
||||
"i2p.executable = i2p\n" + "i2p.configdir = cfg\n",
|
||||
)
|
||||
h1 = mock.Mock()
|
||||
with mock.patch("foolscap.connections.i2p.launch",
|
||||
return_value=h1) as f:
|
||||
i2p_provider = create_i2p_provider(reactor, config)
|
||||
h = i2p_provider.get_i2p_handler()
|
||||
exp = mock.call(i2p_configdir="cfg", i2p_binary="i2p")
|
||||
self.assertEqual(f.mock_calls, [exp])
|
||||
self.assertIdentical(h, h1)
|
||||
|
||||
def test_configdir(self):
|
||||
config = config_from_string(
|
||||
"fake.port",
|
||||
"no-basedir",
|
||||
BASECONFIG + "[i2p]\ni2p.configdir = cfg\n",
|
||||
)
|
||||
h1 = mock.Mock()
|
||||
with mock.patch("foolscap.connections.i2p.local_i2p",
|
||||
return_value=h1) as f:
|
||||
i2p_provider = create_i2p_provider(None, config)
|
||||
h = i2p_provider.get_i2p_handler()
|
||||
|
||||
self.assertEqual(f.mock_calls, [mock.call("cfg")])
|
||||
self.assertIdentical(h, h1)
|
||||
|
||||
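(Aside: as a reading aid for the launch tests above, the option plumbing they pin down amounts to two keyword arguments. The sketch below is illustrative only; the ``config.get`` access is an assumed convenience, not this module's real config API.)

    def sketch_i2p_launch_kwargs(config):
        # Per the assertions above: [i2p] i2p.configdir and i2p.executable
        # become foolscap's i2p_configdir/i2p_binary keywords, each
        # defaulting to None when unset.
        return dict(
            i2p_configdir=config.get("i2p.configdir", None),
            i2p_binary=config.get("i2p.executable", None),
        )
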
class Connections(unittest.TestCase):

    def setUp(self):
@ -341,7 +120,11 @@ class Connections(unittest.TestCase):
        self.config = config_from_string("fake.port", self.basedir, BASECONFIG)

    def test_default(self):
        default_connection_handlers, _ = create_connection_handlers(self.config, mock.Mock(), mock.Mock())
        default_connection_handlers, _ = create_connection_handlers(
            self.config,
            ConstantAddresses(handler=object()),
            ConstantAddresses(handler=object()),
        )
        self.assertEqual(default_connection_handlers["tcp"], "tcp")
        self.assertEqual(default_connection_handlers["tor"], "tor")
        self.assertEqual(default_connection_handlers["i2p"], "i2p")
@ -352,23 +135,39 @@
            "no-basedir",
            BASECONFIG + "[connections]\ntcp = tor\n",
        )
        default_connection_handlers, _ = create_connection_handlers(config, mock.Mock(), mock.Mock())
        default_connection_handlers, _ = create_connection_handlers(
            config,
            ConstantAddresses(handler=object()),
            ConstantAddresses(handler=object()),
        )

        self.assertEqual(default_connection_handlers["tcp"], "tor")
        self.assertEqual(default_connection_handlers["tor"], "tor")
        self.assertEqual(default_connection_handlers["i2p"], "i2p")

    def test_tor_unimportable(self):
        with mock.patch("allmydata.util.tor_provider._import_tor",
                        return_value=None):
            self.config = config_from_string(
                "fake.port",
                "no-basedir",
                BASECONFIG + "[connections]\ntcp = tor\n",
        """
        If the configuration calls for substituting Tor for TCP and
        ``foolscap.connections.tor`` is not importable then
        ``create_connection_handlers`` raises ``ValueError`` with a message
        explaining this makes Tor unusable.
        """
        self.config = config_from_string(
            "fake.port",
            "no-basedir",
            BASECONFIG + "[connections]\ntcp = tor\n",
        )
        tor_provider = create_tor_provider(
            reactor,
            self.config,
            import_tor=lambda: None,
        )
        with self.assertRaises(ValueError) as ctx:
            default_connection_handlers, _ = create_connection_handlers(
                self.config,
                i2p_provider=ConstantAddresses(handler=object()),
                tor_provider=tor_provider,
            )
        with self.assertRaises(ValueError) as ctx:
            tor_provider = create_tor_provider(reactor, self.config)
            default_connection_handlers, _ = create_connection_handlers(self.config, mock.Mock(), tor_provider)
        self.assertEqual(
            str(ctx.exception),
            "'tahoe.cfg [connections] tcp='"
@ -383,7 +182,11 @@
            BASECONFIG + "[connections]\ntcp = unknown\n",
        )
        with self.assertRaises(ValueError) as ctx:
            create_connection_handlers(config, mock.Mock(), mock.Mock())
            create_connection_handlers(
                config,
                ConstantAddresses(handler=object()),
                ConstantAddresses(handler=object()),
            )
        self.assertIn("'tahoe.cfg [connections] tcp='", str(ctx.exception))
        self.assertIn("uses unknown handler type 'unknown'", str(ctx.exception))

@ -393,7 +196,11 @@
            "no-basedir",
            BASECONFIG + "[connections]\ntcp = disabled\n",
        )
        default_connection_handlers, _ = create_connection_handlers(config, mock.Mock(), mock.Mock())
        default_connection_handlers, _ = create_connection_handlers(
            config,
            ConstantAddresses(handler=object()),
            ConstantAddresses(handler=object()),
        )
        self.assertEqual(default_connection_handlers["tcp"], None)
        self.assertEqual(default_connection_handlers["tor"], "tor")
        self.assertEqual(default_connection_handlers["i2p"], "i2p")
@ -408,7 +215,11 @@ class Privacy(unittest.TestCase):
        )

        with self.assertRaises(PrivacyError) as ctx:
            create_connection_handlers(config, mock.Mock(), mock.Mock())
            create_connection_handlers(
                config,
                ConstantAddresses(handler=object()),
                ConstantAddresses(handler=object()),
            )

        self.assertEqual(
            str(ctx.exception),
@ -423,7 +234,11 @@
            BASECONFIG + "[connections]\ntcp = disabled\n" +
            "[node]\nreveal-IP-address = false\n",
        )
        default_connection_handlers, _ = create_connection_handlers(config, mock.Mock(), mock.Mock())
        default_connection_handlers, _ = create_connection_handlers(
            config,
            ConstantAddresses(handler=object()),
            ConstantAddresses(handler=object()),
        )
        self.assertEqual(default_connection_handlers["tcp"], None)

    def test_tub_location_auto(self):
@ -434,7 +249,14 @@
        )

        with self.assertRaises(PrivacyError) as ctx:
            create_main_tub(config, {}, {}, {}, mock.Mock(), mock.Mock())
            create_main_tub(
                config,
                tub_options={},
                default_connection_handlers={},
                foolscap_connection_handlers={},
                i2p_provider=ConstantAddresses(),
                tor_provider=ConstantAddresses(),
            )
        self.assertEqual(
            str(ctx.exception),
            "tub.location uses AUTO",
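(Aside: the Connections tests above all assert variations of one small mapping. The sketch below restates it for orientation only, with handler names standing in for the real handler objects.)

    # With no [connections] overrides, each hint type uses its own handler;
    # "tcp = tor" reroutes plain TCP hints over Tor, "tcp = disabled" maps
    # them to None, and any other name is rejected as an unknown handler.
    DEFAULT_HANDLERS = {"tcp": "tcp", "tor": "tor", "i2p": "i2p"}

    def sketch_apply_overrides(overrides):
        handlers = dict(DEFAULT_HANDLERS)
        for hint_type, name in overrides.items():
            handlers[hint_type] = None if name == "disabled" else name
        return handlers
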
@ -1,106 +0,0 @@

from twisted.trial import unittest

from allmydata.frontends import ftpd
from allmydata.immutable import upload
from allmydata.mutable import publish
from allmydata.test.no_network import GridTestMixin
from allmydata.test.common_util import ReallyEqualMixin

class Handler(GridTestMixin, ReallyEqualMixin, unittest.TestCase):
    """
    This is a no-network unit test of ftpd.Handler and the abstractions
    it uses.
    """

    FALL_OF_BERLIN_WALL = 626644800
    TURN_OF_MILLENIUM = 946684800

    def _set_up(self, basedir, num_clients=1, num_servers=10):
        self.basedir = "ftp/" + basedir
        self.set_up_grid(num_clients=num_clients, num_servers=num_servers,
                         oneshare=True)

        self.client = self.g.clients[0]
        self.username = "alice"
        self.convergence = ""

        d = self.client.create_dirnode()
        def _created_root(node):
            self.root = node
            self.root_uri = node.get_uri()
            self.handler = ftpd.Handler(self.client, self.root, self.username,
                                        self.convergence)
        d.addCallback(_created_root)
        return d

    def _set_metadata(self, name, metadata):
        """Set metadata for `name', avoiding MetadataSetter's timestamp reset
        behavior."""
        def _modifier(old_contents, servermap, first_time):
            children = self.root._unpack_contents(old_contents)
            children[name] = (children[name][0], metadata)
            return self.root._pack_contents(children)

        return self.root._node.modify(_modifier)

    def _set_up_tree(self):
        # add immutable file at root
        immutable = upload.Data("immutable file contents", None)
        d = self.root.add_file(u"immutable", immutable)

        # `mtime' and `linkmotime' both set
        md_both = {'mtime': self.FALL_OF_BERLIN_WALL,
                   'tahoe': {'linkmotime': self.TURN_OF_MILLENIUM}}
        d.addCallback(lambda _: self._set_metadata(u"immutable", md_both))

        # add link to root from root
        d.addCallback(lambda _: self.root.set_node(u"loop", self.root))

        # `mtime' set, but no `linkmotime'
        md_just_mtime = {'mtime': self.FALL_OF_BERLIN_WALL, 'tahoe': {}}
        d.addCallback(lambda _: self._set_metadata(u"loop", md_just_mtime))

        # add mutable file at root
        mutable = publish.MutableData("mutable file contents")
        d.addCallback(lambda _: self.client.create_mutable_file(mutable))
        d.addCallback(lambda node: self.root.set_node(u"mutable", node))

        # neither `mtime' nor `linkmotime' set
        d.addCallback(lambda _: self._set_metadata(u"mutable", {}))

        return d

    def _compareDirLists(self, actual, expected):
        actual_list = sorted(actual)
        expected_list = sorted(expected)

        self.failUnlessReallyEqual(len(actual_list), len(expected_list),
                                   "%r is wrong length, expecting %r" % (
                                       actual_list, expected_list))
        for (a, b) in zip(actual_list, expected_list):
            (name, meta) = a
            (expected_name, expected_meta) = b
            self.failUnlessReallyEqual(name, expected_name)
            self.failUnlessReallyEqual(meta, expected_meta)

    def test_list(self):
        keys = ("size", "directory", "permissions", "hardlinks", "modified",
                "owner", "group", "unexpected")
        d = self._set_up("list")

        d.addCallback(lambda _: self._set_up_tree())
        d.addCallback(lambda _: self.handler.list("", keys=keys))

        expected_root = [
            ('loop',
             [0, True, ftpd.IntishPermissions(0o600), 1, self.FALL_OF_BERLIN_WALL, 'alice', 'alice', '??']),
            ('immutable',
             [23, False, ftpd.IntishPermissions(0o600), 1, self.TURN_OF_MILLENIUM, 'alice', 'alice', '??']),
            ('mutable',
             # timestamp should be 0 if no timestamp metadata is present
             [0, False, ftpd.IntishPermissions(0o600), 1, 0, 'alice', 'alice', '??'])]

        d.addCallback(lambda root: self._compareDirLists(root, expected_root))

        return d
@ -102,9 +102,35 @@ class HashUtilTests(unittest.TestCase):
        got_a = base32.b2a(got)
        self.failUnlessEqual(got_a, expected_a)

    def test_known_answers(self):
        # assert backwards compatibility
    def test_storage_index_hash_known_answers(self):
        """
        Verify backwards compatibility by comparing ``storage_index_hash`` outputs
        for some well-known (to us) inputs.
        """
        # This is a marginal case. b"" is not a valid aes 128 key. The
        # implementation does nothing to avoid producing a result for it,
        # though.
        self._testknown(hashutil.storage_index_hash, b"qb5igbhcc5esa6lwqorsy7e6am", b"")

        # This is a little bit more realistic though clearly this is a poor key choice.
        self._testknown(hashutil.storage_index_hash, b"wvggbrnrezdpa5yayrgiw5nzja", b"x" * 16)

        # Here's a much more realistic key that I generated by reading some
        # bytes from /dev/urandom. I computed the expected hash value twice.
        # First using hashlib.sha256 and then with sha256sum(1). The input
        # string given to the hash function was "43:<storage index tag>,<key>"
        # in each case.
        self._testknown(
            hashutil.storage_index_hash,
            b"aarbseqqrpsfowduchcjbonscq",
            base32.a2b(b"2ckv3dfzh6rgjis6ogfqhyxnzy"),
        )

    def test_known_answers(self):
        """
        Verify backwards compatibility by comparing hash outputs for some
        well-known (to us) inputs.
        """
        self._testknown(hashutil.block_hash, b"msjr5bh4evuh7fa3zw7uovixfbvlnstr5b65mrerwfnvjxig2jvq", b"")
        self._testknown(hashutil.uri_extension_hash, b"wthsu45q7zewac2mnivoaa4ulh5xvbzdmsbuyztq2a5fzxdrnkka", b"")
        self._testknown(hashutil.plaintext_hash, b"5lz5hwz3qj3af7n6e3arblw7xzutvnd3p3fjsngqjcb7utf3x3da", b"")

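(Aside: the tagged-hash construction those comments describe can be reproduced with nothing but hashlib. This is an illustrative sketch; the netstring framing and 16-byte truncation are inferred from the "43:<storage index tag>,<key>" comment above, not code from this commit.)

    import hashlib
    from base64 import b32encode

    def tagged_hash_sketch(tag, data, truncate_to=16):
        # Netstring-frame the tag ("<len>:<tag>,") and SHA-256 it with the
        # data, then truncate; "43:<storage index tag>,<key>" above is
        # exactly such a framed input.
        netstring = b"%d:%s," % (len(tag), tag)
        return hashlib.sha256(netstring + data).digest()[:truncate_to]

    # A 16-byte digest base32-encodes to a 26-character string like the
    # known answers asserted above: b32encode(tagged_hash_sketch(tag, key)).
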
@ -277,6 +277,20 @@ class Provider(unittest.TestCase):
        i2p.local_i2p.assert_called_with("configdir")
        self.assertIs(h, handler)

    def test_handler_launch_executable(self):
        i2p = mock.Mock()
        handler = object()
        i2p.launch = mock.Mock(return_value=handler)
        reactor = object()

        with mock_i2p(i2p):
            p = i2p_provider.create(reactor,
                                    FakeConfig(launch=True,
                                               **{"i2p.executable": "myi2p"}))
        h = p.get_i2p_handler()
        self.assertIs(h, handler)
        i2p.launch.assert_called_with(i2p_configdir=None, i2p_binary="myi2p")

    def test_handler_default(self):
        i2p = mock.Mock()
        handler = object()

@ -15,7 +15,12 @@ from six import ensure_binary, ensure_text
import os, re, itertools
from base64 import b32decode
import json
from mock import Mock, patch
from operator import (
    setitem,
)
from functools import (
    partial,
)

from testtools.matchers import (
    Is,
@ -84,7 +89,8 @@ class Node(testutil.SignalMixin, testutil.ReallyEqualMixin, AsyncTestCase):

    def test_introducer_clients_unloadable(self):
        """
        Error if introducers.yaml exists but we can't read it
        ``create_introducer_clients`` raises ``EnvironmentError`` if
        ``introducers.yaml`` exists but we can't read it.
        """
        basedir = u"introducer.IntroducerNode.test_introducer_clients_unloadable"
        os.mkdir(basedir)
@ -94,17 +100,10 @@ class Node(testutil.SignalMixin, testutil.ReallyEqualMixin, AsyncTestCase):
            f.write(u'---\n')
        os.chmod(yaml_fname, 0o000)
        self.addCleanup(lambda: os.chmod(yaml_fname, 0o700))
        # just mocking the yaml failure, as "yamlutil.safe_load" only
        # returns None on some platforms for unreadable files

        with patch("allmydata.client.yamlutil") as p:
            p.safe_load = Mock(return_value=None)

            fake_tub = Mock()
            config = read_config(basedir, "portnum")

            with self.assertRaises(EnvironmentError):
                create_introducer_clients(config, fake_tub)
        config = read_config(basedir, "portnum")
        with self.assertRaises(EnvironmentError):
            create_introducer_clients(config, Tub())

    @defer.inlineCallbacks
    def test_furl(self):
@ -1037,23 +1036,53 @@ class Signatures(SyncTestCase):
                          unsign_from_foolscap, (bad_msg, sig, b"v999-key"))

    def test_unsigned_announcement(self):
        ed25519.verifying_key_from_string(b"pub-v0-wodst6ly4f7i7akt2nxizsmmy2rlmer6apltl56zctn67wfyu5tq")
        mock_tub = Mock()
        """
        An incorrectly signed announcement is not delivered to subscribers.
        """
        private_key, public_key = ed25519.create_signing_keypair()
        public_key_str = ed25519.string_from_verifying_key(public_key)

        ic = IntroducerClient(
            mock_tub,
            Tub(),
            "pb://",
            u"fake_nick",
            "0.0.0",
            "1.2.3",
            (0, u"i am a nonce"),
            "invalid",
            FilePath(self.mktemp()),
        )
        received = {}
        ic.subscribe_to("good-stuff", partial(setitem, received))

        # Deliver a good message to prove our test code is valid.
        ann = {"service-name": "good-stuff", "payload": "hello"}
        ann_t = sign_to_foolscap(ann, private_key)
        ic.got_announcements([ann_t])

        self.assertEqual(
            {public_key_str[len("pub-"):]: ann},
            received,
        )
        received.clear()

        # Now deliver one without a valid signature and observe that it isn't
        # delivered to the subscriber.
        ann = {"service-name": "good-stuff", "payload": "bad stuff"}
        (msg, sig, key) = sign_to_foolscap(ann, private_key)
        # Drop a base32 word from the middle of the key to invalidate the
        # signature.
        sig_a = bytearray(sig)
        sig_a[20:22] = []
        sig = bytes(sig_a)
        ann_t = (msg, sig, key)
        ic.got_announcements([ann_t])

        # The received announcements dict should remain empty because we
        # should not receive the announcement with the invalid signature.
        self.assertEqual(
            {},
            received,
        )
        self.assertEqual(0, ic._debug_counts["inbound_announcement"])
        ic.got_announcements([
            (b"message", b"v0-aaaaaaa", b"v0-wodst6ly4f7i7akt2nxizsmmy2rlmer6apltl56zctn67wfyu5tq")
        ])
        # we should have rejected this announcement due to a bad signature
        self.assertEqual(0, ic._debug_counts["inbound_announcement"])


# add tests of StorageFarmBroker: if it receives duplicate announcements, it
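(Aside: why clipping two bytes out of the signature, as the test above does, must cause rejection can be demonstrated with any ed25519 implementation. This sketch uses the ``cryptography`` library directly and is illustrative only, not the introducer's code path.)

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    private_key = Ed25519PrivateKey.generate()
    message = b'{"service-name": "good-stuff", "payload": "hello"}'
    signature = private_key.sign(message)

    tampered = signature[:20] + signature[22:]  # drop two bytes, like the test
    try:
        private_key.public_key().verify(tampered, message)
    except InvalidSignature:
        # Expected: a mangled signature never verifies, so the announcement
        # carrying it is never delivered to subscribers.
        pass
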
@ -101,3 +101,56 @@ class Observer(unittest.TestCase):
        d.addCallback(_step2)
        d.addCallback(_check2)
        return d

    def test_observer_list_reentrant(self):
        """
        ``ObserverList`` is reentrant.
        """
        observed = []

        def observer_one():
            obs.unsubscribe(observer_one)

        def observer_two():
            observed.append(None)

        obs = observer.ObserverList()
        obs.subscribe(observer_one)
        obs.subscribe(observer_two)
        obs.notify()

        self.assertEqual([None], observed)

    def test_observer_list_observer_errors(self):
        """
        An error in an earlier observer does not prevent notification from being
        delivered to a later observer.
        """
        observed = []

        def observer_one():
            raise Exception("Some problem here")

        def observer_two():
            observed.append(None)

        obs = observer.ObserverList()
        obs.subscribe(observer_one)
        obs.subscribe(observer_two)
        obs.notify()

        self.assertEqual([None], observed)
        self.assertEqual(1, len(self.flushLoggedErrors(Exception)))

    def test_observer_list_propagate_keyboardinterrupt(self):
        """
        ``KeyboardInterrupt`` escapes ``ObserverList.notify``.
        """
        def observer_one():
            raise KeyboardInterrupt()

        obs = observer.ObserverList()
        obs.subscribe(observer_one)

        with self.assertRaises(KeyboardInterrupt):
            obs.notify()
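(Aside: the three properties these tests pin down fit in a few lines. This is an illustrative stand-in, not allmydata.util.observer's actual implementation.)

    class SketchObserverList(object):
        """Minimal observer list with the behaviors tested above."""
        def __init__(self):
            self._observers = []

        def subscribe(self, observer):
            self._observers.append(observer)

        def unsubscribe(self, observer):
            self._observers.remove(observer)

        def notify(self, *args, **kwargs):
            # Iterate over a snapshot so observers may subscribe or
            # unsubscribe reentrantly during notification.
            for observer in list(self._observers):
                try:
                    observer(*args, **kwargs)
                except Exception:
                    # A failing observer must not starve later ones; real
                    # code would log this (the test asserts one logged error).
                    pass
                # KeyboardInterrupt is not an Exception subclass, so it
                # escapes notify(), as the last test requires.
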
@ -1,3 +1,14 @@
"""
Ported to Python 3.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

from future.utils import PY2
if PY2:
    from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401

from twisted.trial import unittest
from twisted.application import service

@ -70,7 +70,7 @@ def renderJSON(resource):
    """
    Render a JSON from the given resource.
    """
    return render(resource, {"t": ["json"]})
    return render(resource, {b"t": [b"json"]})

class MyBucketCountingCrawler(BucketCountingCrawler):
    def finished_prefix(self, cycle, prefix):
@ -349,6 +349,10 @@ class Provider(unittest.TestCase):
        cfs2.assert_called_with(reactor, ep_desc)

    def test_handler_socks_endpoint(self):
        """
        If not configured otherwise, the Tor provider returns a Socks-based
        handler.
        """
        tor = mock.Mock()
        handler = object()
        tor.socks_endpoint = mock.Mock(return_value=handler)
@ -365,6 +369,46 @@ class Provider(unittest.TestCase):
        tor.socks_endpoint.assert_called_with(ep)
        self.assertIs(h, handler)

    def test_handler_socks_unix_endpoint(self):
        """
        ``socks.port`` can be configured as a UNIX client endpoint.
        """
        tor = mock.Mock()
        handler = object()
        tor.socks_endpoint = mock.Mock(return_value=handler)
        ep = object()
        cfs = mock.Mock(return_value=ep)
        reactor = object()

        with mock_tor(tor):
            p = tor_provider.create(reactor,
                                    FakeConfig(**{"socks.port": "unix:path"}))
        with mock.patch("allmydata.util.tor_provider.clientFromString", cfs):
            h = p.get_tor_handler()
        cfs.assert_called_with(reactor, "unix:path")
        tor.socks_endpoint.assert_called_with(ep)
        self.assertIs(h, handler)

    def test_handler_socks_tcp_endpoint(self):
        """
        ``socks.port`` can be configured as a TCP client endpoint.
        """
        tor = mock.Mock()
        handler = object()
        tor.socks_endpoint = mock.Mock(return_value=handler)
        ep = object()
        cfs = mock.Mock(return_value=ep)
        reactor = object()

        with mock_tor(tor):
            p = tor_provider.create(reactor,
                                    FakeConfig(**{"socks.port": "tcp:127.0.0.1:1234"}))
        with mock.patch("allmydata.util.tor_provider.clientFromString", cfs):
            h = p.get_tor_handler()
        cfs.assert_called_with(reactor, "tcp:127.0.0.1:1234")
        tor.socks_endpoint.assert_called_with(ep)
        self.assertIs(h, handler)

    def test_handler_control_endpoint(self):
        tor = mock.Mock()
        handler = object()

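(Aside: the call sequence these SOCKS tests assert reduces to two steps. The sketch below restates it for orientation; it is not the provider's real code.)

    from twisted.internet.endpoints import clientFromString
    from foolscap.connections import tor  # the module mocked in these tests

    def sketch_get_tor_handler(reactor, socks_port_desc):
        # Parse the configured "socks.port" endpoint string, then wrap the
        # resulting client endpoint in foolscap's SOCKS connection handler.
        ep = clientFromString(reactor, socks_port_desc)
        return tor.socks_endpoint(ep)
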
@ -1,6 +1,17 @@
"""
Ported to Python 3.
"""
from __future__ import print_function
from __future__ import absolute_import
from __future__ import division
from __future__ import unicode_literals

import os.path, re, urllib
from future.utils import PY2
if PY2:
    from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min  # noqa: F401

import os.path, re
from urllib.parse import quote as url_quote
import json
from six.moves import StringIO

@ -37,7 +48,7 @@ DIR_HTML_TAG = '<html lang="en">'
class CompletelyUnhandledError(Exception):
    pass

class ErrorBoom(object, resource.Resource):
class ErrorBoom(resource.Resource, object):
    @render_exception
    def render(self, req):
        raise CompletelyUnhandledError("whoops")
@ -47,32 +58,38 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
    def CHECK(self, ign, which, args, clientnum=0):
        fileurl = self.fileurls[which]
        url = fileurl + "?" + args
        return self.GET(url, method="POST", clientnum=clientnum)
        return self.GET_unicode(url, method="POST", clientnum=clientnum)

    def GET_unicode(self, *args, **kwargs):
        """Send an HTTP request, but convert result to Unicode string."""
        d = GridTestMixin.GET(self, *args, **kwargs)
        d.addCallback(str, "utf-8")
        return d

    def test_filecheck(self):
        self.basedir = "web/Grid/filecheck"
        self.set_up_grid()
        c0 = self.g.clients[0]
        self.uris = {}
        DATA = "data" * 100
        d = c0.upload(upload.Data(DATA, convergence=""))
        DATA = b"data" * 100
        d = c0.upload(upload.Data(DATA, convergence=b""))
        def _stash_uri(ur, which):
            self.uris[which] = ur.get_uri()
        d.addCallback(_stash_uri, "good")
        d.addCallback(lambda ign:
                      c0.upload(upload.Data(DATA+"1", convergence="")))
                      c0.upload(upload.Data(DATA+b"1", convergence=b"")))
        d.addCallback(_stash_uri, "sick")
        d.addCallback(lambda ign:
                      c0.upload(upload.Data(DATA+"2", convergence="")))
                      c0.upload(upload.Data(DATA+b"2", convergence=b"")))
        d.addCallback(_stash_uri, "dead")
        def _stash_mutable_uri(n, which):
            self.uris[which] = n.get_uri()
            assert isinstance(self.uris[which], str)
            assert isinstance(self.uris[which], bytes)
        d.addCallback(lambda ign:
                      c0.create_mutable_file(publish.MutableData(DATA+"3")))
                      c0.create_mutable_file(publish.MutableData(DATA+b"3")))
        d.addCallback(_stash_mutable_uri, "corrupt")
        d.addCallback(lambda ign:
                      c0.upload(upload.Data("literal", convergence="")))
                      c0.upload(upload.Data(b"literal", convergence=b"")))
        d.addCallback(_stash_uri, "small")
        d.addCallback(lambda ign: c0.create_immutable_dirnode({}))
        d.addCallback(_stash_mutable_uri, "smalldir")
@ -80,7 +97,7 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
        def _compute_fileurls(ignored):
            self.fileurls = {}
            for which in self.uris:
                self.fileurls[which] = "uri/" + urllib.quote(self.uris[which])
                self.fileurls[which] = "uri/" + url_quote(self.uris[which])
        d.addCallback(_compute_fileurls)

        def _clobber_shares(ignored):
@ -203,28 +220,28 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
        self.set_up_grid()
        c0 = self.g.clients[0]
        self.uris = {}
        DATA = "data" * 100
        d = c0.upload(upload.Data(DATA, convergence=""))
        DATA = b"data" * 100
        d = c0.upload(upload.Data(DATA, convergence=b""))
        def _stash_uri(ur, which):
            self.uris[which] = ur.get_uri()
        d.addCallback(_stash_uri, "good")
        d.addCallback(lambda ign:
                      c0.upload(upload.Data(DATA+"1", convergence="")))
                      c0.upload(upload.Data(DATA+b"1", convergence=b"")))
        d.addCallback(_stash_uri, "sick")
        d.addCallback(lambda ign:
                      c0.upload(upload.Data(DATA+"2", convergence="")))
                      c0.upload(upload.Data(DATA+b"2", convergence=b"")))
        d.addCallback(_stash_uri, "dead")
        def _stash_mutable_uri(n, which):
            self.uris[which] = n.get_uri()
            assert isinstance(self.uris[which], str)
            assert isinstance(self.uris[which], bytes)
        d.addCallback(lambda ign:
                      c0.create_mutable_file(publish.MutableData(DATA+"3")))
                      c0.create_mutable_file(publish.MutableData(DATA+b"3")))
        d.addCallback(_stash_mutable_uri, "corrupt")

        def _compute_fileurls(ignored):
            self.fileurls = {}
            for which in self.uris:
                self.fileurls[which] = "uri/" + urllib.quote(self.uris[which])
                self.fileurls[which] = "uri/" + url_quote(self.uris[which])
        d.addCallback(_compute_fileurls)

        def _clobber_shares(ignored):
@ -286,8 +303,8 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
        self.set_up_grid()
        c0 = self.g.clients[0]
        self.uris = {}
        DATA = "data" * 100
        d = c0.upload(upload.Data(DATA+"1", convergence=""))
        DATA = b"data" * 100
        d = c0.upload(upload.Data(DATA+b"1", convergence=b""))
        def _stash_uri(ur, which):
            self.uris[which] = ur.get_uri()
        d.addCallback(_stash_uri, "sick")
@ -295,7 +312,7 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
        def _compute_fileurls(ignored):
            self.fileurls = {}
            for which in self.uris:
                self.fileurls[which] = "uri/" + urllib.quote(self.uris[which])
                self.fileurls[which] = "uri/" + url_quote(self.uris[which])
        d.addCallback(_compute_fileurls)

        def _clobber_shares(ignored):
@ -329,7 +346,7 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
        self.fileurls = {}

        # the future cap format may contain slashes, which must be tolerated
        expected_info_url = "uri/%s?t=info" % urllib.quote(unknown_rwcap,
        expected_info_url = "uri/%s?t=info" % url_quote(unknown_rwcap,
                                                           safe="")

        if immutable:
@ -343,8 +360,8 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi

        def _stash_root_and_create_file(n):
            self.rootnode = n
            self.rooturl = "uri/" + urllib.quote(n.get_uri())
            self.rourl = "uri/" + urllib.quote(n.get_readonly_uri())
            self.rooturl = "uri/" + url_quote(n.get_uri())
            self.rourl = "uri/" + url_quote(n.get_readonly_uri())
            if not immutable:
                return self.rootnode.set_node(name, future_node)
        d.addCallback(_stash_root_and_create_file)
@ -352,18 +369,19 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
        # make sure directory listing tolerates unknown nodes
        d.addCallback(lambda ign: self.GET(self.rooturl))
        def _check_directory_html(res, expected_type_suffix):
            pattern = re.compile(r'<td>\?%s</td>[ \t\n\r]*'
                                 '<td>%s</td>' % (expected_type_suffix, str(name)),
            pattern = re.compile(br'<td>\?%s</td>[ \t\n\r]*'
                                 b'<td>%s</td>' % (
                                     expected_type_suffix, name.encode("ascii")),
                                 re.DOTALL)
            self.failUnless(re.search(pattern, res), res)
            # find the More Info link for name, should be relative
            mo = re.search(r'<a href="([^"]+)">More Info</a>', res)
            mo = re.search(br'<a href="([^"]+)">More Info</a>', res)
            info_url = mo.group(1)
            self.failUnlessReallyEqual(info_url, "%s?t=info" % (str(name),))
            self.failUnlessReallyEqual(info_url, b"%s?t=info" % (name.encode("ascii"),))
        if immutable:
            d.addCallback(_check_directory_html, "-IMM")
            d.addCallback(_check_directory_html, b"-IMM")
        else:
            d.addCallback(_check_directory_html, "")
            d.addCallback(_check_directory_html, b"")

        d.addCallback(lambda ign: self.GET(self.rooturl+"?t=json"))
        def _check_directory_json(res, expect_rw_uri):
@ -383,7 +401,6 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
        d.addCallback(_check_directory_json, expect_rw_uri=not immutable)

        def _check_info(res, expect_rw_uri, expect_ro_uri):
            self.failUnlessIn("Object Type: <span>unknown</span>", res)
            if expect_rw_uri:
                self.failUnlessIn(unknown_rwcap, res)
            if expect_ro_uri:
@ -393,6 +410,8 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
                    self.failUnlessIn(unknown_rocap, res)
                else:
                    self.failIfIn(unknown_rocap, res)
            res = str(res, "utf-8")
            self.failUnlessIn("Object Type: <span>unknown</span>", res)
            self.failIfIn("Raw data as", res)
            self.failIfIn("Directory writecap", res)
            self.failIfIn("Checker Operations", res)
@ -404,7 +423,7 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi

        d.addCallback(lambda ign: self.GET(expected_info_url))
        d.addCallback(_check_info, expect_rw_uri=False, expect_ro_uri=False)
        d.addCallback(lambda ign: self.GET("%s/%s?t=info" % (self.rooturl, str(name))))
        d.addCallback(lambda ign: self.GET("%s/%s?t=info" % (self.rooturl, name)))
        d.addCallback(_check_info, expect_rw_uri=False, expect_ro_uri=True)

        def _check_json(res, expect_rw_uri):
@ -436,9 +455,9 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
        # or not future_node was immutable.
        d.addCallback(lambda ign: self.GET(self.rourl))
        if immutable:
            d.addCallback(_check_directory_html, "-IMM")
            d.addCallback(_check_directory_html, b"-IMM")
        else:
            d.addCallback(_check_directory_html, "-RO")
            d.addCallback(_check_directory_html, b"-RO")

        d.addCallback(lambda ign: self.GET(self.rourl+"?t=json"))
        d.addCallback(_check_directory_json, expect_rw_uri=False)
@ -462,9 +481,9 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
        self.uris = {}
        self.fileurls = {}

        lonely_uri = "URI:LIT:n5xgk" # LIT for "one"
        mut_write_uri = "URI:SSK:vfvcbdfbszyrsaxchgevhmmlii:euw4iw7bbnkrrwpzuburbhppuxhc3gwxv26f6imekhz7zyw2ojnq"
        mut_read_uri = "URI:SSK-RO:e3mdrzfwhoq42hy5ubcz6rp3o4:ybyibhnp3vvwuq2vaw2ckjmesgkklfs6ghxleztqidihjyofgw7q"
        lonely_uri = b"URI:LIT:n5xgk" # LIT for "one"
        mut_write_uri = b"URI:SSK:vfvcbdfbszyrsaxchgevhmmlii:euw4iw7bbnkrrwpzuburbhppuxhc3gwxv26f6imekhz7zyw2ojnq"
        mut_read_uri = b"URI:SSK-RO:e3mdrzfwhoq42hy5ubcz6rp3o4:ybyibhnp3vvwuq2vaw2ckjmesgkklfs6ghxleztqidihjyofgw7q"

        # This method tests mainly dirnode, but we'd have to duplicate code in order to
        # test the dirnode and web layers separately.
@ -507,10 +526,10 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
            rep = str(dn)
            self.failUnlessIn("RO-IMM", rep)
            cap = dn.get_cap()
            self.failUnlessIn("CHK", cap.to_string())
            self.failUnlessIn(b"CHK", cap.to_string())
            self.cap = cap
            self.rootnode = dn
            self.rooturl = "uri/" + urllib.quote(dn.get_uri())
            self.rooturl = "uri/" + url_quote(dn.get_uri())
            return download_to_data(dn._node)
        d.addCallback(_created)

@ -526,7 +545,7 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
            entry = entries[0]
            (name_utf8, ro_uri, rwcapdata, metadata_s), subpos = split_netstring(entry, 4)
            name = name_utf8.decode("utf-8")
            self.failUnlessEqual(rwcapdata, "")
            self.failUnlessEqual(rwcapdata, b"")
            self.failUnlessIn(name, kids)
            (expected_child, ign) = kids[name]
            self.failUnlessReallyEqual(ro_uri, expected_child.get_readonly_uri())
@ -553,13 +572,13 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
        d.addCallback(lambda ign: self.GET(self.rooturl))
        def _check_html(res):
            soup = BeautifulSoup(res, 'html5lib')
            self.failIfIn("URI:SSK", res)
            self.failIfIn(b"URI:SSK", res)
            found = False
            for td in soup.find_all(u"td"):
                if td.text != u"FILE":
                    continue
                a = td.findNextSibling()(u"a")[0]
                self.assertIn(urllib.quote(lonely_uri), a[u"href"])
                self.assertIn(url_quote(lonely_uri), a[u"href"])
                self.assertEqual(u"lonely", a.text)
                self.assertEqual(a[u"rel"], [u"noreferrer"])
                self.assertEqual(u"{}".format(len("one")), td.findNextSibling().findNextSibling().text)
@ -573,7 +592,7 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
                if a.text == u"More Info"
            )
            self.assertEqual(1, len(infos))
            self.assertTrue(infos[0].endswith(urllib.quote(lonely_uri) + "?t=info"))
            self.assertTrue(infos[0].endswith(url_quote(lonely_uri) + "?t=info"))
        d.addCallback(_check_html)

        # ... and in JSON.
@ -596,12 +615,12 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
        c0 = self.g.clients[0]
        self.uris = {}
        self.fileurls = {}
        DATA = "data" * 100
        DATA = b"data" * 100
        d = c0.create_dirnode()
        def _stash_root_and_create_file(n):
            self.rootnode = n
            self.fileurls["root"] = "uri/" + urllib.quote(n.get_uri())
            return n.add_file(u"good", upload.Data(DATA, convergence=""))
            self.fileurls["root"] = "uri/" + url_quote(n.get_uri())
            return n.add_file(u"good", upload.Data(DATA, convergence=b""))
        d.addCallback(_stash_root_and_create_file)
        def _stash_uri(fn, which):
            self.uris[which] = fn.get_uri()
@ -609,13 +628,13 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
        d.addCallback(_stash_uri, "good")
        d.addCallback(lambda ign:
                      self.rootnode.add_file(u"small",
                                             upload.Data("literal",
                                                         convergence="")))
                                             upload.Data(b"literal",
                                                         convergence=b"")))
        d.addCallback(_stash_uri, "small")
        d.addCallback(lambda ign:
                      self.rootnode.add_file(u"sick",
                                             upload.Data(DATA+"1",
                                                         convergence="")))
                                             upload.Data(DATA+b"1",
                                                         convergence=b"")))
        d.addCallback(_stash_uri, "sick")

        # this tests that deep-check and stream-manifest will ignore
@ -695,13 +714,13 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
        d.addCallback(_stash_uri, "subdir")
        d.addCallback(lambda subdir_node:
                      subdir_node.add_file(u"grandchild",
                                           upload.Data(DATA+"2",
                                                       convergence="")))
                                           upload.Data(DATA+b"2",
                                                       convergence=b"")))
        d.addCallback(_stash_uri, "grandchild")

        d.addCallback(lambda ign:
                      self.delete_shares_numbered(self.uris["subdir"],
                                                  range(1, 10)))
                                                  list(range(1, 10))))

        # root
        # root/good
@ -770,30 +789,30 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
        c0 = self.g.clients[0]
        self.uris = {}
        self.fileurls = {}
        DATA = "data" * 100
        DATA = b"data" * 100
        d = c0.create_dirnode()
        def _stash_root_and_create_file(n):
            self.rootnode = n
            self.fileurls["root"] = "uri/" + urllib.quote(n.get_uri())
            return n.add_file(u"good", upload.Data(DATA, convergence=""))
            self.fileurls["root"] = "uri/" + url_quote(n.get_uri())
            return n.add_file(u"good", upload.Data(DATA, convergence=b""))
        d.addCallback(_stash_root_and_create_file)
        def _stash_uri(fn, which):
            self.uris[which] = fn.get_uri()
        d.addCallback(_stash_uri, "good")
        d.addCallback(lambda ign:
                      self.rootnode.add_file(u"small",
                                             upload.Data("literal",
                                                         convergence="")))
                                             upload.Data(b"literal",
                                                         convergence=b"")))
        d.addCallback(_stash_uri, "small")
        d.addCallback(lambda ign:
                      self.rootnode.add_file(u"sick",
                                             upload.Data(DATA+"1",
                                                         convergence="")))
                                             upload.Data(DATA+b"1",
                                                         convergence=b"")))
        d.addCallback(_stash_uri, "sick")
        #d.addCallback(lambda ign:
        #              self.rootnode.add_file(u"dead",
        #                                     upload.Data(DATA+"2",
        #                                                 convergence="")))
        #                                     upload.Data(DATA+b"2",
        #                                                 convergence=b"")))
        #d.addCallback(_stash_uri, "dead")

        #d.addCallback(lambda ign: c0.create_mutable_file("mutable"))
@ -888,25 +907,25 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
        self.set_up_grid(num_clients=2, oneshare=True)
        c0 = self.g.clients[0]
        self.uris = {}
        DATA = "data" * 100
        d = c0.upload(upload.Data(DATA, convergence=""))
        DATA = b"data" * 100
        d = c0.upload(upload.Data(DATA, convergence=b""))
        def _stash_uri(ur, which):
            self.uris[which] = ur.get_uri()
        d.addCallback(_stash_uri, "one")
        d.addCallback(lambda ign:
                      c0.upload(upload.Data(DATA+"1", convergence="")))
                      c0.upload(upload.Data(DATA+b"1", convergence=b"")))
        d.addCallback(_stash_uri, "two")
        def _stash_mutable_uri(n, which):
            self.uris[which] = n.get_uri()
            assert isinstance(self.uris[which], str)
            assert isinstance(self.uris[which], bytes)
        d.addCallback(lambda ign:
                      c0.create_mutable_file(publish.MutableData(DATA+"2")))
                      c0.create_mutable_file(publish.MutableData(DATA+b"2")))
        d.addCallback(_stash_mutable_uri, "mutable")

        def _compute_fileurls(ignored):
            self.fileurls = {}
            for which in self.uris:
                self.fileurls[which] = "uri/" + urllib.quote(self.uris[which])
                self.fileurls[which] = "uri/" + url_quote(self.uris[which])
        d.addCallback(_compute_fileurls)

        d.addCallback(self._count_leases, "one")
@ -982,25 +1001,25 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
        c0 = self.g.clients[0]
        self.uris = {}
        self.fileurls = {}
        DATA = "data" * 100
        DATA = b"data" * 100
        d = c0.create_dirnode()
        def _stash_root_and_create_file(n):
            self.rootnode = n
            self.uris["root"] = n.get_uri()
            self.fileurls["root"] = "uri/" + urllib.quote(n.get_uri())
            return n.add_file(u"one", upload.Data(DATA, convergence=""))
            self.fileurls["root"] = "uri/" + url_quote(n.get_uri())
            return n.add_file(u"one", upload.Data(DATA, convergence=b""))
        d.addCallback(_stash_root_and_create_file)
        def _stash_uri(fn, which):
            self.uris[which] = fn.get_uri()
        d.addCallback(_stash_uri, "one")
        d.addCallback(lambda ign:
                      self.rootnode.add_file(u"small",
                                             upload.Data("literal",
                                                         convergence="")))
                                             upload.Data(b"literal",
                                                         convergence=b"")))
        d.addCallback(_stash_uri, "small")

        d.addCallback(lambda ign:
                      c0.create_mutable_file(publish.MutableData("mutable")))
                      c0.create_mutable_file(publish.MutableData(b"mutable")))
        d.addCallback(lambda fn: self.rootnode.set_node(u"mutable", fn))
        d.addCallback(_stash_uri, "mutable")

@ -1051,36 +1070,36 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
        c0 = self.g.clients[0]
        c0.encoding_params['happy'] = 2
        self.fileurls = {}
        DATA = "data" * 100
        DATA = b"data" * 100
        d = c0.create_dirnode()
        def _stash_root(n):
            self.fileurls["root"] = "uri/" + urllib.quote(n.get_uri())
            self.fileurls["root"] = "uri/" + url_quote(n.get_uri())
            self.fileurls["imaginary"] = self.fileurls["root"] + "/imaginary"
            return n
        d.addCallback(_stash_root)
        d.addCallback(lambda ign: c0.upload(upload.Data(DATA, convergence="")))
        d.addCallback(lambda ign: c0.upload(upload.Data(DATA, convergence=b"")))
        def _stash_bad(ur):
            self.fileurls["1share"] = "uri/" + urllib.quote(ur.get_uri())
            self.delete_shares_numbered(ur.get_uri(), range(1,10))
            self.fileurls["1share"] = "uri/" + url_quote(ur.get_uri())
            self.delete_shares_numbered(ur.get_uri(), list(range(1,10)))

            u = uri.from_string(ur.get_uri())
            u.key = testutil.flip_bit(u.key, 0)
            baduri = u.to_string()
            self.fileurls["0shares"] = "uri/" + urllib.quote(baduri)
            self.fileurls["0shares"] = "uri/" + url_quote(baduri)
        d.addCallback(_stash_bad)
        d.addCallback(lambda ign: c0.create_dirnode())
        def _mangle_dirnode_1share(n):
            u = n.get_uri()
            url = self.fileurls["dir-1share"] = "uri/" + urllib.quote(u)
            url = self.fileurls["dir-1share"] = "uri/" + url_quote(u)
            self.fileurls["dir-1share-json"] = url + "?t=json"
            self.delete_shares_numbered(u, range(1,10))
            self.delete_shares_numbered(u, list(range(1,10)))
        d.addCallback(_mangle_dirnode_1share)
        d.addCallback(lambda ign: c0.create_dirnode())
        def _mangle_dirnode_0share(n):
            u = n.get_uri()
            url = self.fileurls["dir-0share"] = "uri/" + urllib.quote(u)
            url = self.fileurls["dir-0share"] = "uri/" + url_quote(u)
            self.fileurls["dir-0share-json"] = url + "?t=json"
            self.delete_shares_numbered(u, range(0,10))
            self.delete_shares_numbered(u, list(range(0,10)))
        d.addCallback(_mangle_dirnode_0share)

        # NotEnoughSharesError should be reported sensibly, with a
@ -1092,6 +1111,7 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
                                   410, "Gone", "NoSharesError",
                                   self.GET, self.fileurls["0shares"]))
        def _check_zero_shares(body):
            body = str(body, "utf-8")
            self.failIfIn("<html>", body)
            body = " ".join(body.strip().split())
            exp = ("NoSharesError: no shares could be found. "
@ -1100,7 +1120,7 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
                   "severe corruption. You should perform a filecheck on "
                   "this object to learn more. The full error message is: "
                   "no shares (need 3). Last failure: None")
            self.failUnlessReallyEqual(exp, body)
            self.assertEqual(exp, body)
        d.addCallback(_check_zero_shares)


@ -1109,6 +1129,7 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
                                   410, "Gone", "NotEnoughSharesError",
                                   self.GET, self.fileurls["1share"]))
        def _check_one_share(body):
            body = str(body, "utf-8")
            self.failIfIn("<html>", body)
            body = " ".join(body.strip().split())
            msgbase = ("NotEnoughSharesError: This indicates that some "
@ -1133,10 +1154,11 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
                                   404, "Not Found", None,
                                   self.GET, self.fileurls["imaginary"]))
        def _missing_child(body):
            body = str(body, "utf-8")
            self.failUnlessIn("No such child: imaginary", body)
        d.addCallback(_missing_child)

        d.addCallback(lambda ignored: self.GET(self.fileurls["dir-0share"]))
        d.addCallback(lambda ignored: self.GET_unicode(self.fileurls["dir-0share"]))
        def _check_0shares_dir_html(body):
            self.failUnlessIn(DIR_HTML_TAG, body)
            # we should see the regular page, but without the child table or
@ -1155,7 +1177,7 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
            self.failUnlessIn("No upload forms: directory is unreadable", body)
        d.addCallback(_check_0shares_dir_html)

        d.addCallback(lambda ignored: self.GET(self.fileurls["dir-1share"]))
        d.addCallback(lambda ignored: self.GET_unicode(self.fileurls["dir-1share"]))
        def _check_1shares_dir_html(body):
            # at some point, we'll split UnrecoverableFileError into 0-shares
            # and some-shares like we did for immutable files (since there
@ -1182,6 +1204,7 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
                                   self.GET,
                                   self.fileurls["dir-0share-json"]))
        def _check_unrecoverable_file(body):
            body = str(body, "utf-8")
            self.failIfIn("<html>", body)
            body = " ".join(body.strip().split())
            exp = ("UnrecoverableFileError: the directory (or mutable file) "
@ -1209,7 +1232,7 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
        # attach a webapi child that throws a random error, to test how it
        # gets rendered.
        w = c0.getServiceNamed("webish")
        w.root.putChild("ERRORBOOM", ErrorBoom())
        w.root.putChild(b"ERRORBOOM", ErrorBoom())

        # "Accept: */*" : should get a text/html stack trace
        # "Accept: text/plain" : should get a text/plain stack trace
@ -1222,6 +1245,7 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
                                   self.GET, "ERRORBOOM",
                                   headers={"accept": "*/*"}))
        def _internal_error_html1(body):
            body = str(body, "utf-8")
            self.failUnlessIn("<html>", "expected HTML, not '%s'" % body)
        d.addCallback(_internal_error_html1)

@ -1231,6 +1255,7 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
                                   self.GET, "ERRORBOOM",
                                   headers={"accept": "text/plain"}))
        def _internal_error_text2(body):
            body = str(body, "utf-8")
            self.failIfIn("<html>", body)
            self.failUnless(body.startswith("Traceback "), body)
        d.addCallback(_internal_error_text2)
@ -1242,6 +1267,7 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
                                   self.GET, "ERRORBOOM",
                                   headers={"accept": CLI_accepts}))
        def _internal_error_text3(body):
            body = str(body, "utf-8")
            self.failIfIn("<html>", body)
            self.failUnless(body.startswith("Traceback "), body)
        d.addCallback(_internal_error_text3)
@ -1251,7 +1277,7 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
                                   500, "Internal Server Error", None,
                                   self.GET, "ERRORBOOM"))
        def _internal_error_html4(body):
            self.failUnlessIn("<html>", body)
            self.failUnlessIn(b"<html>", body)
        d.addCallback(_internal_error_html4)

        def _flush_errors(res):
@ -1269,12 +1295,12 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
        c0 = self.g.clients[0]
        fn = c0.config.get_config_path("access.blacklist")
        self.uris = {}
        DATA = "off-limits " * 50
        DATA = b"off-limits " * 50

        d = c0.upload(upload.Data(DATA, convergence=""))
        d = c0.upload(upload.Data(DATA, convergence=b""))
        def _stash_uri_and_create_dir(ur):
            self.uri = ur.get_uri()
            self.url = "uri/"+self.uri
            self.url = b"uri/"+self.uri
            u = uri.from_string_filenode(self.uri)
            self.si = u.get_storage_index()
            childnode = c0.create_node_from_uri(self.uri, None)
@ -1283,9 +1309,9 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
        def _stash_dir(node):
            self.dir_node = node
            self.dir_uri = node.get_uri()
            self.dir_url = "uri/"+self.dir_uri
            self.dir_url = b"uri/"+self.dir_uri
        d.addCallback(_stash_dir)
        d.addCallback(lambda ign: self.GET(self.dir_url, followRedirect=True))
        d.addCallback(lambda ign: self.GET_unicode(self.dir_url, followRedirect=True))
        def _check_dir_html(body):
            self.failUnlessIn(DIR_HTML_TAG, body)
            self.failUnlessIn("blacklisted.txt</a>", body)
@ -1298,7 +1324,7 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
            f.write(" # this is a comment\n")
            f.write(" \n")
            f.write("\n") # also exercise blank lines
            f.write("%s %s\n" % (base32.b2a(self.si), "off-limits to you"))
            f.write("%s off-limits to you\n" % (str(base32.b2a(self.si), "ascii"),))
            f.close()
        # clients should be checking the blacklist each time, so we don't
        # need to restart the client
@ -1309,14 +1335,14 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
                                   self.GET, self.url))

        # We should still be able to list the parent directory, in HTML...
        d.addCallback(lambda ign: self.GET(self.dir_url, followRedirect=True))
        d.addCallback(lambda ign: self.GET_unicode(self.dir_url, followRedirect=True))
        def _check_dir_html2(body):
            self.failUnlessIn(DIR_HTML_TAG, body)
            self.failUnlessIn("blacklisted.txt</strike>", body)
        d.addCallback(_check_dir_html2)

        # ... and in JSON (used by CLI).
        d.addCallback(lambda ign: self.GET(self.dir_url+"?t=json", followRedirect=True))
        d.addCallback(lambda ign: self.GET(self.dir_url+b"?t=json", followRedirect=True))
        def _check_dir_json(res):
            data = json.loads(res)
            self.failUnless(isinstance(data, list), data)
@ -1355,14 +1381,14 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
        d.addCallback(_add_dir)
        def _get_dircap(dn):
            self.dir_si_b32 = base32.b2a(dn.get_storage_index())
            self.dir_url_base = "uri/"+dn.get_write_uri()
            self.dir_url_json1 = "uri/"+dn.get_write_uri()+"?t=json"
            self.dir_url_json2 = "uri/"+dn.get_write_uri()+"?t=json"
            self.dir_url_json_ro = "uri/"+dn.get_readonly_uri()+"?t=json"
            self.child_url = "uri/"+dn.get_readonly_uri()+"/child"
            self.dir_url_base = b"uri/"+dn.get_write_uri()
            self.dir_url_json1 = b"uri/"+dn.get_write_uri()+b"?t=json"
            self.dir_url_json2 = b"uri/"+dn.get_write_uri()+b"?t=json"
            self.dir_url_json_ro = b"uri/"+dn.get_readonly_uri()+b"?t=json"
            self.child_url = b"uri/"+dn.get_readonly_uri()+b"/child"
        d.addCallback(_get_dircap)
        d.addCallback(lambda ign: self.GET(self.dir_url_base, followRedirect=True))
        d.addCallback(lambda body: self.failUnlessIn(DIR_HTML_TAG, body))
        d.addCallback(lambda body: self.failUnlessIn(DIR_HTML_TAG, str(body, "utf-8")))
        d.addCallback(lambda ign: self.GET(self.dir_url_json1))
        d.addCallback(lambda res: json.loads(res)) # just check it decodes
        d.addCallback(lambda ign: self.GET(self.dir_url_json2))
@ -1373,8 +1399,8 @@ class Grid(GridTestMixin, WebErrorMixin, ShouldFailMixin, testutil.ReallyEqualMi
        d.addCallback(lambda body: self.failUnlessEqual(DATA, body))

        def _block_dir(ign):
            f = open(fn, "w")
            f.write("%s %s\n" % (self.dir_si_b32, "dir-off-limits to you"))
            f = open(fn, "wb")
            f.write(b"%s %s\n" % (self.dir_si_b32, b"dir-off-limits to you"))
            f.close()
            self.g.clients[0].blacklist.last_mtime -= 2.0
        d.addCallback(_block_dir)
@ -746,7 +746,10 @@ class MultiFormatResourceTests(TrialTestCase):
"<title>400 - Bad Format</title>", response_body,
)
self.assertIn(
"Unknown t value: 'foo'", response_body,
"Unknown t value:", response_body,
)
self.assertIn(
"'foo'", response_body,
)

@ -1,6 +1,16 @@
"""
Tests for ``allmydata.webish``.

Ported to Python 3.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

from future.utils import PY2
if PY2:
from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min # noqa: F401

from uuid import (
uuid4,
@ -96,7 +106,7 @@ class TahoeLAFSRequestTests(SyncTestCase):
])
self._fields_test(
b"POST",
{b"content-type": b"multipart/form-data; boundary={}".format(boundary)},
{b"content-type": b"multipart/form-data; boundary=" + bytes(boundary, 'ascii')},
form_data.encode("ascii"),
AfterPreprocessing(
lambda fs: {
@ -105,8 +115,8 @@ class TahoeLAFSRequestTests(SyncTestCase):
in fs.keys()
},
Equals({
b"foo": b"bar",
b"baz": b"some file contents",
"foo": "bar",
"baz": b"some file contents",
}),
),
)

@ -1,3 +1,13 @@
"""Ported to Python 3.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

from future.utils import PY2
if PY2:
from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min # noqa: F401

from zope.interface import implementer
from twisted.internet import defer

@ -27,6 +27,7 @@ PORTED_MODULES = [
"allmydata.__main__",
"allmydata._auto_deps",
"allmydata._monkeypatch",
"allmydata.blacklist",
"allmydata.codec",
"allmydata.crypto",
"allmydata.crypto.aes",
@ -34,6 +35,7 @@ PORTED_MODULES = [
"allmydata.crypto.error",
"allmydata.crypto.rsa",
"allmydata.crypto.util",
"allmydata.deep_stats",
"allmydata.dirnode",
"allmydata.hashtree",
"allmydata.immutable.checker",
@ -69,6 +71,7 @@ PORTED_MODULES = [
"allmydata.mutable.servermap",
"allmydata.node",
"allmydata.nodemaker",
"allmydata.stats",
"allmydata.storage_client",
"allmydata.storage.common",
"allmydata.storage.crawler",
@ -80,6 +83,7 @@ PORTED_MODULES = [
"allmydata.storage.shares",
"allmydata.test.no_network",
"allmydata.test.mutable.util",
"allmydata.unknown",
"allmydata.uri",
"allmydata.util._python3",
"allmydata.util.abbreviate",
@ -110,6 +114,7 @@ PORTED_MODULES = [
"allmydata.util.spans",
"allmydata.util.statistics",
"allmydata.util.time_format",
"allmydata.webish",
]

PORTED_TEST_MODULES = [
@ -166,6 +171,7 @@ PORTED_TEST_MODULES = [
"allmydata.test.test_repairer",
"allmydata.test.test_spans",
"allmydata.test.test_statistics",
"allmydata.test.test_stats",
"allmydata.test.test_storage",
"allmydata.test.test_storage_client",
"allmydata.test.test_storage_web",
@ -180,6 +186,8 @@ PORTED_TEST_MODULES = [
"allmydata.test.test_uri",
"allmydata.test.test_util",
"allmydata.test.web.test_common",
"allmydata.test.web.test_util",
"allmydata.test.web.test_grid",
"allmydata.test.web.test_status",
"allmydata.test.web.test_util",
"allmydata.test.web.test_webish",
]

@ -142,7 +142,9 @@ def a2b(cs):
# Add padding back, to make Python's base64 module happy:
while (len(cs) * 5) % 8 != 0:
cs += b"="
return base64.b32decode(cs)
# Let newbytes come through and still work on Python 2, where the base64
# module gets confused by them.
return base64.b32decode(backwardscompat_bytes(cs))

__all__ = ["b2a", "a2b", "b2a_or_none", "BASE32CHAR_3bits", "BASE32CHAR_1bits", "BASE32CHAR", "BASE32STR_anybytes", "could_be_base32_encoded"]
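
The a2b() change above keeps the padding trick intact: Tahoe's base32 strings omit trailing "=", which the stdlib decoder requires. The same idea as a runnable sketch (simplified; the real a2b also handles Tahoe's own alphabet and, per the new comment, coerces newbytes on Python 2):

import base64

def b32decode_unpadded(cs):
    cs = cs.upper()  # stdlib b32decode expects the RFC 4648 uppercase alphabet
    while (len(cs) * 5) % 8 != 0:  # each base32 character carries 5 bits
        cs += b"="
    return base64.b32decode(cs)

assert b32decode_unpadded(b"mfrgg") == b"abc"
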
@ -16,6 +16,9 @@ if PY2:
import weakref
from twisted.internet import defer
from foolscap.api import eventually
from twisted.logger import (
Logger,
)

"""The idiom we use is for the observed object to offer a method named
'when_something', which returns a deferred. That deferred will be fired when
@ -97,7 +100,10 @@ class LazyOneShotObserverList(OneShotObserverList):
self._fire(self._get_result())

class ObserverList(object):
"""A simple class to distribute events to a number of subscribers."""
"""
Immediately distribute events to a number of subscribers.
"""
_logger = Logger()

def __init__(self):
self._watchers = []
@ -109,8 +115,11 @@ class ObserverList(object):
self._watchers.remove(observer)

def notify(self, *args, **kwargs):
for o in self._watchers:
eventually(o, *args, **kwargs)
for o in self._watchers[:]:
try:
o(*args, **kwargs)
except Exception:
self._logger.failure("While notifying {o!r}", o=o)

class EventStreamObserver(object):
"""A simple class to distribute multiple events to a single subscriber.
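
The notify() rewrite above trades foolscap's eventually() (delivery queued to a later reactor turn) for immediate synchronous calls, iterating over a copy of the watcher list and logging any observer failure instead of letting it propagate. A simplified, self-contained model of the new behavior (illustrative names, not the real class):

class SyncObserverList(object):
    def __init__(self):
        self._watchers = []

    def subscribe(self, observer):
        self._watchers.append(observer)

    def unsubscribe(self, observer):
        self._watchers.remove(observer)

    def notify(self, *args, **kwargs):
        # Copy the list so an observer may unsubscribe itself mid-delivery,
        # and keep notifying the rest even if one of them raises.
        for o in self._watchers[:]:
            try:
                o(*args, **kwargs)
            except Exception:
                pass  # the real code logs this via twisted.logger's Logger.failure
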
@ -17,23 +17,7 @@ from ..interfaces import (
IAddressFamily,
)

def create(reactor, config):
"""
Create a new _Provider service (this is an IService so must be
hooked up to a parent or otherwise started).

If foolscap.connections.tor or txtorcon are not installed, then
Provider.get_tor_handler() will return None. If tahoe.cfg wants
to start an onion service too, then this `create()` method will
throw a nice error (and startService will throw an ugly error).
"""
provider = _Provider(config, reactor)
provider.check_onion_config()
return provider


def _import_tor():
# this exists to be overridden by unit tests
try:
from foolscap.connections import tor
return tor
@ -47,6 +31,25 @@ def _import_txtorcon():
except ImportError: # pragma: no cover
return None

def create(reactor, config, import_tor=None, import_txtorcon=None):
"""
Create a new _Provider service (this is an IService so must be
hooked up to a parent or otherwise started).

If foolscap.connections.tor or txtorcon are not installed, then
Provider.get_tor_handler() will return None. If tahoe.cfg wants
to start an onion service too, then this `create()` method will
throw a nice error (and startService will throw an ugly error).
"""
if import_tor is None:
import_tor = _import_tor
if import_txtorcon is None:
import_txtorcon = _import_txtorcon
provider = _Provider(config, reactor, import_tor(), import_txtorcon())
provider.check_onion_config()
return provider


def data_directory(private_dir):
return os.path.join(private_dir, "tor-statedir")

@ -217,14 +220,14 @@ def create_config(reactor, cli_config):

@implementer(IAddressFamily)
class _Provider(service.MultiService):
def __init__(self, config, reactor):
def __init__(self, config, reactor, tor, txtorcon):
service.MultiService.__init__(self)
self._config = config
self._tor_launched = None
self._onion_ehs = None
self._onion_tor_control_proto = None
self._tor = _import_tor()
self._txtorcon = _import_txtorcon()
self._tor = tor
self._txtorcon = txtorcon
self._reactor = reactor

def _get_tor_config(self, *args, **kwargs):
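
The create() refactor above is a dependency-injection idiom: the module importers become keyword arguments defaulting to the real _import_tor/_import_txtorcon, so unit tests can substitute stubs without monkey-patching. The same pattern in isolation (a self-contained sketch, not Tahoe code):

def _import_json():
    import json
    return json

def create_codec(import_json=None):
    if import_json is None:
        import_json = _import_json  # production default
    return import_json()

codec = create_codec()                         # real module
stub = create_codec(import_json=lambda: None)  # test path: dependency "missing"
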
@ -1,4 +1,5 @@
from past.builtins import unicode
from six import ensure_text, ensure_str

import time
import json
@ -99,17 +100,19 @@ def get_filenode_metadata(filenode):

def boolean_of_arg(arg):
# TODO: ""
arg = ensure_text(arg)
if arg.lower() not in ("true", "t", "1", "false", "f", "0", "on", "off"):
raise WebError("invalid boolean argument: %r" % (arg,), http.BAD_REQUEST)
return arg.lower() in ("true", "t", "1", "on")

def parse_replace_arg(replace):
replace = ensure_text(replace)
if replace.lower() == "only-files":
return replace
try:
return boolean_of_arg(replace)
except WebError:
raise WebError("invalid replace= argument: %r" % (replace,), http.BAD_REQUEST)
raise WebError("invalid replace= argument: %r" % (ensure_str(replace),), http.BAD_REQUEST)


def get_format(req, default="CHK"):
@ -118,11 +121,11 @@ def get_format(req, default="CHK"):
if boolean_of_arg(get_arg(req, "mutable", "false")):
return "SDMF"
return default
if arg.upper() == "CHK":
if arg.upper() == b"CHK":
return "CHK"
elif arg.upper() == "SDMF":
elif arg.upper() == b"SDMF":
return "SDMF"
elif arg.upper() == "MDMF":
elif arg.upper() == b"MDMF":
return "MDMF"
else:
raise WebError("Unknown format: %s, I know CHK, SDMF, MDMF" % arg,
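
Because ensure_text() now normalizes the argument first, these parsers accept bytes or text equivalently. Expected behavior, sketched (the calls assume this module's boolean_of_arg/parse_replace_arg and its WebError, which maps to HTTP 400):

boolean_of_arg(b"TRUE")          # True (bytes accepted, case-insensitive)
boolean_of_arg("off")            # False
parse_replace_arg("only-files")  # "only-files" passes through unchanged
parse_replace_arg(b"0")          # False
boolean_of_arg("maybe")          # raises WebError(http.BAD_REQUEST)
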
@ -4,6 +4,8 @@ Common utilities that are available from Python 3.
Can eventually be merged back into allmydata.web.common.
"""

from past.builtins import unicode

try:
from typing import Optional
except ImportError:
@ -28,7 +30,13 @@ def get_arg(req, argname, default=None, multiple=False):
empty), starting with all those in the query args.

:param TahoeLAFSRequest req: The request to consider.

:return: Either bytes or tuple of bytes.
"""
if isinstance(argname, unicode):
argname = argname.encode("utf-8")
if isinstance(default, unicode):
default = default.encode("utf-8")
results = []
if argname in req.args:
results.extend(req.args[argname])
@ -67,6 +75,9 @@ class MultiFormatResource(resource.Resource, object):
:return: The result of the selected renderer.
"""
t = get_arg(req, self.formatArgument, self.formatDefault)
# It's either bytes or None.
if isinstance(t, bytes):
t = unicode(t, "ascii")
renderer = self._get_renderer(t)
return renderer(req)
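
get_arg() now canonicalizes text keys and defaults to bytes, so lookups against Twisted's bytes-keyed req.args behave the same on both Pythons. The normalization step in isolation (a runnable Python 3 sketch; on Python 3, str plays the role of unicode):

def normalize(argname, default):
    # mirrors the isinstance checks in get_arg() above
    if isinstance(argname, str):
        argname = argname.encode("utf-8")
    if isinstance(default, str):
        default = default.encode("utf-8")
    return argname, default

assert normalize("t", "json") == (b"t", b"json")
assert normalize(b"t", None) == (b"t", None)
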
@ -1,6 +1,6 @@
from past.builtins import unicode

import json
import urllib
from urllib.parse import quote as url_quote
from datetime import timedelta

from zope.interface import implementer
@ -20,7 +20,7 @@ from twisted.web.template import (
from hyperlink import URL
from twisted.python.filepath import FilePath

from allmydata.util import base32
from allmydata.util import base32, jsonbytes as json
from allmydata.util.encodingutil import (
to_bytes,
quote_output,
@ -109,7 +109,7 @@ class DirectoryNodeHandler(ReplaceMeMixin, Resource, object):
# or no further children) renders "this" page. We also need
# to reject "/uri/URI:DIR2:..//", so we look at postpath.
name = name.decode('utf8')
if not name and req.postpath != ['']:
if not name and req.postpath != [b'']:
return self

# Rejecting URIs that contain empty path pieces (for example:
@ -135,7 +135,7 @@ class DirectoryNodeHandler(ReplaceMeMixin, Resource, object):
terminal = (req.prepath + req.postpath)[-1].decode('utf8') == name
nonterminal = not terminal #len(req.postpath) > 0

t = get_arg(req, "t", "").strip()
t = get_arg(req, b"t", b"").strip()
if isinstance(node_or_failure, Failure):
f = node_or_failure
f.trap(NoSuchChildError)
@ -217,7 +217,7 @@ class DirectoryNodeHandler(ReplaceMeMixin, Resource, object):
@render_exception
def render_GET(self, req):
# This is where all of the directory-related ?t=* code goes.
t = get_arg(req, "t", "").strip()
t = unicode(get_arg(req, b"t", b"").strip(), "ascii")

# t=info contains variable ophandles, t=rename-form contains the name
# of the child being renamed. Neither is allowed an ETag.
@ -225,7 +225,7 @@ class DirectoryNodeHandler(ReplaceMeMixin, Resource, object):
if not self.node.is_mutable() and t in FIXED_OUTPUT_TYPES:
si = self.node.get_storage_index()
if si and req.setETag('DIR:%s-%s' % (base32.b2a(si), t or "")):
return ""
return b""

if not t:
# render the directory as HTML
@ -255,7 +255,7 @@ class DirectoryNodeHandler(ReplaceMeMixin, Resource, object):

@render_exception
def render_PUT(self, req):
t = get_arg(req, "t", "").strip()
t = get_arg(req, b"t", b"").strip()
replace = parse_replace_arg(get_arg(req, "replace", "true"))

if t == "mkdir":
@ -275,7 +275,7 @@ class DirectoryNodeHandler(ReplaceMeMixin, Resource, object):

@render_exception
def render_POST(self, req):
t = get_arg(req, "t", "").strip()
t = unicode(get_arg(req, b"t", b"").strip(), "ascii")

if t == "mkdir":
d = self._POST_mkdir(req)
@ -732,7 +732,7 @@ class DirectoryAsHTML(Element):
return ""
rocap = self.node.get_readonly_uri()
root = get_root(req)
uri_link = "%s/uri/%s/" % (root, urllib.quote(rocap))
uri_link = "%s/uri/%s/" % (root, url_quote(rocap))
return tag(tags.a("Read-Only Version", href=uri_link))

@renderer
@ -754,10 +754,10 @@ class DirectoryAsHTML(Element):
called by the 'children' renderer)
"""
name = name.encode("utf-8")
nameurl = urllib.quote(name, safe="") # encode any slashes too
nameurl = url_quote(name, safe="") # encode any slashes too

root = get_root(req)
here = "{}/uri/{}/".format(root, urllib.quote(self.node.get_uri()))
here = "{}/uri/{}/".format(root, url_quote(self.node.get_uri()))
if self.node.is_unknown() or self.node.is_readonly():
unlink = "-"
rename = "-"
@ -814,7 +814,7 @@ class DirectoryAsHTML(Element):

assert IFilesystemNode.providedBy(target), target
target_uri = target.get_uri() or ""
quoted_uri = urllib.quote(target_uri, safe="") # escape slashes too
quoted_uri = url_quote(target_uri, safe="") # escape slashes too

if IMutableFileNode.providedBy(target):
# to prevent javascript in displayed .html files from stealing a
@ -835,7 +835,7 @@ class DirectoryAsHTML(Element):

elif IDirectoryNode.providedBy(target):
# directory
uri_link = "%s/uri/%s/" % (root, urllib.quote(target_uri))
uri_link = "%s/uri/%s/" % (root, url_quote(target_uri))
slots["filename"] = tags.a(name, href=uri_link)
if not target.is_mutable():
dirtype = "DIR-IMM"
@ -871,7 +871,7 @@ class DirectoryAsHTML(Element):
slots["size"] = "-"
# use a directory-relative info link, so we can extract both the
# writecap and the readcap
info_link = "%s?t=info" % urllib.quote(name)
info_link = "%s?t=info" % url_quote(name)

if info_link:
slots["info"] = tags.a("More Info", href=info_link)
@ -888,7 +888,7 @@ class DirectoryAsHTML(Element):
# because action="." doesn't get us back to the dir page (but
# instead /uri itself)
root = get_root(req)
here = "{}/uri/{}/".format(root, urllib.quote(self.node.get_uri()))
here = "{}/uri/{}/".format(root, url_quote(self.node.get_uri()))

if self.node.is_readonly():
return tags.div("No upload forms: directory is read-only")
@ -1005,7 +1005,7 @@ def _directory_json_metadata(req, dirnode):
d = dirnode.list()
def _got(children):
kids = {}
for name, (childnode, metadata) in children.iteritems():
for name, (childnode, metadata) in children.items():
assert IFilesystemNode.providedBy(childnode), childnode
rw_uri = childnode.get_write_uri()
ro_uri = childnode.get_readonly_uri()
@ -1166,13 +1166,13 @@ def _cap_to_link(root, path, cap):
if isinstance(cap_obj, (CHKFileURI, WriteableSSKFileURI, ReadonlySSKFileURI)):
uri_link = root_url.child(
u"file",
u"{}".format(urllib.quote(cap)),
u"{}".format(urllib.quote(path[-1])),
u"{}".format(url_quote(cap)),
u"{}".format(url_quote(path[-1])),
)
else:
uri_link = root_url.child(
u"uri",
u"{}".format(urllib.quote(cap, safe="")),
u"{}".format(url_quote(cap, safe="")),
)
return tags.a(cap, href=uri_link.to_text())
else:
@ -1363,7 +1363,7 @@ class ManifestStreamer(dirnode.DeepStats):

j = json.dumps(d, ensure_ascii=True)
assert "\n" not in j
self.req.write(j+"\n")
self.req.write(j.encode("utf-8")+b"\n")

def finish(self):
stats = dirnode.DeepStats.get_results(self)
@ -1372,8 +1372,8 @@ class ManifestStreamer(dirnode.DeepStats):
}
j = json.dumps(d, ensure_ascii=True)
assert "\n" not in j
self.req.write(j+"\n")
return ""
self.req.write(j.encode("utf-8")+b"\n")
return b""

@implementer(IPushProducer)
class DeepCheckStreamer(dirnode.DeepStats):
@ -1441,7 +1441,7 @@ class DeepCheckStreamer(dirnode.DeepStats):
def write_line(self, data):
j = json.dumps(data, ensure_ascii=True)
assert "\n" not in j
self.req.write(j+"\n")
self.req.write(j.encode("utf-8")+b"\n")

def finish(self):
stats = dirnode.DeepStats.get_results(self)
@ -1450,8 +1450,8 @@ class DeepCheckStreamer(dirnode.DeepStats):
}
j = json.dumps(d, ensure_ascii=True)
assert "\n" not in j
self.req.write(j+"\n")
return ""
self.req.write(j.encode("utf-8")+b"\n")
return b""


class UnknownNodeHandler(Resource, object):
@ -1464,7 +1464,7 @@ class UnknownNodeHandler(Resource, object):

@render_exception
def render_GET(self, req):
t = get_arg(req, "t", "").strip()
t = unicode(get_arg(req, "t", "").strip(), "ascii")
if t == "info":
return MoreInfo(self.node)
if t == "json":
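
The streamer changes above keep the wire format (one ASCII-safe JSON record per line) and only change the type handed to Twisted: bytes instead of text, as Python 3 requires. The framing in isolation (a runnable sketch using the stdlib json; the real code now routes through allmydata.util.jsonbytes):

import json

record = {"path": ["subdir", "file.txt"], "type": "file"}
line = json.dumps(record, ensure_ascii=True)
assert "\n" not in line                  # one record per line
payload = line.encode("utf-8") + b"\n"   # bytes, ready for req.write()
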
@ -1,5 +1,4 @@

import json
from past.builtins import unicode, long

from twisted.web import http, static
from twisted.internet import defer
@ -41,6 +40,8 @@ from allmydata.web.check_results import (
LiteralCheckResultsRenderer,
)
from allmydata.web.info import MoreInfo
from allmydata.util import jsonbytes as json


class ReplaceMeMixin(object):
def replace_me_with_a_child(self, req, client, replace):
@ -117,7 +118,7 @@ class PlaceHolderNodeHandler(Resource, ReplaceMeMixin):

@render_exception
def render_PUT(self, req):
t = get_arg(req, "t", "").strip()
t = get_arg(req, b"t", b"").strip()
replace = parse_replace_arg(get_arg(req, "replace", "true"))

assert self.parentnode and self.name
@ -133,9 +134,9 @@ class PlaceHolderNodeHandler(Resource, ReplaceMeMixin):

@render_exception
def render_POST(self, req):
t = get_arg(req, "t", "").strip()
replace = boolean_of_arg(get_arg(req, "replace", "true"))
if t == "upload":
t = get_arg(req, b"t", b"").strip()
replace = boolean_of_arg(get_arg(req, b"replace", b"true"))
if t == b"upload":
# like PUT, but get the file data from an HTML form's input field.
# We could get here from POST /uri/mutablefilecap?t=upload,
# or POST /uri/path/file?t=upload, or
@ -179,7 +180,7 @@ class FileNodeHandler(Resource, ReplaceMeMixin, object):

@render_exception
def render_GET(self, req):
t = get_arg(req, "t", "").strip()
t = unicode(get_arg(req, b"t", b"").strip(), "ascii")

# t=info contains variable ophandles, so is not allowed an ETag.
FIXED_OUTPUT_TYPES = ["", "json", "uri", "readonly-uri"]
@ -237,19 +238,19 @@ class FileNodeHandler(Resource, ReplaceMeMixin, object):

@render_exception
def render_HEAD(self, req):
t = get_arg(req, "t", "").strip()
t = get_arg(req, b"t", b"").strip()
if t:
raise WebError("HEAD file: bad t=%s" % t)
filename = get_arg(req, "filename", self.name) or "unknown"
filename = get_arg(req, b"filename", self.name) or "unknown"
d = self.node.get_best_readable_version()
d.addCallback(lambda dn: FileDownloader(dn, filename))
return d

@render_exception
def render_PUT(self, req):
t = get_arg(req, "t", "").strip()
replace = parse_replace_arg(get_arg(req, "replace", "true"))
offset = parse_offset_arg(get_arg(req, "offset", None))
t = get_arg(req, b"t", b"").strip()
replace = parse_replace_arg(get_arg(req, b"replace", b"true"))
offset = parse_offset_arg(get_arg(req, b"offset", None))

if not t:
if not replace:
@ -290,11 +291,11 @@ class FileNodeHandler(Resource, ReplaceMeMixin, object):

@render_exception
def render_POST(self, req):
t = get_arg(req, "t", "").strip()
replace = boolean_of_arg(get_arg(req, "replace", "true"))
if t == "check":
t = get_arg(req, b"t", b"").strip()
replace = boolean_of_arg(get_arg(req, b"replace", b"true"))
if t == b"check":
d = self._POST_check(req)
elif t == "upload":
elif t == b"upload":
# like PUT, but get the file data from an HTML form's input field
# We could get here from POST /uri/mutablefilecap?t=upload,
# or POST /uri/path/file?t=upload, or
@ -5,8 +5,7 @@ from twisted.web.template import Element, XMLFile, renderElement, renderer
from twisted.python.filepath import FilePath
from twisted.web import static
import allmydata
import json
from allmydata.util import idlib
from allmydata.util import idlib, jsonbytes as json
from allmydata.web.common import (
render_time,
MultiFormatResource,

@ -1,6 +1,5 @@
import os
import time
import json
import urllib

from hyperlink import DecodedURL, URL
@ -21,7 +20,7 @@ from twisted.web.template import (
)

import allmydata # to display import path
from allmydata.util import log
from allmydata.util import log, jsonbytes as json
from allmydata.interfaces import IFileNode
from allmydata.web import (
filenode,
@ -158,7 +157,9 @@ class URIHandler(resource.Resource, object):
try:
node = self.client.create_node_from_uri(name)
return directory.make_handler_for(node, self.client)
except (TypeError, AssertionError):
except (TypeError, AssertionError) as e:
log.msg(format="Failed to parse cap, perhaps due to bug: %(e)s",
e=e, level=log.WEIRD)
raise WebError(
"'{}' is not a valid file- or directory- cap".format(name)
)
@ -226,7 +227,10 @@ class Root(MultiFormatResource):
self._client = client
self._now_fn = now_fn

self.putChild("uri", URIHandler(client))
# Children need to be bytes; for now just doing these to make specific
# tests pass on Python 3, but eventually will do all them when this
# module is ported to Python 3 (if not earlier).
self.putChild(b"uri", URIHandler(client))
self.putChild("cap", URIHandler(client))

# Handler for everything beneath "/private", an area of the resource
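
Twisted keys resource children by URL path segment, which is bytes on Python 3, hence putChild(b"uri", ...). The pattern in isolation (assumes Twisted is installed):

from twisted.web.resource import Resource

root = Resource()
root.putChild(b"uri", Resource())  # bytes segment, matched against the request path
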
@ -3,7 +3,6 @@ from past.builtins import long, unicode
import pprint
import itertools
import hashlib
import json
from twisted.internet import defer
from twisted.python.filepath import FilePath
from twisted.web.resource import Resource
@ -14,7 +13,7 @@ from twisted.web.template import (
renderElement,
tags,
)
from allmydata.util import base32, idlib
from allmydata.util import base32, idlib, jsonbytes as json
from allmydata.web.common import (
abbreviate_time,
abbreviate_rate,

@ -1,6 +1,6 @@
from future.utils import PY2

import time, json
import time
from twisted.python.filepath import FilePath
from twisted.web.template import (
Element,
@ -14,7 +14,7 @@ from allmydata.web.common_py3 import (
MultiFormatResource
)
from allmydata.util.abbreviate import abbreviate_space
from allmydata.util import time_format, idlib
from allmydata.util import time_format, idlib, jsonbytes as json


def remove_prefix(s, prefix):

@ -1,3 +1,15 @@
"""
Ported to Python 3.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

from future.utils import PY2
if PY2:
from future.builtins import filter, map, zip, ascii, chr, hex, input, next, oct, open, pow, round, super, bytes, dict, list, object, range, str, max, min # noqa: F401

from six import ensure_str

import re, time, tempfile
@ -65,18 +77,24 @@ class TahoeLAFSRequest(Request, object):
self.path, argstring = x
self.args = parse_qs(argstring, 1)

if self.method == 'POST':
if self.method == b'POST':
# We use FieldStorage here because it performs better than
# cgi.parse_multipart(self.content, pdict) which is what
# twisted.web.http.Request uses.
self.fields = FieldStorage(
self.content,
{
name.lower(): value[-1]
for (name, value)
in self.requestHeaders.getAllRawHeaders()
},
environ={'REQUEST_METHOD': 'POST'})

headers = {
ensure_str(name.lower()): ensure_str(value[-1])
for (name, value)
in self.requestHeaders.getAllRawHeaders()
}

if 'content-length' not in headers:
# Python 3's cgi module would really, really like us to set Content-Length.
self.content.seek(0, 2)
headers['content-length'] = str(self.content.tell())
self.content.seek(0)

self.fields = FieldStorage(self.content, headers, environ={'REQUEST_METHOD': 'POST'})
self.content.seek(0)

self._tahoeLAFSSecurityPolicy()
@ -128,7 +146,7 @@ def _logFormatter(logDateTime, request):
# sure we censor these too.
if queryargs.startswith(b"uri="):
queryargs = b"uri=[CENSORED]"
queryargs = "?" + queryargs
queryargs = b"?" + queryargs
if path.startswith(b"/uri/"):
path = b"/uri/[CENSORED]"
elif path.startswith(b"/file/"):
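
The TahoeLAFSRequest change above works around Python 3's legacy cgi module, which wants native-string headers and an explicit Content-Length before it will parse a POST body. The same workaround in isolation (a runnable sketch on Python 3 up to 3.12, where cgi still ships):

from io import BytesIO
from cgi import FieldStorage

body = BytesIO(b"t=json")
headers = {"content-type": "application/x-www-form-urlencoded"}

if "content-length" not in headers:
    body.seek(0, 2)                              # measure the body...
    headers["content-length"] = str(body.tell())
    body.seek(0)                                 # ...then rewind for parsing

fields = FieldStorage(body, headers, environ={"REQUEST_METHOD": "POST"})
assert fields["t"].value == "json"
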