mirror of https://github.com/tahoe-lafs/tahoe-lafs.git
synced 2025-01-18 10:46:24 +00:00

contrib: remove the contributed fuse modules and the entire contrib/ directory, which is now empty

Also remove a couple of vestigial references to figleaf, which is long gone. fixes #1409 (remove contrib/fuse)

parent 0f79973401
commit 4f8e3e5ae8

Makefile (3)
@@ -90,9 +90,6 @@ test:

check: test

fuse-test: .built
	$(RUNPP) -d contrib/fuse -p -c runtests.py

test-coverage: build src/allmydata/_version.py
	rm -f .coverage
	$(TAHOE) debug trial --reporter=bwverbose-coverage $(TEST)

NEWS.rst (4)

@@ -5,12 +5,14 @@ User-Visible Changes in Tahoe-LAFS

Release 1.9.0 (2011-??-??)
--------------------------

- The unmaintained FUSE plugins were removed from the source tree. See
  docs/frontends/FTP-and-SFTP.rst for how to use sshfs. (`#1409`_)
- Nodes now emit "None" for percentiles with higher implied precision
  than the number of observations can support. Older stats gatherers
  will throw an exception if they gather stats from a new storage
  server and it sends a "None" for a percentile. (`#1392`_)

.. _`#1409`: http://tahoe-lafs.org/trac/tahoe-lafs/ticket/1409

Release 1.8.2 (2011-01-30)
--------------------------
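The percentile change described in the NEWS entry above can be illustrated with a small sketch. The function name and the digits-based precision rule are illustrative assumptions, not Tahoe's actual stats code: the idea is that a two-digit percentile (e.g. the 99th) implies at least 100 observations, so with fewer samples the reporter emits None rather than a misleadingly precise value.

```python
def percentile(samples, p, digits):
    """Hypothetical sketch: return the p-quantile of `samples`, or None
    when the implied precision (10**digits observations) is unsupported."""
    if len(samples) < 10 ** digits:
        return None  # older gatherers must be prepared to see this
    ordered = sorted(samples)
    return ordered[int(p * len(ordered))]

# With only 5 observations, a 2-digit percentile is reported as None:
print(percentile([3, 1, 4, 1, 5], 0.99, 2))   # None
```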
@@ -1,3 +0,0 @@
This directory contains code and extensions which are not strictly a part
of Tahoe. They may or may not currently work.
@@ -1,90 +0,0 @@

Welcome to the tahoe fuse interface prototype!


Dependencies:

In addition to a working tahoe installation, this interface depends
on the python-fuse interface.  This package is available on Ubuntu
systems as "python-fuse".  It is only known to work with Ubuntu
package version "2.5-5build1".  The latest Ubuntu package (version
"1:0.2-pre3-3") appears not to work currently.

Unfortunately this package appears poorly maintained (notice the wildly
different version strings and changing API semantics), so if you know
of a good replacement pythonic fuse interface, please let tahoe-dev know
about it!


Configuration:

Currently tahoe-fuse.py uses the same ~/.tahoe/private/root_dir.cap
file (which is also the CLI default).  This is not configurable yet.
Place a directory cap in this file.  (Hint: if you can run "tahoe ls"
and see a directory listing, this file is properly configured.)


Commandline:

The usage is "tahoe-fuse.py <mountpoint>".  The mount point needs to
be an existing directory, which should be empty.  (If it's not empty,
the contents will be safe, but unavailable while the tahoe-fuse.py
process is mounted there.)


Usage:

To use the interface, use other programs to poke around the
mountpoint.  You should be able to see the same contents as you would
by using the CLI or WUI for the same directory cap.


Runtime Behavior Notes:

Read-only:
Only reading a tahoe grid is supported, which is reflected in
the permission modes.  With Tahoe 0.7.0, write access should be easier
to implement, but is not yet present.

In-Memory File Caching:
Currently, requesting a particular file for read causes the entire file
to be retrieved into tahoe-fuse.py memory before the read operation
returns!  This caching is reused for subsequent reads.  Beware large
files.  When transitioning to a finer-grained fuse api, this caching
should be replaced with straightforward calls to the wapi.  In my
opinion, the Tahoe node should do all the caching tricks, so that
extensions such as tahoe-fuse.py can be simple and thin.

Backgrounding Behavior:
When using the 2.5-5build1 Ubuntu package, and no other arguments
besides a mountpoint to tahoe-fuse.py, the process should remain in
the foreground and print debug information.  Other python-fuse
versions appear to alter this behavior and may fork the process to
the background and obscure the log output.  Bonus points to whoever
discovers the fate of these poor log messages in this case.

"Investigative Logging":
This prototype is designed to aid in further fuse development, so
currently *every* fuse interface call figures out the process from
which the file system request originates, then it figures out that
process's command line (this uses the /proc file system).  This is
handy for interactive inspection of what kinds of behavior invokes
which file system operations, but may not work for you.  To disable
this inspection, edit the source and comment out all of the
"@debugcall" [FIXME: double check python ref name] method decorators by
inserting a '#' so it looks like "#@debugcall" (without quotes).

Not-to-spec:
The current version was not implemented according to any spec and
makes quite a few dubious "guesses" for what data to pass the fuse
interface.  You may see bizarre values, which may potentially confuse
any processes visiting the files under the mount point.

Serial, blocking operations:
Most fuse operations result in one or more http calls to the WAPI.
These are serial and blocking (at least for the tested python-fuse
version 2.5-5build1), so access to this file system is quite
inefficient.


Good luck!
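The whole-file caching the README describes can be sketched in a few lines. This is a simplified model, not the removed tahoe-fuse.py code; `fetch` stands in for the HTTP GET against the wapi:

```python
class FileCache:
    """Whole-file read cache: the first read pulls the entire file into
    memory; later reads slice the cached bytes. Beware large files."""
    def __init__(self, fetch):
        self._fetch = fetch   # callable path -> bytes (stand-in for a wapi GET)
        self._contents = {}   # {path: bytes}

    def read(self, path, offset, length):
        data = self._contents.get(path)
        if data is None:
            data = self._fetch(path)      # entire file retrieved up front
            self._contents[path] = data
        return data[offset:offset + length]

    def release(self, path):
        self._contents.pop(path, None)
```

Note the slice is `offset:offset + length`; the `read` method in the removed tahoe-fuse.py below slices `[offset:length]`, which returns the wrong bytes for any nonzero offset.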
@@ -1,476 +0,0 @@
#! /usr/bin/env python
'''
Tahoe thin-client fuse module.

See the accompanying README for configuration/usage details.

Goals:

- Delegate to Tahoe webapi as much as possible.
- Thin rather than clever.  (Even when that means clunky.)


Warts:

- Reads cache entire file contents, violating the thinness goal.  Can we GET spans of files?
- Single threaded.


Road-map:
1. Add unit tests where possible with little code modification.
2. Make unit tests pass for a variety of python-fuse module versions.
3. Modify the design to make possible unit test coverage of larger portions of code.

Wishlist:
- Perhaps integrate cli aliases or root_dir.cap.
- Research pkg_resources; see if it can replace the try-import-except-import-error pattern.
- Switch to logging instead of homebrew logging.
'''


#import bindann
#bindann.install_exception_handler()

import sys, stat, os, errno, urllib, time

try:
    import simplejson
except ImportError, e:
    raise SystemExit('''\
Could not import simplejson, which is bundled with Tahoe.  Please
update your PYTHONPATH environment variable to include the tahoe
"support/lib/python<VERSION>/site-packages" directory.

If you run this from the Tahoe source directory, use this command:
PYTHONPATH="$PYTHONPATH:./support/lib/python%d.%d/site-packages/" python %s
''' % (sys.version_info[:2] + (' '.join(sys.argv),)))


try:
    import fuse
except ImportError, e:
    raise SystemExit('''\
Could not import fuse, the pythonic fuse bindings.  This dependency
of tahoe-fuse.py is *not* bundled with tahoe.  Please install it.
On debian/ubuntu systems run: sudo apt-get install python-fuse
''')

# FIXME: Check for non-working fuse versions here.
# FIXME: Make this work for all common python-fuse versions.

# FIXME: Currently uses the old, silly path-based (non-stateful) interface:
fuse.fuse_python_api = (0, 1) # Use the silly path-based api for now.


### Config:
TahoeConfigDir = '~/.tahoe'
MagicDevNumber = 42
UnknownSize = -1


def main():
    basedir = os.path.expanduser(TahoeConfigDir)

    for i, arg in enumerate(sys.argv):
        if arg == '--basedir':
            try:
                basedir = sys.argv[i+1]
                sys.argv[i:i+2] = []
            except IndexError:
                sys.argv = [sys.argv[0], '--help']

    log_init(basedir)
    log('Commandline: %r', sys.argv)

    fs = TahoeFS(basedir)
    fs.main()


### Utilities for debug:
_logfile = None # Private to log* functions.

def log_init(confdir):
    global _logfile

    logpath = os.path.join(confdir, 'logs', 'tahoe_fuse.log')
    _logfile = open(logpath, 'a')
    log('Log opened at: %s\n', time.strftime('%Y-%m-%d %H:%M:%S'))


def log(msg, *args):
    _logfile.write((msg % args) + '\n')
    _logfile.flush()


def trace_calls(m):
    def dbmeth(self, *a, **kw):
        pid = self.GetContext()['pid']
        log('[%d %r]\n%s%r%r', pid, get_cmdline(pid), m.__name__, a, kw)
        try:
            r = m(self, *a, **kw)
            if (type(r) is int) and (r < 0):
                log('-> -%s\n', errno.errorcode[-r],)
            else:
                repstr = repr(r)[:256]
                log('-> %s\n', repstr)
            return r
        except:
            sys.excepthook(*sys.exc_info())

    return dbmeth


def get_cmdline(pid):
    f = open('/proc/%d/cmdline' % pid, 'r')
    args = f.read().split('\0')
    f.close()
    assert args[-1] == ''
    return args[:-1]


class SystemError (Exception):
    def __init__(self, eno):
        self.eno = eno
        Exception.__init__(self, errno.errorcode[eno])

    @staticmethod
    def wrap_returns(meth):
        def wrapper(*args, **kw):
            try:
                return meth(*args, **kw)
            except SystemError, e:
                return -e.eno
        wrapper.__name__ = meth.__name__
        return wrapper


### Heart of the Matter:
class TahoeFS (fuse.Fuse):
    def __init__(self, confdir):
        log('Initializing with confdir = %r', confdir)
        fuse.Fuse.__init__(self)
        self.confdir = confdir

        self.flags = 0 # FIXME: What goes here?
        self.multithreaded = 0

        # silly path-based file handles.
        self.filecontents = {} # {path -> contents}

        self._init_url()
        self._init_rootdir()

    def _init_url(self):
        if os.path.exists(os.path.join(self.confdir, 'node.url')):
            self.url = file(os.path.join(self.confdir, 'node.url'), 'rb').read().strip()
            if not self.url.endswith('/'):
                self.url += '/'
        else:
            f = open(os.path.join(self.confdir, 'webport'), 'r')
            contents = f.read()
            f.close()
            fields = contents.split(':')
            proto, port = fields[:2]
            assert proto == 'tcp'
            port = int(port)
            self.url = 'http://localhost:%d' % (port,)

    def _init_rootdir(self):
        # For now we just use the same default as the CLI:
        rootdirfn = os.path.join(self.confdir, 'private', 'root_dir.cap')
        try:
            f = open(rootdirfn, 'r')
            cap = f.read().strip()
            f.close()
        except EnvironmentError, le:
            # FIXME: This user-friendly help message may be platform-dependent because it checks the exception description.
            if le.args[1].find('No such file or directory') != -1:
                raise SystemExit('%s requires a directory capability in %s, but it was not found.\n' % (sys.argv[0], rootdirfn))
            else:
                raise le

        self.rootdir = TahoeDir(self.url, canonicalize_cap(cap))

    def _get_node(self, path):
        assert path.startswith('/')
        if path == '/':
            return self.rootdir.resolve_path([])
        else:
            parts = path.split('/')[1:]
            return self.rootdir.resolve_path(parts)

    def _get_contents(self, path):
        contents = self.filecontents.get(path)
        if contents is None:
            node = self._get_node(path)
            contents = node.open().read()
            self.filecontents[path] = contents
        return contents

    @trace_calls
    @SystemError.wrap_returns
    def getattr(self, path):
        node = self._get_node(path)
        return node.getattr()

    @trace_calls
    @SystemError.wrap_returns
    def getdir(self, path):
        """
        return: [(name, typeflag), ... ]
        """
        node = self._get_node(path)
        return node.getdir()

    @trace_calls
    @SystemError.wrap_returns
    def mythread(self):
        return -errno.ENOSYS

    @trace_calls
    @SystemError.wrap_returns
    def chmod(self, path, mode):
        return -errno.ENOSYS

    @trace_calls
    @SystemError.wrap_returns
    def chown(self, path, uid, gid):
        return -errno.ENOSYS

    @trace_calls
    @SystemError.wrap_returns
    def fsync(self, path, isFsyncFile):
        return -errno.ENOSYS

    @trace_calls
    @SystemError.wrap_returns
    def link(self, target, link):
        return -errno.ENOSYS

    @trace_calls
    @SystemError.wrap_returns
    def mkdir(self, path, mode):
        return -errno.ENOSYS

    @trace_calls
    @SystemError.wrap_returns
    def mknod(self, path, mode, dev_ignored):
        return -errno.ENOSYS

    @trace_calls
    @SystemError.wrap_returns
    def open(self, path, mode):
        IgnoredFlags = os.O_RDONLY | os.O_NONBLOCK | os.O_SYNC | os.O_LARGEFILE
        # Note: IgnoredFlags are all ignored!
        for fname in dir(os):
            if fname.startswith('O_'):
                flag = getattr(os, fname)
                if flag & IgnoredFlags:
                    continue
                elif mode & flag:
                    log('Flag not supported: %s', fname)
                    raise SystemError(errno.ENOSYS)

        self._get_contents(path)
        return 0

    @trace_calls
    @SystemError.wrap_returns
    def read(self, path, length, offset):
        return self._get_contents(path)[offset:length]

    @trace_calls
    @SystemError.wrap_returns
    def release(self, path):
        del self.filecontents[path]
        return 0

    @trace_calls
    @SystemError.wrap_returns
    def readlink(self, path):
        return -errno.ENOSYS

    @trace_calls
    @SystemError.wrap_returns
    def rename(self, oldpath, newpath):
        return -errno.ENOSYS

    @trace_calls
    @SystemError.wrap_returns
    def rmdir(self, path):
        return -errno.ENOSYS

    #@trace_calls
    @SystemError.wrap_returns
    def statfs(self):
        return -errno.ENOSYS

    @trace_calls
    @SystemError.wrap_returns
    def symlink ( self, targetPath, linkPath ):
        return -errno.ENOSYS

    @trace_calls
    @SystemError.wrap_returns
    def truncate(self, path, size):
        return -errno.ENOSYS

    @trace_calls
    @SystemError.wrap_returns
    def unlink(self, path):
        return -errno.ENOSYS

    @trace_calls
    @SystemError.wrap_returns
    def utime(self, path, times):
        return -errno.ENOSYS


class TahoeNode (object):
    NextInode = 0

    @staticmethod
    def make(baseurl, uri):
        typefield = uri.split(':', 2)[1]
        # FIXME: is this check correct?
        if uri.find('URI:DIR2') != -1:
            return TahoeDir(baseurl, uri)
        else:
            return TahoeFile(baseurl, uri)

    def __init__(self, baseurl, uri):
        if not baseurl.endswith('/'):
            baseurl += '/'
        self.burl = baseurl
        self.uri = uri
        self.fullurl = '%suri/%s' % (self.burl, self.uri)
        self.inode = TahoeNode.NextInode
        TahoeNode.NextInode += 1

    def getattr(self):
        """
        - st_mode (protection bits)
        - st_ino (inode number)
        - st_dev (device)
        - st_nlink (number of hard links)
        - st_uid (user ID of owner)
        - st_gid (group ID of owner)
        - st_size (size of file, in bytes)
        - st_atime (time of most recent access)
        - st_mtime (time of most recent content modification)
        - st_ctime (platform dependent; time of most recent metadata change on Unix,
          or the time of creation on Windows).
        """
        # FIXME: Return metadata that isn't completely fabricated.
        return (self.get_mode(),
                self.inode,
                MagicDevNumber,
                self.get_linkcount(),
                os.getuid(),
                os.getgid(),
                self.get_size(),
                0,
                0,
                0)

    def get_metadata(self):
        f = self.open('?t=json')
        json = f.read()
        f.close()
        return simplejson.loads(json)

    def open(self, postfix=''):
        url = self.fullurl + postfix
        log('*** Fetching: %r', url)
        return urllib.urlopen(url)


class TahoeFile (TahoeNode):
    def __init__(self, baseurl, uri):
        #assert uri.split(':', 2)[1] in ('CHK', 'LIT'), `uri` # fails as of 0.7.0
        TahoeNode.__init__(self, baseurl, uri)

    # nonfuse:
    def get_mode(self):
        return stat.S_IFREG | 0400 # Read only regular file.

    def get_linkcount(self):
        return 1

    def get_size(self):
        rawsize = self.get_metadata()[1]['size']
        if type(rawsize) is not int: # FIXME: What about sizes which do not fit in python int?
            assert rawsize == u'?', `rawsize`
            return UnknownSize
        else:
            return rawsize

    def resolve_path(self, path):
        assert path == []
        return self


class TahoeDir (TahoeNode):
    def __init__(self, baseurl, uri):
        TahoeNode.__init__(self, baseurl, uri)

        self.mode = stat.S_IFDIR | 0500 # Read only directory.

    # FUSE:
    def getdir(self):
        d = [('.', self.get_mode()), ('..', self.get_mode())]
        for name, child in self.get_children().items():
            if name: # Just ignore this crazy case!
                d.append((name, child.get_mode()))
        return d

    # nonfuse:
    def get_mode(self):
        return stat.S_IFDIR | 0500 # Read only directory.

    def get_linkcount(self):
        return len(self.getdir())

    def get_size(self):
        return 2 ** 12 # FIXME: What do we return here? len(self.get_metadata())

    def resolve_path(self, path):
        assert type(path) is list

        if path:
            head = path[0]
            child = self.get_child(head)
            return child.resolve_path(path[1:])
        else:
            return self

    def get_child(self, name):
        c = self.get_children()
        return c[name]

    def get_children(self):
        flag, md = self.get_metadata()
        assert flag == 'dirnode'

        c = {}
        for name, (childflag, childmd) in md['children'].items():
            if childflag == 'dirnode':
                cls = TahoeDir
            else:
                cls = TahoeFile

            c[str(name)] = cls(self.burl, childmd['ro_uri'])
        return c


def canonicalize_cap(cap):
    cap = urllib.unquote(cap)
    i = cap.find('URI:')
    assert i != -1, 'A cap must contain "URI:...", but this does not: ' + cap
    return cap[i:]


if __name__ == '__main__':
    main()
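The `SystemError`/`wrap_returns` pair in the removed module above (note it shadows Python's built-in `SystemError`) implements the old path-based FUSE convention of signalling failure by returning a negative errno. A Python 3 sketch of the same pattern, with a non-shadowing name:

```python
import errno, functools

class FuseError(Exception):
    """Carries an errno; the decorator converts it to FUSE's negative-errno convention."""
    def __init__(self, eno):
        self.eno = eno
        super().__init__(errno.errorcode[eno])

def wrap_returns(meth):
    @functools.wraps(meth)
    def wrapper(*args, **kw):
        try:
            return meth(*args, **kw)
        except FuseError as e:
            return -e.eno   # e.g. -errno.ENOSYS for unimplemented calls
    return wrapper

@wrap_returns
def chmod(path, mode):
    # Read-only filesystem: mutating operations are unimplemented.
    raise FuseError(errno.ENOSYS)
```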
@@ -1,36 +0,0 @@
This announcement is archived in the tahoe-dev mailing list archive:

http://allmydata.org/pipermail/tahoe-dev/2008-March/000465.html

[tahoe-dev] Another FUSE interface
Armin Rigo arigo at tunes.org
Sat Mar 29 04:35:36 PDT 2008

* Previous message: [tahoe-dev] announcing allmydata.org "Tahoe", v1.0
* Next message: [tahoe-dev] convergent encryption reconsidered -- salting and key-strengthening
* Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]

Hi all,

I implemented for fun another Tahoe-to-FUSE interface using my own set
of FUSE bindings.  If you are interested, you can check out the
following subversion directory:

    http://codespeak.net/svn/user/arigo/hack/pyfuse

tahoe.py is a 100-lines, half-an-hour-job interface to Tahoe, limited to
read-only at the moment.  The rest of the directory contains PyFuse, and
many other small usage examples.  PyFuse is a pure Python FUSE daemon
(no messy linking issues, no dependencies).

A bientot,

Armin Rigo
@@ -1,84 +0,0 @@
from UserDict import DictMixin


DELETED = object()


class OrderedDict(DictMixin):

    def __init__(self, *args, **kwds):
        self.clear()
        self.update(*args, **kwds)

    def clear(self):
        self._keys = []
        self._content = {}    # {key: (index, value)}
        self._deleted = 0

    def copy(self):
        return OrderedDict(self)

    def __iter__(self):
        for key in self._keys:
            if key is not DELETED:
                yield key

    def keys(self):
        return [key for key in self._keys if key is not DELETED]

    def popitem(self):
        while 1:
            try:
                k = self._keys.pop()
            except IndexError:
                raise KeyError, 'OrderedDict is empty'
            if k is not DELETED:
                return k, self._content.pop(k)[1]

    def __getitem__(self, key):
        index, value = self._content[key]
        return value

    def __setitem__(self, key, value):
        try:
            index, oldvalue = self._content[key]
        except KeyError:
            index = len(self._keys)
            self._keys.append(key)
        self._content[key] = index, value

    def __delitem__(self, key):
        index, oldvalue = self._content.pop(key)
        self._keys[index] = DELETED
        if self._deleted <= len(self._content):
            self._deleted += 1
        else:
            # compress
            newkeys = []
            for k in self._keys:
                if k is not DELETED:
                    i, value = self._content[k]
                    self._content[k] = len(newkeys), value
                    newkeys.append(k)
            self._keys = newkeys
            self._deleted = 0

    def __len__(self):
        return len(self._content)

    def __repr__(self):
        res = ['%r: %r' % (key, self._content[key][1]) for key in self]
        return 'OrderedDict(%s)' % (', '.join(res),)

    def __cmp__(self, other):
        if not isinstance(other, OrderedDict):
            return NotImplemented
        keys = self.keys()
        r = cmp(keys, other.keys())
        if r:
            return r
        for k in keys:
            r = cmp(self[k], other[k])
            if r:
                return r
        return 0
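The deletion strategy in the OrderedDict above is worth spelling out: removing a key leaves a DELETED tombstone in the key list, and the list is only compacted once tombstones outnumber live entries, keeping `__delitem__` amortized O(1) instead of O(n) per call. A Python 3 sketch of the same idea (purely illustrative; modern Python dicts already preserve insertion order):

```python
_DELETED = object()  # tombstone marker, compared by identity

class LazyOrderedDict:
    def __init__(self):
        self._keys = []
        self._content = {}   # {key: (index, value)}
        self._deleted = 0

    def __setitem__(self, key, value):
        try:
            index, _ = self._content[key]
        except KeyError:
            index = len(self._keys)
            self._keys.append(key)
        self._content[key] = (index, value)

    def __getitem__(self, key):
        return self._content[key][1]

    def __delitem__(self, key):
        index, _ = self._content.pop(key)
        self._keys[index] = _DELETED          # leave a tombstone
        self._deleted += 1
        if self._deleted > len(self._content):  # compact only when half dead
            self._keys = [k for k in self._keys if k is not _DELETED]
            for i, k in enumerate(self._keys):
                self._content[k] = (i, self._content[k][1])
            self._deleted = 0

    def keys(self):
        return [k for k in self._keys if k is not _DELETED]
```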
@@ -1,281 +0,0 @@
import os, stat, py, select
import inspect
from objectfs import ObjectFs


BLOCKSIZE = 8192


def remote_runner(BLOCKSIZE):
    import sys, select, os, struct
    stream = None
    while True:
        while stream is not None:
            iwtd, owtd, ewtd = select.select([0], [1], [])
            if iwtd:
                break
            pos = stream.tell()
            data = stream.read(BLOCKSIZE)
            res = ('R', path, pos, len(data))
            sys.stdout.write('%r\n%s' % (res, data))
            if len(data) < BLOCKSIZE:
                stream = None

        stream = None
        msg = eval(sys.stdin.readline())
        if msg[0] == 'L':
            path = msg[1]
            names = os.listdir(path)
            res = []
            for name in names:
                try:
                    st = os.stat(os.path.join(path, name))
                except OSError:
                    continue
                res.append((name, st.st_mode, st.st_size))
            res = msg + (res,)
            sys.stdout.write('%s\n' % (res,))
        elif msg[0] == 'R':
            path, pos = msg[1:]
            f = open(path, 'rb')
            f.seek(pos)
            data = f.read(BLOCKSIZE)
            res = msg + (len(data),)
            sys.stdout.write('%r\n%s' % (res, data))
        elif msg[0] == 'S':
            path, pos = msg[1:]
            stream = open(path, 'rb')
            stream.seek(pos)
        #elif msg[0] == 'C':
        #    stream = None


class CacheFs(ObjectFs):
    MOUNT_OPTIONS = {'max_read': BLOCKSIZE}

    def __init__(self, localdir, remotehost, remotedir):
        src = inspect.getsource(remote_runner)
        src += '\n\nremote_runner(%d)\n' % BLOCKSIZE

        remotecmd = 'python -u -c "exec input()"'
        cmdline = [remotehost, remotecmd]
        # XXX Unix style quoting
        for i in range(len(cmdline)):
            cmdline[i] = "'" + cmdline[i].replace("'", "'\\''") + "'"
        cmd = 'ssh -C'
        cmdline.insert(0, cmd)

        child_in, child_out = os.popen2(' '.join(cmdline), bufsize=0)
        child_in.write('%r\n' % (src,))

        control = Controller(child_in, child_out)
        ObjectFs.__init__(self, CacheDir(localdir, remotedir, control))


class Controller:
    def __init__(self, child_in, child_out):
        self.child_in = child_in
        self.child_out = child_out
        self.cache = {}
        self.streaming = None

    def next_answer(self):
        answer = eval(self.child_out.readline())
        #print 'A', answer
        if answer[0] == 'R':
            remotefn, pos, length = answer[1:]
            data = self.child_out.read(length)
            self.cache[remotefn, pos] = data
        return answer

    def wait_answer(self, query):
        self.streaming = None
        #print 'Q', query
        self.child_in.write('%r\n' % (query,))
        while True:
            answer = self.next_answer()
            if answer[:len(query)] == query:
                return answer[len(query):]

    def listdir(self, remotedir):
        query = ('L', remotedir)
        res, = self.wait_answer(query)
        return res

    def wait_for_block(self, remotefn, pos):
        key = remotefn, pos
        while key not in self.cache:
            self.next_answer()
        return self.cache[key]

    def peek_for_block(self, remotefn, pos):
        key = remotefn, pos
        while key not in self.cache:
            iwtd, owtd, ewtd = select.select([self.child_out], [], [], 0)
            if not iwtd:
                return None
            self.next_answer()
        return self.cache[key]

    def cached_block(self, remotefn, pos):
        key = remotefn, pos
        return self.cache.get(key)

    def start_streaming(self, remotefn, pos):
        if remotefn != self.streaming:
            while (remotefn, pos) in self.cache:
                pos += BLOCKSIZE
            query = ('S', remotefn, pos)
            #print 'Q', query
            self.child_in.write('%r\n' % (query,))
            self.streaming = remotefn

    def read_blocks(self, remotefn, poslist):
        lst = ['%r\n' % (('R', remotefn, pos),)
               for pos in poslist if (remotefn, pos) not in self.cache]
        if lst:
            self.streaming = None
            #print 'Q', '+ '.join(lst)
            self.child_in.write(''.join(lst))

    def clear_cache(self, remotefn):
        for key in self.cache.keys():
            if key[0] == remotefn:
                del self.cache[key]


class CacheDir:
    def __init__(self, localdir, remotedir, control, size=0):
        self.localdir = localdir
        self.remotedir = remotedir
        self.control = control
        self.entries = None

    def listdir(self):
        if self.entries is None:
            self.entries = []
            for name, st_mode, st_size in self.control.listdir(self.remotedir):
                if stat.S_ISDIR(st_mode):
                    cls = CacheDir
                else:
                    cls = CacheFile
                obj = cls(os.path.join(self.localdir, name),
                          os.path.join(self.remotedir, name),
                          self.control,
                          st_size)
                self.entries.append((name, obj))
        return self.entries


class CacheFile:
    def __init__(self, localfn, remotefn, control, size):
        self.localfn = localfn
        self.remotefn = remotefn
        self.control = control
        self.st_size = size

    def size(self):
        return self.st_size

    def read(self):
        try:
            st = os.stat(self.localfn)
        except OSError:
            pass
        else:
            if st.st_size == self.st_size:   # fully cached
                return open(self.localfn, 'rb')
            os.unlink(self.localfn)
        lpath = py.path.local(self.partial())
        lpath.ensure(file=1)
        f = open(self.partial(), 'r+b')
        return DumpFile(self, f)

    def partial(self):
        return self.localfn + '.partial~'

    def complete(self):
        try:
            os.rename(self.partial(), self.localfn)
        except OSError:
            pass


class DumpFile:

    def __init__(self, cf, f):
        self.cf = cf
        self.f = f
        self.pos = 0

    def seek(self, npos):
        self.pos = npos

    def read(self, count):
        control = self.cf.control
        self.f.seek(self.pos)
        buffer = self.f.read(count)
        self.pos += len(buffer)
        count -= len(buffer)

        self.f.seek(0, 2)
        curend = self.f.tell()

        if count > 0:

            while self.pos > curend:
                curend &= -BLOCKSIZE
                data = control.peek_for_block(self.cf.remotefn, curend)
                if data is None:
                    break
                self.f.seek(curend)
                self.f.write(data)
                curend += len(data)
                if len(data) < BLOCKSIZE:
                    break

            start = max(self.pos, curend) & (-BLOCKSIZE)
            end = (self.pos + count + BLOCKSIZE-1) & (-BLOCKSIZE)
            poslist = range(start, end, BLOCKSIZE)

            if self.pos <= curend:
                control.start_streaming(self.cf.remotefn, start)
                self.f.seek(start)
                for p in poslist:
                    data = control.wait_for_block(self.cf.remotefn, p)
                    assert self.f.tell() == p
                    self.f.write(data)
                    if len(data) < BLOCKSIZE:
                        break

                curend = self.f.tell()
                while curend < self.cf.st_size:
                    curend &= -BLOCKSIZE
                    data = control.cached_block(self.cf.remotefn, curend)
                    if data is None:
                        break
                    assert self.f.tell() == curend
                    self.f.write(data)
                    curend += len(data)
                else:
                    self.cf.complete()
                    control.clear_cache(self.cf.remotefn)

                self.f.seek(self.pos)
                buffer += self.f.read(count)

            else:
                control.read_blocks(self.cf.remotefn, poslist)
                result = []
                for p in poslist:
                    data = control.wait_for_block(self.cf.remotefn, p)
                    result.append(data)
                    if len(data) < BLOCKSIZE:
                        break
                data = ''.join(result)
                buffer += data[self.pos-start:self.pos-start+count]

        else:
            if self.pos + 60000 > curend:
                curend &= -BLOCKSIZE
                control.start_streaming(self.cf.remotefn, curend)

        return buffer
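The block arithmetic in `DumpFile.read` above relies on `& -BLOCKSIZE` to round positions down to a block boundary, which works because BLOCKSIZE is a power of two. A small self-contained sketch of the range computation it performs (the helper name is hypothetical):

```python
BLOCKSIZE = 8192  # matches the constant in the code above

def block_range(pos, count, blocksize=BLOCKSIZE):
    """Block-aligned start positions covering bytes pos..pos+count,
    using the same power-of-two mask arithmetic as DumpFile.read."""
    start = pos & -blocksize                         # round down
    end = (pos + count + blocksize - 1) & -blocksize  # round up
    return list(range(start, end, blocksize))
```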
@@ -1,71 +0,0 @@
import sys, os, Queue, atexit

dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
dir = os.path.join(dir, 'pypeers')
if dir not in sys.path:
    sys.path.append(dir)
del dir

from greensock import *
import threadchannel


def _read_from_kernel(handler):
    while True:
        msg = read(handler.fd, handler.MAX_READ)
        if not msg:
            print >> sys.stderr, "out-kernel connexion closed"
            break
        autogreenlet(handler.handle_message, msg)

def add_handler(handler):
    autogreenlet(_read_from_kernel, handler)
    atexit.register(handler.close)

# ____________________________________________________________

THREAD_QUEUE = None

def thread_runner(n):
    while True:
        #print 'thread runner %d waiting' % n
        operation, answer = THREAD_QUEUE.get()
        #print 'thread_runner %d: %r' % (n, operation)
        try:
            res = True, operation()
        except Exception:
            res = False, sys.exc_info()
        #print 'thread_runner %d: got %d bytes' % (n, len(res or ''))
        answer.send(res)


def start_bkgnd_thread():
    global THREAD_QUEUE, THREAD_LOCK
    import thread
    threadchannel.startup()
    THREAD_LOCK = thread.allocate_lock()
    THREAD_QUEUE = Queue.Queue()
    for i in range(4):
        thread.start_new_thread(thread_runner, (i,))

def wget(*args, **kwds):
    from wget import wget

    def operation():
        kwds['unlock'] = THREAD_LOCK
        THREAD_LOCK.acquire()
        try:
            return wget(*args, **kwds)
        finally:
            THREAD_LOCK.release()

    if THREAD_QUEUE is None:
        start_bkgnd_thread()
    answer = threadchannel.ThreadChannel()
    THREAD_QUEUE.put((operation, answer))
    ok, res = answer.receive()
    if not ok:
        typ, value, tb = res
        raise typ, value, tb
    #print 'wget returns %d bytes' % (len(res or ''),)
    return res
@@ -1,387 +0,0 @@
from kernel import *
import os, errno, sys

def fuse_mount(mountpoint, opts=None):
    if not isinstance(mountpoint, str):
        raise TypeError
    if opts is not None and not isinstance(opts, str):
        raise TypeError
    import dl
    try:
        fuse = dl.open('libfuse.so')
    except dl.error:
        fuse = dl.open('libfuse.so.2')
    if fuse.sym('fuse_mount_compat22'):
        fnname = 'fuse_mount_compat22'
    else:
        fnname = 'fuse_mount'   # older versions of libfuse.so
    return fuse.call(fnname, mountpoint, opts)

class Handler(object):
    __system = os.system
    mountpoint = fd = None
    __in_header_size = fuse_in_header.calcsize()
    __out_header_size = fuse_out_header.calcsize()
    MAX_READ = FUSE_MAX_IN

    def __init__(self, mountpoint, filesystem, logfile='STDERR', **opts1):
        opts = getattr(filesystem, 'MOUNT_OPTIONS', {}).copy()
        opts.update(opts1)
        if opts:
            opts = opts.items()
            opts.sort()
            opts = ' '.join(['%s=%s' % item for item in opts])
        else:
            opts = None
        fd = fuse_mount(mountpoint, opts)
        if fd < 0:
            raise IOError("mount failed")
        self.fd = fd
        if logfile == 'STDERR':
            logfile = sys.stderr
        self.logfile = logfile
        if self.logfile:
            print >> self.logfile, '* mounted at', mountpoint
        self.mountpoint = mountpoint
        self.filesystem = filesystem
        self.handles = {}
        self.nexth = 1

    def __del__(self):
        if self.fd is not None:
            os.close(self.fd)
            self.fd = None
        if self.mountpoint:
            cmd = "fusermount -u '%s'" % self.mountpoint.replace("'", r"'\''")
            self.mountpoint = None
            if self.logfile:
                print >> self.logfile, '*', cmd
            self.__system(cmd)

    close = __del__

    def loop_forever(self):
        while True:
            try:
                msg = os.read(self.fd, FUSE_MAX_IN)
            except OSError, ose:
                if ose.errno == errno.ENODEV:
                    # on hardy, at least, this is what happens upon fusermount -u
                    #raise EOFError("out-kernel connection closed")
                    return
            if not msg:
                #raise EOFError("out-kernel connection closed")
                return
            self.handle_message(msg)

    def handle_message(self, msg):
        headersize = self.__in_header_size
        req = fuse_in_header(msg[:headersize])
        assert req.len == len(msg)
        name = req.opcode
        try:
            try:
                name = fuse_opcode2name[req.opcode]
                meth = getattr(self, name)
            except (IndexError, AttributeError):
                raise NotImplementedError
            #if self.logfile:
            #    print >> self.logfile, '%s(%d)' % (name, req.nodeid)
            reply = meth(req, msg[headersize:])
            #if self.logfile:
            #    print >> self.logfile, '   >>', repr(reply)
        except NotImplementedError:
            if self.logfile:
                print >> self.logfile, '%s: not implemented' % (name,)
            self.send_reply(req, err=errno.ENOSYS)
        except EnvironmentError, e:
            if self.logfile:
                print >> self.logfile, '%s: %s' % (name, e)
            self.send_reply(req, err = e.errno or errno.ESTALE)
        except NoReply:
            pass
        else:
            self.send_reply(req, reply)

    def send_reply(self, req, reply=None, err=0):
        assert 0 <= err < 1000
        if reply is None:
            reply = ''
        elif not isinstance(reply, str):
            reply = reply.pack()
        f = fuse_out_header(unique = req.unique,
                            error  = -err,
                            len    = self.__out_header_size + len(reply))
        data = f.pack() + reply
        while data:
            count = os.write(self.fd, data)
            if not count:
                raise EOFError("in-kernel connection closed")
            data = data[count:]

    def notsupp_or_ro(self):
        if hasattr(self.filesystem, "modified"):
            raise IOError(errno.ENOSYS, "not supported")
        else:
            raise IOError(errno.EROFS, "read-only file system")

    # ____________________________________________________________

    def FUSE_INIT(self, req, msg):
        msg = fuse_init_in_out(msg[:8])
        if self.logfile:
            print >> self.logfile, 'INIT: %d.%d' % (msg.major, msg.minor)
        return fuse_init_in_out(major = FUSE_KERNEL_VERSION,
                                minor = FUSE_KERNEL_MINOR_VERSION)

    def FUSE_GETATTR(self, req, msg):
        node = self.filesystem.getnode(req.nodeid)
        attr, valid = self.filesystem.getattr(node)
        return fuse_attr_out(attr_valid = valid,
                             attr = attr)

    def FUSE_SETATTR(self, req, msg):
        if not hasattr(self.filesystem, 'setattr'):
            self.notsupp_or_ro()
        msg = fuse_setattr_in(msg)
        if msg.valid & FATTR_MODE:  mode = msg.attr.mode & 0777
        else:                       mode = None
        if msg.valid & FATTR_UID:   uid = msg.attr.uid
        else:                       uid = None
        if msg.valid & FATTR_GID:   gid = msg.attr.gid
        else:                       gid = None
        if msg.valid & FATTR_SIZE:  size = msg.attr.size
        else:                       size = None
        if msg.valid & FATTR_ATIME: atime = msg.attr.atime
        else:                       atime = None
        if msg.valid & FATTR_MTIME: mtime = msg.attr.mtime
        else:                       mtime = None
        node = self.filesystem.getnode(req.nodeid)
        self.filesystem.setattr(node, mode, uid, gid,
                                size, atime, mtime)
        attr, valid = self.filesystem.getattr(node)
        return fuse_attr_out(attr_valid = valid,
                             attr = attr)

    def FUSE_RELEASE(self, req, msg):
        msg = fuse_release_in(msg, truncate=True)
        try:
            del self.handles[msg.fh]
        except KeyError:
            raise IOError(errno.EBADF, msg.fh)
    FUSE_RELEASEDIR = FUSE_RELEASE

    def FUSE_OPENDIR(self, req, msg):
        #msg = fuse_open_in(msg)
        node = self.filesystem.getnode(req.nodeid)
        attr, valid = self.filesystem.getattr(node)
        if mode2type(attr.mode) != TYPE_DIR:
            raise IOError(errno.ENOTDIR, node)
        fh = self.nexth
        self.nexth += 1
        self.handles[fh] = True, '', node
        return fuse_open_out(fh = fh)

    def FUSE_READDIR(self, req, msg):
        msg = fuse_read_in(msg)
        try:
            isdir, data, node = self.handles[msg.fh]
            if not isdir:
                raise KeyError    # not a dir handle
        except KeyError:
            raise IOError(errno.EBADF, msg.fh)
        if msg.offset == 0:
            # start or rewind
            d_entries = []
            off = 0
            for name, type in self.filesystem.listdir(node):
                off += fuse_dirent.calcsize(len(name))
                d_entry = fuse_dirent(ino  = INVALID_INO,
                                      off  = off,
                                      type = type,
                                      name = name)
                d_entries.append(d_entry)
            data = ''.join([d.pack() for d in d_entries])
            self.handles[msg.fh] = True, data, node
        return data[msg.offset:msg.offset+msg.size]

    def replyentry(self, (subnodeid, valid1)):
        subnode = self.filesystem.getnode(subnodeid)
        attr, valid2 = self.filesystem.getattr(subnode)
        return fuse_entry_out(nodeid = subnodeid,
                              entry_valid = valid1,
                              attr_valid = valid2,
                              attr = attr)

    def FUSE_LOOKUP(self, req, msg):
        filename = c2pystr(msg)
        dirnode = self.filesystem.getnode(req.nodeid)
        return self.replyentry(self.filesystem.lookup(dirnode, filename))

    def FUSE_OPEN(self, req, msg, mask=os.O_RDONLY|os.O_WRONLY|os.O_RDWR):
        msg = fuse_open_in(msg)
        node = self.filesystem.getnode(req.nodeid)
        attr, valid = self.filesystem.getattr(node)
        if mode2type(attr.mode) != TYPE_REG:
            raise IOError(errno.EPERM, node)
        f = self.filesystem.open(node, msg.flags & mask)
        if isinstance(f, tuple):
            f, open_flags = f
        else:
            open_flags = 0
        fh = self.nexth
        self.nexth += 1
        self.handles[fh] = False, f, node
        return fuse_open_out(fh = fh, open_flags = open_flags)

    def FUSE_READ(self, req, msg):
        msg = fuse_read_in(msg)
        try:
            isdir, f, node = self.handles[msg.fh]
            if isdir:
                raise KeyError
        except KeyError:
            raise IOError(errno.EBADF, msg.fh)
        f.seek(msg.offset)
        return f.read(msg.size)

    def FUSE_WRITE(self, req, msg):
        if not hasattr(self.filesystem, 'modified'):
            raise IOError(errno.EROFS, "read-only file system")
        msg, data = fuse_write_in.from_head(msg)
        try:
            isdir, f, node = self.handles[msg.fh]
            if isdir:
                raise KeyError
        except KeyError:
            raise IOError(errno.EBADF, msg.fh)
        f.seek(msg.offset)
        f.write(data)
        self.filesystem.modified(node)
        return fuse_write_out(size = len(data))

    def FUSE_MKNOD(self, req, msg):
        if not hasattr(self.filesystem, 'mknod'):
            self.notsupp_or_ro()
        msg, filename = fuse_mknod_in.from_param(msg)
        node = self.filesystem.getnode(req.nodeid)
        return self.replyentry(self.filesystem.mknod(node, filename, msg.mode))

    def FUSE_MKDIR(self, req, msg):
        if not hasattr(self.filesystem, 'mkdir'):
            self.notsupp_or_ro()
        msg, filename = fuse_mkdir_in.from_param(msg)
        node = self.filesystem.getnode(req.nodeid)
        return self.replyentry(self.filesystem.mkdir(node, filename, msg.mode))

    def FUSE_SYMLINK(self, req, msg):
        if not hasattr(self.filesystem, 'symlink'):
            self.notsupp_or_ro()
        linkname, target = c2pystr2(msg)
        node = self.filesystem.getnode(req.nodeid)
        return self.replyentry(self.filesystem.symlink(node, linkname, target))

    #def FUSE_LINK(self, req, msg):
    #    ...

    def FUSE_UNLINK(self, req, msg):
        if not hasattr(self.filesystem, 'unlink'):
            self.notsupp_or_ro()
        filename = c2pystr(msg)
        node = self.filesystem.getnode(req.nodeid)
        self.filesystem.unlink(node, filename)

    def FUSE_RMDIR(self, req, msg):
        if not hasattr(self.filesystem, 'rmdir'):
            self.notsupp_or_ro()
        dirname = c2pystr(msg)
        node = self.filesystem.getnode(req.nodeid)
        self.filesystem.rmdir(node, dirname)

    def FUSE_FORGET(self, req, msg):
        if hasattr(self.filesystem, 'forget'):
            self.filesystem.forget(req.nodeid)
        raise NoReply

    def FUSE_READLINK(self, req, msg):
        if not hasattr(self.filesystem, 'readlink'):
            raise IOError(errno.ENOSYS, "readlink not supported")
        node = self.filesystem.getnode(req.nodeid)
        target = self.filesystem.readlink(node)
        return target

    def FUSE_RENAME(self, req, msg):
        if not hasattr(self.filesystem, 'rename'):
            self.notsupp_or_ro()
        msg, oldname, newname = fuse_rename_in.from_param2(msg)
        oldnode = self.filesystem.getnode(req.nodeid)
        newnode = self.filesystem.getnode(msg.newdir)
        self.filesystem.rename(oldnode, oldname, newnode, newname)

    def getxattrs(self, nodeid):
        if not hasattr(self.filesystem, 'getxattrs'):
            raise IOError(errno.ENOSYS, "xattrs not supported")
        node = self.filesystem.getnode(nodeid)
        return self.filesystem.getxattrs(node)

    def FUSE_LISTXATTR(self, req, msg):
        names = self.getxattrs(req.nodeid).keys()
        names = ['user.' + name for name in names]
        totalsize = 0
        for name in names:
            totalsize += len(name)+1
        msg = fuse_getxattr_in(msg)
        if msg.size > 0:
            if msg.size < totalsize:
                raise IOError(errno.ERANGE, "buffer too small")
            names.append('')
            return '\x00'.join(names)
        else:
            return fuse_getxattr_out(size=totalsize)

    def FUSE_GETXATTR(self, req, msg):
        xattrs = self.getxattrs(req.nodeid)
        msg, name = fuse_getxattr_in.from_param(msg)
        if not name.startswith('user.'):    # ENODATA == ENOATTR
            raise IOError(errno.ENODATA, "only supports 'user.' xattrs, "
                                         "got %r" % (name,))
        name = name[5:]
        try:
            value = xattrs[name]
        except KeyError:
            raise IOError(errno.ENODATA, "no such xattr")    # == ENOATTR
        value = str(value)
        if msg.size > 0:
            if msg.size < len(value):
                raise IOError(errno.ERANGE, "buffer too small")
            return value
        else:
            return fuse_getxattr_out(size=len(value))

    def FUSE_SETXATTR(self, req, msg):
        xattrs = self.getxattrs(req.nodeid)
        msg, name, value = fuse_setxattr_in.from_param_head(msg)
        assert len(value) == msg.size
        # XXX msg.flags ignored
        if not name.startswith('user.'):    # ENODATA == ENOATTR
            raise IOError(errno.ENODATA, "only supports 'user.' xattrs")
        name = name[5:]
        try:
            xattrs[name] = value
        except KeyError:
            raise IOError(errno.ENODATA, "cannot set xattr")    # == ENOATTR

    def FUSE_REMOVEXATTR(self, req, msg):
        xattrs = self.getxattrs(req.nodeid)
        name = c2pystr(msg)
        if not name.startswith('user.'):    # ENODATA == ENOATTR
            raise IOError(errno.ENODATA, "only supports 'user.' xattrs")
        name = name[5:]
        try:
            del xattrs[name]
        except KeyError:
            raise IOError(errno.ENODATA, "cannot delete xattr")    # == ENOATTR


class NoReply(Exception):
    pass
@@ -1,107 +0,0 @@
import os, re, urlparse
from handler import Handler
from objectfs import ObjectFs


class Root:
    def __init__(self):
        self.entries = {'gg': GoogleRoot()}
    def listdir(self):
        return self.entries.keys()
    def join(self, hostname):
        if hostname in self.entries:
            return self.entries[hostname]
        if '.' not in hostname:
            raise KeyError
        result = HtmlNode('http://%s/' % (hostname,))
        self.entries[hostname] = result
        return result


class UrlNode:
    data = None

    def __init__(self, url):
        self.url = url

    def getdata(self):
        if self.data is None:
            print self.url
            g = os.popen("lynx -source %r" % (self.url,), 'r')
            self.data = g.read()
            g.close()
        return self.data


class HtmlNode(UrlNode):
    r_links  = re.compile(r'<a\s[^>]*href="([^"]+)"[^>]*>(.*?)</a>',
                          re.IGNORECASE | re.DOTALL)
    r_images = re.compile(r'<img\s[^>]*src="([^"]+[.]jpg)"', re.IGNORECASE)

    def format(self, text, index,
               TRANSTBL = ''.join([(32<=c<127 and c!=ord('/'))
                                   and chr(c) or '_'
                                   for c in range(256)])):
        return text.translate(TRANSTBL)

    def listdir(self):
        data = self.getdata()

        seen = {}
        def uniquename(name):
            name = self.format(name, len(seen))
            if name == '' or name.startswith('.'):
                name = '_' + name
            basename = name
            i = 1
            while name in seen:
                i += 1
                name = '%s_%d' % (basename, i)
            seen[name] = True
            return name

        for link, text in self.r_links.findall(data):
            url = urlparse.urljoin(self.url, link)
            yield uniquename(text), HtmlNode(url)

        for link in self.r_images.findall(data):
            text = os.path.basename(link)
            url = urlparse.urljoin(self.url, link)
            yield uniquename(text), RawNode(url)

        yield '.source', RawNode(self.url)


class RawNode(UrlNode):

    def read(self):
        return self.getdata()

    def size(self):
        if self.data:
            return len(self.data)
        else:
            return None


class GoogleRoot:
    def join(self, query):
        return GoogleSearch(query)

class GoogleSearch(HtmlNode):
    r_links = re.compile(r'<a\sclass=l\s[^>]*href="([^"]+)"[^>]*>(.*?)</a>',
                         re.IGNORECASE | re.DOTALL)

    def __init__(self, query):
        self.url = 'http://www.google.com/search?q=' + query

    def format(self, text, index):
        text = text.replace('<b>', '').replace('</b>', '')
        text = HtmlNode.format(self, text, index)
        return '%d. %s' % (index, text)


if __name__ == '__main__':
    root = Root()
    handler = Handler('/home/arigo/mnt', ObjectFs(root))
    handler.loop_forever()
@@ -1,405 +0,0 @@
from struct import pack, unpack, calcsize
import stat

class Struct(object):
    __slots__ = []

    def __init__(self, data=None, truncate=False, **fields):
        if data is not None:
            if truncate:
                data = data[:self.calcsize()]
            self.unpack(data)
        for key, value in fields.items():
            setattr(self, key, value)

    def unpack(self, data):
        data = unpack(self.__types__, data)
        for key, value in zip(self.__slots__, data):
            setattr(self, key, value)

    def pack(self):
        return pack(self.__types__, *[getattr(self, k, 0)
                                      for k in self.__slots__])

    def calcsize(cls):
        return calcsize(cls.__types__)
    calcsize = classmethod(calcsize)

    def __repr__(self):
        result = ['%s=%r' % (name, getattr(self, name, None))
                  for name in self.__slots__]
        return '<%s %s>' % (self.__class__.__name__, ', '.join(result))

    def from_param(cls, msg):
        limit = cls.calcsize()
        zero = msg.find('\x00', limit)
        assert zero >= 0
        return cls(msg[:limit]), msg[limit:zero]
    from_param = classmethod(from_param)

    def from_param2(cls, msg):
        limit = cls.calcsize()
        zero1 = msg.find('\x00', limit)
        assert zero1 >= 0
        zero2 = msg.find('\x00', zero1+1)
        assert zero2 >= 0
        return cls(msg[:limit]), msg[limit:zero1], msg[zero1+1:zero2]
    from_param2 = classmethod(from_param2)

    def from_head(cls, msg):
        limit = cls.calcsize()
        return cls(msg[:limit]), msg[limit:]
    from_head = classmethod(from_head)

    def from_param_head(cls, msg):
        limit = cls.calcsize()
        zero = msg.find('\x00', limit)
        assert zero >= 0
        return cls(msg[:limit]), msg[limit:zero], msg[zero+1:]
    from_param_head = classmethod(from_param_head)

class StructWithAttr(Struct):

    def unpack(self, data):
        limit = -fuse_attr.calcsize()
        super(StructWithAttr, self).unpack(data[:limit])
        self.attr = fuse_attr(data[limit:])

    def pack(self):
        return super(StructWithAttr, self).pack() + self.attr.pack()

    def calcsize(cls):
        return super(StructWithAttr, cls).calcsize() + fuse_attr.calcsize()
    calcsize = classmethod(calcsize)


def _mkstruct(name, c, base=Struct):
    typ2code = {
        '__u32': 'I',
        '__s32': 'i',
        '__u64': 'Q',
        '__s64': 'q'}
    slots = []
    types = ['=']
    for line in c.split('\n'):
        line = line.strip()
        if line:
            line, tail = line.split(';', 1)
            typ, nam = line.split()
            slots.append(nam)
            types.append(typ2code[typ])
    cls = type(name, (base,), {'__slots__': slots,
                               '__types__': ''.join(types)})
    globals()[name] = cls

class timeval(object):

    def __init__(self, attr1, attr2):
        self.sec = attr1
        self.nsec = attr2

    def __get__(self, obj, typ=None):
        if obj is None:
            return self
        else:
            return (getattr(obj, self.sec) +
                    getattr(obj, self.nsec) * 0.000000001)

    def __set__(self, obj, val):
        val = int(val * 1000000000)
        sec, nsec = divmod(val, 1000000000)
        setattr(obj, self.sec, sec)
        setattr(obj, self.nsec, nsec)

    def __delete__(self, obj):
        delattr(obj, self.sec)
        delattr(obj, self.nsec)

def _mktimeval(cls, attr1, attr2):
    assert attr1.startswith('_')
    assert attr2.startswith('_')
    tv = timeval(attr1, attr2)
    setattr(cls, attr1[1:], tv)

INVALID_INO = 0xFFFFFFFFFFFFFFFF

def mode2type(mode):
    return (mode & 0170000) >> 12

TYPE_REG = mode2type(stat.S_IFREG)
TYPE_DIR = mode2type(stat.S_IFDIR)
TYPE_LNK = mode2type(stat.S_IFLNK)

def c2pystr(s):
    n = s.find('\x00')
    assert n >= 0
    return s[:n]

def c2pystr2(s):
    first = c2pystr(s)
    second = c2pystr(s[len(first)+1:])
    return first, second

# ____________________________________________________________

# Version number of this interface
FUSE_KERNEL_VERSION = 7

# Minor version number of this interface
FUSE_KERNEL_MINOR_VERSION = 2

# The node ID of the root inode
FUSE_ROOT_ID = 1

# The major number of the fuse character device
FUSE_MAJOR = 10

# The minor number of the fuse character device
FUSE_MINOR = 229

# Make sure all structures are padded to 64bit boundary, so 32bit
# userspace works under 64bit kernels

_mkstruct('fuse_attr', '''
    __u64   ino;
    __u64   size;
    __u64   blocks;
    __u64   _atime;
    __u64   _mtime;
    __u64   _ctime;
    __u32   _atimensec;
    __u32   _mtimensec;
    __u32   _ctimensec;
    __u32   mode;
    __u32   nlink;
    __u32   uid;
    __u32   gid;
    __u32   rdev;
''')
_mktimeval(fuse_attr, '_atime', '_atimensec')
_mktimeval(fuse_attr, '_mtime', '_mtimensec')
_mktimeval(fuse_attr, '_ctime', '_ctimensec')

_mkstruct('fuse_kstatfs', '''
    __u64   blocks;
    __u64   bfree;
    __u64   bavail;
    __u64   files;
    __u64   ffree;
    __u32   bsize;
    __u32   namelen;
''')

FATTR_MODE      = 1 << 0
FATTR_UID       = 1 << 1
FATTR_GID       = 1 << 2
FATTR_SIZE      = 1 << 3
FATTR_ATIME     = 1 << 4
FATTR_MTIME     = 1 << 5

#
# Flags returned by the OPEN request
#
# FOPEN_DIRECT_IO: bypass page cache for this open file
# FOPEN_KEEP_CACHE: don't invalidate the data cache on open
#
FOPEN_DIRECT_IO   = 1 << 0
FOPEN_KEEP_CACHE  = 1 << 1

fuse_opcode = {
    'FUSE_LOOKUP'       : 1,
    'FUSE_FORGET'       : 2,   # no reply
    'FUSE_GETATTR'      : 3,
    'FUSE_SETATTR'      : 4,
    'FUSE_READLINK'     : 5,
    'FUSE_SYMLINK'      : 6,
    'FUSE_MKNOD'        : 8,
    'FUSE_MKDIR'        : 9,
    'FUSE_UNLINK'       : 10,
    'FUSE_RMDIR'        : 11,
    'FUSE_RENAME'       : 12,
    'FUSE_LINK'         : 13,
    'FUSE_OPEN'         : 14,
    'FUSE_READ'         : 15,
    'FUSE_WRITE'        : 16,
    'FUSE_STATFS'       : 17,
    'FUSE_RELEASE'      : 18,
    'FUSE_FSYNC'        : 20,
    'FUSE_SETXATTR'     : 21,
    'FUSE_GETXATTR'     : 22,
    'FUSE_LISTXATTR'    : 23,
    'FUSE_REMOVEXATTR'  : 24,
    'FUSE_FLUSH'        : 25,
    'FUSE_INIT'         : 26,
    'FUSE_OPENDIR'      : 27,
    'FUSE_READDIR'      : 28,
    'FUSE_RELEASEDIR'   : 29,
    'FUSE_FSYNCDIR'     : 30,
}

fuse_opcode2name = []
def setup():
    for key, value in fuse_opcode.items():
        fuse_opcode2name.extend([None] * (value+1 - len(fuse_opcode2name)))
        fuse_opcode2name[value] = key
setup()
del setup

# Conservative buffer size for the client
FUSE_MAX_IN = 8192

FUSE_NAME_MAX = 1024
FUSE_SYMLINK_MAX = 4096
FUSE_XATTR_SIZE_MAX = 4096

_mkstruct('fuse_entry_out', """
    __u64   nodeid;        /* Inode ID */
    __u64   generation;    /* Inode generation: nodeid:gen must \
                              be unique for the fs's lifetime */
    __u64   _entry_valid;  /* Cache timeout for the name */
    __u64   _attr_valid;   /* Cache timeout for the attributes */
    __u32   _entry_valid_nsec;
    __u32   _attr_valid_nsec;
""", base=StructWithAttr)
_mktimeval(fuse_entry_out, '_entry_valid', '_entry_valid_nsec')
_mktimeval(fuse_entry_out, '_attr_valid', '_attr_valid_nsec')

_mkstruct('fuse_forget_in', '''
    __u64   nlookup;
''')

_mkstruct('fuse_attr_out', '''
    __u64   _attr_valid;   /* Cache timeout for the attributes */
    __u32   _attr_valid_nsec;
    __u32   dummy;
''', base=StructWithAttr)
_mktimeval(fuse_attr_out, '_attr_valid', '_attr_valid_nsec')

_mkstruct('fuse_mknod_in', '''
    __u32   mode;
    __u32   rdev;
''')

_mkstruct('fuse_mkdir_in', '''
    __u32   mode;
    __u32   padding;
''')

_mkstruct('fuse_rename_in', '''
    __u64   newdir;
''')

_mkstruct('fuse_link_in', '''
    __u64   oldnodeid;
''')

_mkstruct('fuse_setattr_in', '''
    __u32   valid;
    __u32   padding;
''', base=StructWithAttr)

_mkstruct('fuse_open_in', '''
    __u32   flags;
    __u32   padding;
''')

_mkstruct('fuse_open_out', '''
    __u64   fh;
    __u32   open_flags;
    __u32   padding;
''')

_mkstruct('fuse_release_in', '''
    __u64   fh;
    __u32   flags;
    __u32   padding;
''')

_mkstruct('fuse_flush_in', '''
    __u64   fh;
    __u32   flush_flags;
    __u32   padding;
''')

_mkstruct('fuse_read_in', '''
    __u64   fh;
    __u64   offset;
    __u32   size;
    __u32   padding;
''')

_mkstruct('fuse_write_in', '''
    __u64   fh;
    __u64   offset;
    __u32   size;
    __u32   write_flags;
''')

_mkstruct('fuse_write_out', '''
    __u32   size;
    __u32   padding;
''')

fuse_statfs_out = fuse_kstatfs

_mkstruct('fuse_fsync_in', '''
    __u64   fh;
    __u32   fsync_flags;
    __u32   padding;
''')

_mkstruct('fuse_setxattr_in', '''
    __u32   size;
    __u32   flags;
''')

_mkstruct('fuse_getxattr_in', '''
    __u32   size;
    __u32   padding;
''')

_mkstruct('fuse_getxattr_out', '''
    __u32   size;
    __u32   padding;
''')

_mkstruct('fuse_init_in_out', '''
    __u32   major;
    __u32   minor;
''')

_mkstruct('fuse_in_header', '''
    __u32   len;
    __u32   opcode;
    __u64   unique;
    __u64   nodeid;
    __u32   uid;
    __u32   gid;
    __u32   pid;
    __u32   padding;
''')

_mkstruct('fuse_out_header', '''
    __u32   len;
    __s32   error;
    __u64   unique;
''')

class fuse_dirent(Struct):
    __slots__ = ['ino', 'off', 'type', 'name']

    def unpack(self, data):
        self.ino, self.off, namelen, self.type = struct.unpack('QQII',
                                                               data[:24])
        self.name = data[24:24+namelen]
        assert len(self.name) == namelen

    def pack(self):
        namelen = len(self.name)
        return pack('QQII%ds' % ((namelen+7)&~7,),
                    self.ino, getattr(self, 'off', 0), namelen,
                    self.type, self.name)

    def calcsize(cls, namelen):
        return 24 + ((namelen+7)&~7)
    calcsize = classmethod(calcsize)
@@ -1,155 +0,0 @@
from kernel import *
from handler import Handler
import stat, time, os, weakref, errno
from cStringIO import StringIO


class MemoryFS(object):
    INFINITE = 86400.0


    class Dir(object):
        type = TYPE_DIR
        def __init__(self, attr):
            self.attr = attr
            self.contents = {}    # { 'filename': Dir()/File()/SymLink() }

    class File(object):
        type = TYPE_REG
        def __init__(self, attr):
            self.attr = attr
            self.data = StringIO()

    class SymLink(object):
        type = TYPE_LNK
        def __init__(self, attr, target):
            self.attr = attr
            self.target = target


    def __init__(self, root=None):
        self.uid = os.getuid()
        self.gid = os.getgid()
        self.umask = os.umask(0); os.umask(self.umask)
        self.root = root or self.Dir(self.newattr(stat.S_IFDIR))
        self.root.id = FUSE_ROOT_ID
        self.nodes = weakref.WeakValueDictionary()
        self.nodes[FUSE_ROOT_ID] = self.root
        self.nextid = FUSE_ROOT_ID + 1

    def newattr(self, s, ino=None, mode=0666):
        now = time.time()
        attr = fuse_attr(size  = 0,
                         mode  = s | (mode & ~self.umask),
                         nlink = 1,   # even on dirs! this confuses 'find' in
                                      # a good way :-)
                         atime = now,
                         mtime = now,
                         ctime = now,
                         uid   = self.uid,
                         gid   = self.gid)
        if ino is None:
            ino = id(attr)
        if ino < 0:
            ino = ~ino
        attr.ino = ino
        return attr

    def getnode(self, id):
        return self.nodes[id]

    def modified(self, node):
        node.attr.mtime = node.attr.atime = time.time()
        if isinstance(node, self.File):
            node.data.seek(0, 2)
            node.attr.size = node.data.tell()

    def getattr(self, node):
        return node.attr, self.INFINITE

    def setattr(self, node, mode, uid, gid, size, atime, mtime):
        if mode is not None:
            node.attr.mode = (node.attr.mode & ~0777) | (mode & 0777)
        if uid is not None:
            node.attr.uid = uid
        if gid is not None:
            node.attr.gid = gid
        if size is not None:
            assert isinstance(node, self.File)
            node.data.seek(0, 2)
            oldsize = node.data.tell()
            if size < oldsize:
                node.data.seek(size)
                node.data.truncate()
                self.modified(node)
            elif size > oldsize:
                node.data.write('\x00' * (size - oldsize))
                self.modified(node)
        if atime is not None:
            node.attr.atime = atime
        if mtime is not None:
            node.attr.mtime = mtime

    def listdir(self, node):
        assert isinstance(node, self.Dir)
        for name, subobj in node.contents.items():
            yield name, subobj.type

    def lookup(self, dirnode, filename):
        try:
            return dirnode.contents[filename].id, self.INFINITE
        except KeyError:
            raise IOError(errno.ENOENT, filename)

    def open(self, filenode, flags):
        return filenode.data

    def newnodeid(self, newnode):
        id = self.nextid
        self.nextid += 1
        newnode.id = id
        self.nodes[id] = newnode
        return id

    def mknod(self, dirnode, filename, mode):
        node = self.File(self.newattr(stat.S_IFREG, mode=mode))
        dirnode.contents[filename] = node
        return self.newnodeid(node), self.INFINITE

    def mkdir(self, dirnode, subdirname, mode):
        node = self.Dir(self.newattr(stat.S_IFDIR, mode=mode))
        dirnode.contents[subdirname] = node
        return self.newnodeid(node), self.INFINITE

    def symlink(self, dirnode, linkname, target):
        node = self.SymLink(self.newattr(stat.S_IFLNK), target)
        dirnode.contents[linkname] = node
        return self.newnodeid(node), self.INFINITE

    def unlink(self, dirnode, filename):
        del dirnode.contents[filename]

    rmdir = unlink

    def readlink(self, symlinknode):
        return symlinknode.target

    def rename(self, olddirnode, oldname, newdirnode, newname):
        node = olddirnode.contents[oldname]
        newdirnode.contents[newname] = node
        del olddirnode.contents[oldname]

    def getxattrs(self, node):
        try:
            return node.xattrs
        except AttributeError:
            node.xattrs = {}
            return node.xattrs


if __name__ == '__main__':
    import sys
    mountpoint = sys.argv[1]
    memoryfs = MemoryFS()
    handler = Handler(mountpoint, memoryfs)
    handler.loop_forever()
@@ -1,191 +0,0 @@
"""
For reading and caching from slow file system (e.g. DVDs or network).

    python mirrorfs.py <sourcedir> <cachedir> <mountpoint>

Makes <mountpoint> show a read-only copy of the files in <sourcedir>,
caching all data ever read in the <cachedir> to avoid reading it
twice.  This script also features optimistic read-ahead: once a
file is accessed, and as long as no other file is accessed, the
whole file is read and cached as fast as the <sourcedir> will
provide it.

You have to clean up <cachedir> manually before mounting a modified
or different <sourcedir>.
"""
import sys, os, posixpath, stat

try:
    __file__
except NameError:
    __file__ = sys.argv[0]
this_dir = os.path.dirname(os.path.abspath(__file__))

# ____________________________________________________________

sys.path.append(os.path.dirname(this_dir))
from blockfs import valuetree
from handler import Handler
import greenhandler, greensock
from objectfs import ObjectFs

BLOCKSIZE = 65536

class MirrorFS(ObjectFs):
    rawfd = None

    def __init__(self, srcdir, cachedir):
        self.srcdir = srcdir
        self.cachedir = cachedir
        self.table = valuetree.ValueTree(os.path.join(cachedir, 'table'), 'q')
        if '' not in self.table:
            self.initial_read_dir('')
            self.table[''] = -1,
        try:
            self.rawfile = open(os.path.join(cachedir, 'raw'), 'r+b')
        except IOError:
            self.rawfile = open(os.path.join(cachedir, 'raw'), 'w+b')
        ObjectFs.__init__(self, DirNode(self, ''))
        self.readahead_at = None
        greenhandler.autogreenlet(self.readahead)

    def close(self):
        self.table.close()

    def readahead(self):
        while True:
            greensock.sleep(0.001)
            while not self.readahead_at:
                greensock.sleep(1)
            path, blocknum = self.readahead_at
            self.readahead_at = None
            try:
                self.readblock(path, blocknum, really=False)
            except EOFError:
                pass

    def initial_read_dir(self, path):
        print 'Reading initial directory structure...', path
        dirname = os.path.join(self.srcdir, path)
        for name in os.listdir(dirname):
            filename = os.path.join(dirname, name)
            st = os.stat(filename)
            if stat.S_ISDIR(st.st_mode):
                self.initial_read_dir(posixpath.join(path, name))
                q = -1
            else:
                q = st.st_size
            self.table[posixpath.join(path, name)] = q,

    def __getitem__(self, key):
        self.tablelock.acquire()
        try:
            return self.table[key]
        finally:
            self.tablelock.release()

    def readblock(self, path, blocknum, really=True):
        s = '%s/%d' % (path, blocknum)
        try:
            q, = self.table[s]
        except KeyError:
            print s
            self.readahead_at = None
            f = open(os.path.join(self.srcdir, path), 'rb')
            f.seek(blocknum * BLOCKSIZE)
            data = f.read(BLOCKSIZE)
            f.close()
            if not data:
                q = -2
            else:
                data += '\x00' * (BLOCKSIZE - len(data))
                self.rawfile.seek(0, 2)
                q = self.rawfile.tell()
                self.rawfile.write(data)
            self.table[s] = q,
            if q == -2:
                raise EOFError
        else:
            if q == -2:
                raise EOFError
            if really:
                self.rawfile.seek(q, 0)
                data = self.rawfile.read(BLOCKSIZE)
            else:
                data = None
        if self.readahead_at is None:
            self.readahead_at = path, blocknum + 1
        return data


class Node(object):

    def __init__(self, mfs, path):
        self.mfs = mfs
        self.path = path

class DirNode(Node):

    def join(self, name):
        path = posixpath.join(self.path, name)
        q, = self.mfs.table[path]
        if q == -1:
            return DirNode(self.mfs, path)
        else:
            return FileNode(self.mfs, path)

    def listdir(self):
        result = []
        for key, value in self.mfs.table.iteritemsfrom(self.path):
            if not key.startswith(self.path):
                break
            tail = key[len(self.path):].lstrip('/')
            if tail and '/' not in tail:
                result.append(tail)
        return result

class FileNode(Node):

    def size(self):
        q, = self.mfs.table[self.path]
        return q

    def read(self):
        return FileStream(self.mfs, self.path)

class FileStream(object):

    def __init__(self, mfs, path):
        self.mfs = mfs
        self.path = path
        self.pos = 0
        self.size, = self.mfs.table[path]

    def seek(self, p):
        self.pos = p

    def read(self, count):
        result = []
        end = min(self.pos + count, self.size)
        while self.pos < end:
            blocknum, offset = divmod(self.pos, BLOCKSIZE)
            data = self.mfs.readblock(self.path, blocknum)
            data = data[offset:]
            data = data[:end - self.pos]
            assert len(data) > 0
            result.append(data)
            self.pos += len(data)
        return ''.join(result)

# ____________________________________________________________

if __name__ == '__main__':
    import sys
    srcdir, cachedir, mountpoint = sys.argv[1:]
    mirrorfs = MirrorFS(srcdir, cachedir)
    try:
        handler = Handler(mountpoint, mirrorfs)
        greenhandler.add_handler(handler)
        greenhandler.mainloop()
    finally:
        mirrorfs.close()
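The removed mirrorfs module above keys cached blocks by `"path/blocknum"`, appends each fetched block to a growing raw file, and records the block's offset in a table (with a sentinel marking EOF). The same scheme can be sketched in a few lines of self-contained modern Python; `BlockCache`, `read_source`, and the tiny `BLOCKSIZE` are illustrative names and values, not part of the original module:

```python
# In-memory sketch of the mirrorfs block cache: blocks are keyed by
# "path/blocknum"; payloads are appended to a raw byte store and the
# table remembers each block's offset (-2 marks end-of-file).
BLOCKSIZE = 4

class BlockCache:
    def __init__(self, read_source):
        self.read_source = read_source   # callable(path, blocknum) -> bytes
        self.table = {}                  # "path/blocknum" -> offset or -2
        self.raw = bytearray()           # append-only raw block store

    def readblock(self, path, blocknum):
        key = '%s/%d' % (path, blocknum)
        if key not in self.table:
            data = self.read_source(path, blocknum)
            if not data:
                self.table[key] = -2     # remember that this is past EOF
            else:
                # pad the final partial block with zeros, like the original
                data = data.ljust(BLOCKSIZE, b'\x00')
                self.table[key] = len(self.raw)
                self.raw.extend(data)
        q = self.table[key]
        if q == -2:
            raise EOFError
        return bytes(self.raw[q:q + BLOCKSIZE])

source = {'f': b'abcdefg'}
def read_source(path, blocknum):
    return source[path][blocknum * BLOCKSIZE:(blocknum + 1) * BLOCKSIZE]

cache = BlockCache(read_source)
print(cache.readblock('f', 1))   # b'efg\x00' (zero-padded final block)
```

Subsequent calls for the same block are served from `raw` without touching the source, which is the whole point of the original module's cache directory.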
@@ -1,174 +0,0 @@
from kernel import *
import stat, errno, os, time
from cStringIO import StringIO
from OrderedDict import OrderedDict


class ObjectFs:
    """A simple read-only file system based on Python objects.

    Interface of Directory objects:
      * join(name)  returns a file or subdirectory object
      * listdir()   returns a list of names, or a list of (name, object)

    join() is optional if listdir() returns a list of (name, object).
    Alternatively, Directory objects can be plain dictionaries {name: object}.

    Interface of File objects:
      * size()      returns the size
      * read()      returns the data

    Alternatively, File objects can be plain strings.

    Interface of SymLink objects:
      * readlink()  returns the symlink's target, as a string
    """

    INFINITE = 86400.0
    USE_DIR_CACHE = True

    def __init__(self, rootnode):
        self.nodes = {FUSE_ROOT_ID: rootnode}
        if self.USE_DIR_CACHE:
            self.dircache = {}
        self.starttime = time.time()
        self.uid = os.getuid()
        self.gid = os.getgid()
        self.umask = os.umask(0); os.umask(self.umask)

    def newattr(self, s, ino, mode=0666):
        if ino < 0:
            ino = ~ino
        return fuse_attr(ino = ino,
                         size = 0,
                         mode = s | (mode & ~self.umask),
                         nlink = 1,  # even on dirs! this confuses 'find' in
                                     # a good way :-)
                         atime = self.starttime,
                         mtime = self.starttime,
                         ctime = self.starttime,
                         uid = self.uid,
                         gid = self.gid)

    def getnode(self, nodeid):
        try:
            return self.nodes[nodeid]
        except KeyError:
            raise IOError(errno.ESTALE, nodeid)

    def getattr(self, node):
        timeout = self.INFINITE
        if isinstance(node, str):
            attr = self.newattr(stat.S_IFREG, id(node))
            attr.size = len(node)
        elif hasattr(node, 'readlink'):
            target = node.readlink()
            attr = self.newattr(stat.S_IFLNK, id(node))
            attr.size = len(target)
            attr.mode |= 0777
        elif hasattr(node, 'size'):
            sz = node.size()
            attr = self.newattr(stat.S_IFREG, id(node))
            if sz is None:
                timeout = 0
            else:
                attr.size = sz
        else:
            attr = self.newattr(stat.S_IFDIR, id(node), mode=0777)
        #print 'getattr(%s) -> %s, %s' % (node, attr, timeout)
        return attr, timeout

    def getentries(self, node):
        if isinstance(node, dict):
            return node
        try:
            if not self.USE_DIR_CACHE:
                raise KeyError
            return self.dircache[node]
        except KeyError:
            entries = OrderedDict()
            if hasattr(node, 'listdir'):
                for name in node.listdir():
                    if isinstance(name, tuple):
                        name, subnode = name
                    else:
                        subnode = None
                    entries[name] = subnode
            if self.USE_DIR_CACHE:
                self.dircache[node] = entries
            return entries

    def listdir(self, node):
        entries = self.getentries(node)
        for name, subnode in entries.items():
            if subnode is None:
                subnode = node.join(name)
                self.nodes[uid(subnode)] = subnode
                entries[name] = subnode
            if isinstance(subnode, str):
                yield name, TYPE_REG
            elif hasattr(subnode, 'readlink'):
                yield name, TYPE_LNK
            elif hasattr(subnode, 'size'):
                yield name, TYPE_REG
            else:
                yield name, TYPE_DIR

    def lookup(self, node, name):
        entries = self.getentries(node)
        try:
            subnode = entries.get(name)
            if subnode is None:
                if hasattr(node, 'join'):
                    subnode = node.join(name)
                    entries[name] = subnode
                else:
                    raise KeyError
        except KeyError:
            raise IOError(errno.ENOENT, name)
        else:
            return self.reply(subnode)

    def reply(self, node):
        res = uid(node)
        self.nodes[res] = node
        return res, self.INFINITE

    def open(self, node, mode):
        if not isinstance(node, str):
            node = node.read()
        if not hasattr(node, 'read'):
            node = StringIO(node)
        return node

    def readlink(self, node):
        return node.readlink()

    def getxattrs(self, node):
        return getattr(node, '__dict__', {})

# ____________________________________________________________

import struct
try:
    HUGEVAL = 256 ** struct.calcsize('P')
except struct.error:
    HUGEVAL = 0

def fixid(result):
    if result < 0:
        result += HUGEVAL
    return result

def uid(obj):
    """
    Return the id of an object as an unsigned number so that its hex
    representation makes sense
    """
    return fixid(id(obj))

class SymLink(object):
    def __init__(self, target):
        self.target = target
    def readlink(self):
        return self.target
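The removed objectfs module above classifies a node by duck typing: plain strings are regular files, objects with `readlink()` are symlinks, objects with `size()` are files, and everything else (including plain dicts) is a directory. A minimal standalone sketch of that dispatch, using a dict tree as the docstring allows (`classify` and the sample tree are illustrative, not from the original):

```python
# Classify nodes the way ObjectFs.listdir does, for a plain-dict tree:
# str -> regular file, readlink() -> symlink, size() -> file, else directory.
def classify(node):
    if isinstance(node, str):
        return 'REG'
    if hasattr(node, 'readlink'):
        return 'LNK'
    if hasattr(node, 'size'):
        return 'REG'
    return 'DIR'

root = {'hello.txt': 'world', 'sub': {'a.txt': 'aaa'}}
listing = sorted((name, classify(obj)) for name, obj in root.items())
print(listing)   # [('hello.txt', 'REG'), ('sub', 'DIR')]
```

This ordering of checks matters: a symlink object may also expose `size()`-like attributes, so `readlink` is tested first, mirroring the original `getattr`/`listdir` logic.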
@@ -1,63 +0,0 @@
"""
Two magic tricks for classes:

    class X:
        __metaclass__ = extendabletype
        ...

    # in some other file...
    class __extend__(X):
        ...      # and here you can add new methods and class attributes to X

Mostly useful together with the second trick, which lets you build
methods whose 'self' is a pair of objects instead of just one:

    class __extend__(pairtype(X, Y)):
        attribute = 42
        def method((x, y), other, arguments):
            ...

    pair(x, y).attribute
    pair(x, y).method(other, arguments)

This finds methods and class attributes based on the actual
class of both objects that go into the pair(), with the usual
rules of method/attribute overriding in (pairs of) subclasses.

For more information, see test_pairtype.
"""

class extendabletype(type):
    """A type with a syntax trick: 'class __extend__(t)' actually extends
    the definition of 't' instead of creating a new subclass."""
    def __new__(cls, name, bases, dict):
        if name == '__extend__':
            for cls in bases:
                for key, value in dict.items():
                    if key == '__module__':
                        continue
                    # XXX do we need to provide something more for pickling?
                    setattr(cls, key, value)
            return None
        else:
            return super(extendabletype, cls).__new__(cls, name, bases, dict)


def pair(a, b):
    """Return a pair object."""
    tp = pairtype(a.__class__, b.__class__)
    return tp((a, b))   # tp is a subclass of tuple

pairtypecache = {}

def pairtype(cls1, cls2):
    """type(pair(a,b)) is pairtype(a.__class__, b.__class__)."""
    try:
        pair = pairtypecache[cls1, cls2]
    except KeyError:
        name = 'pairtype(%s, %s)' % (cls1.__name__, cls2.__name__)
        bases1 = [pairtype(base1, cls2) for base1 in cls1.__bases__]
        bases2 = [pairtype(cls1, base2) for base2 in cls2.__bases__]
        bases = tuple(bases1 + bases2) or (tuple,)   # 'tuple': ultimate base
        pair = pairtypecache[cls1, cls2] = extendabletype(name, bases, {})
    return pair
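The pairtype module above (borrowed from PyPy) implements double dispatch: the type of `pair(a, b)` is synthesized from the classes of *both* operands, so normal MRO lookup resolves methods over pairs of classes. A Python 3 re-sketch of the core idea, runnable standalone (the `X`/`Y` classes, `combine` method, and `_pairtypecache` name are illustrative):

```python
# type(pair(a, b)) is pairtype(a.__class__, b.__class__); methods set on
# pairtype(X, X) are inherited by pairtype(Y, X) when Y subclasses X.
_pairtypecache = {}

def pairtype(cls1, cls2):
    try:
        return _pairtypecache[cls1, cls2]
    except KeyError:
        name = 'pairtype(%s, %s)' % (cls1.__name__, cls2.__name__)
        bases1 = [pairtype(b1, cls2) for b1 in cls1.__bases__]
        bases2 = [pairtype(cls1, b2) for b2 in cls2.__bases__]
        bases = tuple(bases1 + bases2) or (tuple,)   # 'tuple': ultimate base
        tp = _pairtypecache[cls1, cls2] = type(name, bases, {})
        return tp

def pair(a, b):
    return pairtype(a.__class__, b.__class__)((a, b))

class X: pass
class Y(X): pass

# attach a method to the pair of classes (X, X); pairtype(Y, X) inherits it
def combine(self, n):
    x, y = self                       # self is the (a, b) tuple
    return (type(x).__name__, type(y).__name__, n)
pairtype(X, X).combine = combine

print(pair(Y(), X()).combine(3))      # found via pairtype(Y, X)'s MRO
```

The cache makes `pairtype` idempotent, so a method attached once is visible to every pair built later from the same (or derived) classes.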
@@ -1,92 +0,0 @@
from kernel import *
import errno, posixpath, os


class PathFs(object):
    """Base class for a read-write FUSE file system interface
    whose underlying content is best accessed with '/'-separated
    string paths.
    """
    uid = os.getuid()
    gid = os.getgid()
    umask = os.umask(0); os.umask(umask)
    timeout = 86400.0

    def __init__(self, root=''):
        self._paths = {FUSE_ROOT_ID: root}
        self._path2id = {root: FUSE_ROOT_ID}
        self._nextid = FUSE_ROOT_ID + 1

    def getnode(self, nodeid):
        try:
            return self._paths[nodeid]
        except KeyError:
            raise IOError(errno.ESTALE, nodeid)

    def forget(self, nodeid):
        try:
            p = self._paths.pop(nodeid)
            del self._path2id[p]
        except KeyError:
            pass

    def cachepath(self, path):
        if path in self._path2id:
            return self._path2id[path]
        id = self._nextid
        self._nextid += 1
        self._paths[id] = path
        self._path2id[path] = id
        return id

    def mkattr(self, path, size, st_kind, mode, time):
        attr = fuse_attr(ino = self._path2id[path],
                         size = size,
                         mode = st_kind | (mode & ~self.umask),
                         nlink = 1,  # even on dirs! this confuses 'find' in
                                     # a good way :-)
                         atime = time,
                         mtime = time,
                         ctime = time,
                         uid = self.uid,
                         gid = self.gid)
        return attr, self.timeout

    def lookup(self, path, name):
        npath = posixpath.join(path, name)
        if not self.check_path(npath):
            raise IOError(errno.ENOENT, name)
        return self.cachepath(npath), self.timeout

    def mknod(self, path, name, mode):
        npath = posixpath.join(path, name)
        self.mknod_path(npath, mode)
        return self.cachepath(npath), self.timeout

    def mkdir(self, path, name, mode):
        npath = posixpath.join(path, name)
        self.mkdir_path(npath, mode)
        return self.cachepath(npath), self.timeout

    def unlink(self, path, name):
        npath = posixpath.join(path, name)
        self.unlink_path(npath)

    def rmdir(self, path, name):
        npath = posixpath.join(path, name)
        self.rmdir_path(npath)

    def rename(self, oldpath, oldname, newpath, newname):
        noldpath = posixpath.join(oldpath, oldname)
        nnewpath = posixpath.join(newpath, newname)
        if not self.rename_path(noldpath, nnewpath):
            raise IOError(errno.ENOENT, oldname)
        # fix all paths in the cache
        N = len(noldpath)
        for id, path in self._paths.items():
            if path.startswith(noldpath):
                if len(path) == N or path[N] == '/':
                    del self._path2id[path]
                    path = nnewpath + path[N:]
                    self._paths[id] = path
                    self._path2id[path] = id
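The subtle part of `PathFs.rename` above is the cache fix-up: after renaming, every cached path under the old prefix must be rewritten in place so node ids stay stable, while paths that merely share a string prefix (e.g. `ab` when renaming `a`) are left alone. That logic in isolation, as a runnable sketch (`fix_paths` and the sample dicts are illustrative names):

```python
# Rewrite cached paths under oldpath after a rename; node ids are preserved.
def fix_paths(paths, path2id, oldpath, newpath):
    n = len(oldpath)
    for nodeid, path in list(paths.items()):
        # match the exact path or a true descendant, not a mere string prefix
        if path.startswith(oldpath) and (len(path) == n or path[n] == '/'):
            del path2id[path]
            path = newpath + path[n:]
            paths[nodeid] = path
            path2id[path] = nodeid

paths = {1: 'a', 2: 'a/x', 3: 'ab'}
path2id = {p: i for i, p in paths.items()}
fix_paths(paths, path2id, 'a', 'b')
print(paths)   # {1: 'b', 2: 'b/x', 3: 'ab'}  -- 'ab' untouched
```

The `path[n] == '/'` guard is what keeps `ab` from being corrupted into `bb`, the same check the original performs with `len(path) == N or path[N] == '/'`.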
@@ -1,181 +0,0 @@
from kernel import *
import errno, posixpath, weakref
from time import time as now
from stat import S_IFDIR, S_IFREG, S_IFMT
from cStringIO import StringIO
from handler import Handler
from pathfs import PathFs
from pysvn.ra_filesystem import SvnRepositoryFilesystem
import pysvn.date


class SvnFS(PathFs):

    def __init__(self, svnurl, root=''):
        super(SvnFS, self).__init__(root)
        self.svnurl = svnurl
        self.openfiles = weakref.WeakValueDictionary()
        self.creationtimes = {}
        self.do_open()

    def do_open(self, rev='HEAD'):
        self.fs = SvnRepositoryFilesystem(svnurl, rev)

    def do_commit(self, msg):
        rev = self.fs.commit(msg)
        if rev is None:
            print '* no changes.'
        else:
            print '* checked in revision %d.' % (rev,)
        self.do_open()

    def do_status(self, path=''):
        print '* status'
        result = []
        if path and not path.endswith('/'):
            path += '/'
        for delta in self.fs._compute_deltas():
            if delta.path.startswith(path):
                if delta.oldrev is None:
                    c = 'A'
                elif delta.newrev is None:
                    c = 'D'
                else:
                    c = 'M'
                result.append('  %s  %s\n' % (c, delta.path[len(path):]))
        return ''.join(result)

    def getattr(self, path):
        stat = self.fs.stat(path)
        if stat['svn:entry:kind'] == 'dir':
            s = S_IFDIR
            mode = 0777
        else:
            s = S_IFREG
            mode = 0666
        try:
            time = pysvn.date.decode(stat['svn:entry:committed-date'])
        except KeyError:
            try:
                time = self.creationtimes[path]
            except KeyError:
                time = self.creationtimes[path] = now()
        return self.mkattr(path,
                           size    = stat.get('svn:entry:size', 0),
                           st_kind = s,
                           mode    = mode,
                           time    = time)

    def setattr(self, path, mode, uid, gid, size, atime, mtime):
        if size is not None:
            data = self.fs.read(path)
            if size < len(data):
                self.fs.write(path, data[:size])
            elif size > len(data):
                self.fs.write(path, data + '\x00' * (size - len(data)))

    def listdir(self, path):
        for name in self.fs.listdir(path):
            kind = self.fs.check_path(posixpath.join(path, name))
            if kind == 'dir':
                yield name, TYPE_DIR
            else:
                yield name, TYPE_REG

    def check_path(self, path):
        kind = self.fs.check_path(path)
        return kind is not None

    def open(self, path, mode):
        try:
            of = self.openfiles[path]
        except KeyError:
            of = self.openfiles[path] = OpenFile(self.fs.read(path))
        return of, FOPEN_KEEP_CACHE

    def modified(self, path):
        try:
            of = self.openfiles[path]
        except KeyError:
            pass
        else:
            self.fs.write(path, of.f.getvalue())

    def mknod_path(self, path, mode):
        self.fs.add(path)

    def mkdir_path(self, path, mode):
        self.fs.mkdir(path)

    def unlink_path(self, path):
        self.fs.unlink(path)

    def rmdir_path(self, path):
        self.fs.rmdir(path)

    def rename_path(self, oldpath, newpath):
        kind = self.fs.check_path(oldpath)
        if kind is None:
            return False
        self.fs.move(oldpath, newpath, kind)
        return True

    def getxattrs(self, path):
        return XAttrs(self, path)


class OpenFile:
    def __init__(self, data=''):
        self.f = StringIO()
        self.f.write(data)
        self.f.seek(0)

    def seek(self, pos):
        self.f.seek(pos)

    def read(self, sz):
        return self.f.read(sz)

    def write(self, buf):
        self.f.write(buf)


class XAttrs:
    def __init__(self, svnfs, path):
        self.svnfs = svnfs
        self.path = path

    def keys(self):
        return []

    def __getitem__(self, key):
        if key == 'status':
            return self.svnfs.do_status(self.path)
        raise KeyError(key)

    def __setitem__(self, key, value):
        if key == 'commit' and self.path == '':
            self.svnfs.do_commit(value)
        elif key == 'update' and self.path == '':
            if self.svnfs.fs.modified():
                raise IOError(errno.EPERM, "there are local changes")
            if value == '':
                rev = 'HEAD'
            else:
                try:
                    rev = int(value)
                except ValueError:
                    raise IOError(errno.EPERM, "invalid revision number")
            self.svnfs.do_open(rev)
        else:
            raise KeyError(key)

    def __delitem__(self, key):
        raise KeyError(key)


if __name__ == '__main__':
    import sys
    svnurl, mountpoint = sys.argv[1:]
    handler = Handler(mountpoint, SvnFS(svnurl))
    handler.loop_forever()
@@ -1,142 +0,0 @@
"""
A read-only svn fs showing all the revisions in subdirectories.
"""
from objectfs import ObjectFs, SymLink
from handler import Handler
from pysvn.ra import connect
from pysvn.date import decode
import errno, posixpath, time


#USE_SYMLINKS = 0      # they are wrong if the original file had another path

# use  getfattr -d filename  to see the node's attributes, which include
# information like the revision at which the file was last modified


class Root:
    def __init__(self, svnurl):
        self.svnurl = svnurl
        self.ra = connect(svnurl)
        self.head = self.ra.get_latest_rev()

    def listdir(self):
        for rev in range(1, self.head+1):
            yield str(rev)
        yield 'HEAD'

    def join(self, name):
        try:
            rev = int(name)
        except ValueError:
            if name == 'HEAD':
                return SymLink(str(self.head))
            else:
                raise KeyError(name)
        return TopLevelDir(self.ra, rev, rev, '')


class Node:
    def __init__(self, ra, rev, last_changed_rev, path):
        self.ra = ra
        self.rev = rev
        self.last_changed_rev = last_changed_rev
        self.path = path

    def __repr__(self):
        return '<%s %d/%s>' % (self.__class__.__name__, self.rev, self.path)

class Dir(Node):
    def listdir(self):
        rev, props, entries = self.ra.get_dir(self.path, self.rev,
                                              want_props = False)
        for key, stats in entries.items():
            yield key, getnode(self.ra, self.rev,
                               posixpath.join(self.path, key), stats)

class File(Node):
    def __init__(self, ra, rev, last_changed_rev, path, size):
        Node.__init__(self, ra, rev, last_changed_rev, path)
        self.filesize = size

    def size(self):
        return self.filesize

    def read(self):
        checksum, rev, props, data = self.ra.get_file(self.path, self.rev,
                                                      want_props = False)
        return data


class TopLevelDir(Dir):
    def listdir(self):
        for item in Dir.listdir(self):
            yield item
        yield 'svn:log', Log(self.ra, self.rev)

class Log:

    def __init__(self, ra, rev):
        self.ra = ra
        self.rev = rev

    def getlogentry(self):
        try:
            return self.logentry
        except AttributeError:
            logentries = self.ra.log('', startrev=self.rev, endrev=self.rev)
            try:
                [self.logentry] = logentries
            except ValueError:
                self.logentry = None
            return self.logentry

    def size(self):
        return len(self.read())

    def read(self):
        logentry = self.getlogentry()
        if logentry is None:
            return 'r%d | (no change here)\n' % (self.rev,)
        datetuple = time.gmtime(decode(logentry.date))
        date = time.strftime("%c", datetuple)
        return 'r%d | %s | %s\n\n%s' % (self.rev,
                                        logentry.author,
                                        date,
                                        logentry.message)


if 0:
    pass
##if USE_SYMLINKS:
##    def getnode(ra, rev, path, stats):
##        committed_rev = stats['svn:entry:committed-rev']
##        if committed_rev == rev:
##            kind = stats['svn:entry:kind']
##            if kind == 'file':
##                return File(ra, rev, path, stats['svn:entry:size'])
##            elif kind == 'dir':
##                return Dir(ra, rev, path)
##            else:
##                raise IOError(errno.EINVAL, "kind %r" % (kind,))
##        else:
##            depth = path.count('/')
##            return SymLink('../' * depth + '../%d/%s' % (committed_rev, path))
else:
    def getnode(ra, rev, path, stats):
        last_changed_rev = stats['svn:entry:committed-rev']
        kind = stats['svn:entry:kind']
        if kind == 'file':
            return File(ra, rev, last_changed_rev, path,
                        stats['svn:entry:size'])
        elif kind == 'dir':
            return Dir(ra, rev, last_changed_rev, path)
        else:
            raise IOError(errno.EINVAL, "kind %r" % (kind,))


if __name__ == '__main__':
    import sys
    svnurl, mountpoint = sys.argv[1:]
    handler = Handler(mountpoint, ObjectFs(Root(svnurl)))
    handler.loop_forever()
@@ -1,305 +0,0 @@
from kernel import *
import stat, errno, os, time
from cStringIO import StringIO
from OrderedDict import OrderedDict

INFINITE = 86400.0


class Wrapper(object):
    def __init__(self, obj):
        self.obj = obj

    def getuid(self):
        return uid(self.obj)

    def __hash__(self):
        return hash(self.obj)

    def __eq__(self, other):
        return self.obj == other

    def __ne__(self, other):
        return self.obj != other


class BaseDir(object):

    def join(self, name):
        "Return a file or subdirectory object"
        for item in self.listdir():
            if isinstance(item, tuple):
                subname, subnode = item
                if subname == name:
                    return subnode
        raise KeyError(name)

    def listdir(self):
        "Return a list of names, or a list of (name, object)"
        raise NotImplementedError

    def create(self, name):
        "Create a file"
        raise NotImplementedError

    def mkdir(self, name):
        "Create a subdirectory"
        raise NotImplementedError

    def symlink(self, name, target):
        "Create a symbolic link"
        raise NotImplementedError

    def unlink(self, name):
        "Remove a file or subdirectory."
        raise NotImplementedError

    def rename(self, newname, olddirnode, oldname):
        "Move another node into this directory."
        raise NotImplementedError

    def getuid(self):
        return uid(self)

    def getattr(self, fs):
        return fs.newattr(stat.S_IFDIR, self.getuid(), mode=0777), INFINITE

    def setattr(self, **kwds):
        pass

    def getentries(self):
        entries = OrderedDict()
        for name in self.listdir():
            if isinstance(name, tuple):
                name, subnode = name
            else:
                subnode = None
            entries[name] = subnode
        return entries


class BaseFile(object):

    def size(self):
        "Return the size of the file, or None if not known yet"
        f = self.open()
        if isinstance(f, str):
            return len(f)
        f.seek(0, 2)
        return f.tell()

    def open(self):
        "Return the content as a string or a file-like object"
        raise NotImplementedError

    def getuid(self):
        return uid(self)

    def getattr(self, fs):
        sz = self.size()
        attr = fs.newattr(stat.S_IFREG, self.getuid())
        if sz is None:
            timeout = 0
        else:
            attr.size = sz
            timeout = INFINITE
        return attr, timeout

    def setattr(self, size, **kwds):
        f = self.open()
        if self.size() == size:
            return
        if isinstance(f, str):
            raise IOError(errno.EPERM)
        f.seek(size)
        f.truncate()


class BaseSymLink(object):

    def readlink(self):
        "Return the symlink's target, as a string"
        raise NotImplementedError

    def getuid(self):
        return uid(self)

    def getattr(self, fs):
        target = self.readlink()
        attr = fs.newattr(stat.S_IFLNK, self.getuid())
        attr.size = len(target)
        attr.mode |= 0777
        return attr, INFINITE

    def setattr(self, **kwds):
        pass

# ____________________________________________________________

class Dir(BaseDir):
    def __init__(self, **contents):
        self.contents = contents
    def listdir(self):
        return self.contents.items()
    def join(self, name):
        return self.contents[name]
    def create(self, fs, name):
        node = fs.File()
        self.contents[name] = node
        return node
    def mkdir(self, fs, name):
        node = fs.Dir()
        self.contents[name] = node
        return node
    def symlink(self, fs, name, target):
        node = fs.SymLink(target)
        self.contents[name] = node
        return node
    def unlink(self, name):
        del self.contents[name]
    def rename(self, newname, olddirnode, oldname):
        oldnode = olddirnode.join(oldname)
        olddirnode.unlink(oldname)
        self.contents[newname] = oldnode

class File(BaseFile):
    def __init__(self):
        self.data = StringIO()
    def size(self):
        self.data.seek(0, 2)
        return self.data.tell()
    def open(self):
        return self.data

class SymLink(BaseFile):
    def __init__(self, target):
        self.target = target
    def readlink(self):
        return self.target

# ____________________________________________________________


class RWObjectFs(object):
    """A simple read-write file system based on Python objects."""

    UID = os.getuid()
    GID = os.getgid()
    UMASK = os.umask(0); os.umask(UMASK)

    Dir = Dir
    File = File
    SymLink = SymLink

    def __init__(self, rootnode):
        self.nodes = {FUSE_ROOT_ID: rootnode}
        self.starttime = time.time()

    def newattr(self, s, ino, mode=0666):
        return fuse_attr(ino = ino,
                         size = 0,
                         mode = s | (mode & ~self.UMASK),
                         nlink = 1,  # even on dirs! this confuses 'find' in
                                     # a good way :-)
                         atime = self.starttime,
                         mtime = self.starttime,
                         ctime = self.starttime,
                         uid = self.UID,
                         gid = self.GID)

    def getnode(self, nodeid):
        try:
            return self.nodes[nodeid]
        except KeyError:
            raise IOError(errno.ESTALE, nodeid)

    def getattr(self, node):
        return node.getattr(self)

    def setattr(self, node, mode, uid, gid, size, atime, mtime):
        node.setattr(mode=mode, uid=uid, gid=gid, size=size,
                     atime=atime, mtime=mtime)

    def listdir(self, node):
        entries = node.getentries()
        for name, subnode in entries.items():
            if subnode is None:
                subnode = node.join(name)
                self.nodes[uid(subnode)] = subnode
                entries[name] = subnode
            if isinstance(subnode, str):
                yield name, TYPE_REG
            elif hasattr(subnode, 'readlink'):
                yield name, TYPE_LNK
            elif hasattr(subnode, 'size'):
                yield name, TYPE_REG
            else:
                yield name, TYPE_DIR

    def lookup(self, node, name):
        try:
            subnode = node.join(name)
        except KeyError:
            raise IOError(errno.ENOENT, name)
        else:
            res = uid(subnode)
            self.nodes[res] = subnode
            return res, INFINITE

    def mknod(self, dirnode, filename, mode):
        node = dirnode.create(filename)
        return self.newnodeid(node), INFINITE

    def mkdir(self, dirnode, subdirname, mode):
        node = dirnode.mkdir(subdirname)
        return self.newnodeid(node), INFINITE

    def symlink(self, dirnode, linkname, target):
        node = dirnode.symlink(linkname, target)
        return self.newnodeid(node), INFINITE

    def unlink(self, dirnode, filename):
        try:
            dirnode.unlink(filename)
        except KeyError:
|
||||
raise IOError(errno.ENOENT, filename)
|
||||
|
||||
rmdir = unlink
|
||||
|
||||
def open(self, node, mode):
|
||||
f = node.open()
|
||||
if isinstance(f, str):
|
||||
f = StringIO(f)
|
||||
return f
|
||||
|
||||
def readlink(self, node):
|
||||
return node.readlink()
|
||||
|
||||
def rename(self, olddirnode, oldname, newdirnode, newname):
|
||||
try:
|
||||
newdirnode.rename(newname, olddirnode, oldname)
|
||||
except KeyError:
|
||||
raise IOError(errno.ENOENT, oldname)
|
||||
|
||||
def getxattrs(self, node):
|
||||
return getattr(node, '__dict__', {})
|
||||
|
||||
# ____________________________________________________________
|
||||
|
||||
import struct
|
||||
try:
|
||||
HUGEVAL = 256 ** struct.calcsize('P')
|
||||
except struct.error:
|
||||
HUGEVAL = 0
|
||||
|
||||
def fixid(result):
|
||||
if result < 0:
|
||||
result += HUGEVAL
|
||||
return result
|
||||
|
||||
def uid(obj):
|
||||
"""
|
||||
Return the id of an object as an unsigned number so that its hex
|
||||
representation makes sense
|
||||
"""
|
||||
return fixid(id(obj))
|
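The `fixid`/`uid` pair at the end of the file above compensates for CPython 2's `id()` returning negative values on some platforms, wrapping them into the unsigned pointer range so hex node ids read sensibly. A standalone Python 3 sketch of that normalization (on Python 3, `id()` is already non-negative, so `fixid` is a no-op in practice; `HUGEVAL` mirrors the computation in the removed file):

```python
import struct

# 2**(pointer width in bits), mirroring the removed file's HUGEVAL.
HUGEVAL = 256 ** struct.calcsize('P')

def fixid(result):
    # Wrap a possibly-negative id() into the unsigned pointer range.
    if result < 0:
        result += HUGEVAL
    return result

def uid(obj):
    """Return id(obj) as an unsigned number so its hex form makes sense."""
    return fixid(id(obj))
```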
@@ -1,42 +0,0 @@
import py
from handler import Handler
from objectfs import ObjectFs


class SvnDir:
    def __init__(self, path):
        self.path = path

    def listdir(self):
        for p in self.path.listdir():
            if p.check(dir=1):
                cls = SvnDir
            else:
                cls = SvnFile
            yield p.basename, cls(p)


class SvnFile:
    data = None

    def __init__(self, path):
        self.path = path

    def size(self):
        if self.data is None:
            return None
        else:
            return len(self.data)

    def read(self):
        if self.data is None:
            self.data = self.path.read()
        return self.data


if __name__ == '__main__':
    import sys
    svnurl, mountpoint = sys.argv[1:]
    root = SvnDir(py.path.svnurl(svnurl))
    handler = Handler(mountpoint, ObjectFs(root))
    handler.loop_forever()
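`SvnFile` above defers the expensive network fetch until the first `read()` and then caches the body, which is why `size()` honestly answers `None` before anything has been read. A minimal standalone sketch of that lazy-read pattern, with a caller-supplied `fetch` callable standing in for `py.path.svnurl`'s `read()` (the names here are illustrative, not part of the removed code):

```python
class LazyFile:
    """Defer an expensive fetch until first read(), then cache the body."""
    def __init__(self, fetch):
        self._fetch = fetch   # callable returning the file body (stand-in)
        self._data = None

    def size(self):
        # Unknown until fetched, like SvnFile.size() returning None.
        return None if self._data is None else len(self._data)

    def read(self):
        if self._data is None:
            self._data = self._fetch()
        return self._data
```

Repeated reads reuse the cached body, so the underlying fetch runs at most once per file object.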
@@ -1,115 +0,0 @@
"""
PyFuse client for the Tahoe distributed file system.
See http://allmydata.org/
"""

# Read-only for now.

# Portions copied from the file contrib/fuse/tahoe_fuse.py distributed
# with Tahoe 1.0.0.

import os, sys
from objectfs import ObjectFs
from handler import Handler
import simplejson
import urllib


### Config:
TahoeConfigDir = '~/.tahoe'


### Utilities for debug:
def log(msg, *args):
    print msg % args


class TahoeConnection:
    def __init__(self, confdir):
        self.confdir = confdir
        self._init_url()

    def _init_url(self):
        if os.path.exists(os.path.join(self.confdir, 'node.url')):
            self.url = file(os.path.join(self.confdir, 'node.url'), 'rb').read().strip()
            if not self.url.endswith('/'):
                self.url += '/'
        else:
            f = open(os.path.join(self.confdir, 'webport'), 'r')
            contents = f.read()
            f.close()
            fields = contents.split(':')
            proto, port = fields[:2]
            assert proto == 'tcp'
            port = int(port)
            self.url = 'http://localhost:%d/' % (port,)

    def get_root(self):
        # For now we just use the same default as the CLI:
        rootdirfn = os.path.join(self.confdir, 'private', 'root_dir.cap')
        f = open(rootdirfn, 'r')
        cap = f.read().strip()
        f.close()
        return TahoeDir(self, canonicalize_cap(cap))


class TahoeNode:
    def __init__(self, conn, uri):
        self.conn = conn
        self.uri = uri

    def get_metadata(self):
        f = self._open('?t=json')
        json = f.read()
        f.close()
        return simplejson.loads(json)

    def _open(self, postfix=''):
        url = '%suri/%s%s' % (self.conn.url, self.uri, postfix)
        log('*** Fetching: %r', url)
        return urllib.urlopen(url)


class TahoeDir(TahoeNode):
    def listdir(self):
        flag, md = self.get_metadata()
        assert flag == 'dirnode'
        result = []
        for name, (childflag, childmd) in md['children'].items():
            if childflag == 'dirnode':
                cls = TahoeDir
            else:
                cls = TahoeFile
            result.append((str(name), cls(self.conn, childmd['ro_uri'])))
        return result

class TahoeFile(TahoeNode):
    def size(self):
        rawsize = self.get_metadata()[1]['size']
        return rawsize

    def read(self):
        return self._open().read()


def canonicalize_cap(cap):
    cap = urllib.unquote(cap)
    i = cap.find('URI:')
    assert i != -1, 'A cap must contain "URI:...", but this does not: ' + cap
    return cap[i:]

def main(mountpoint, basedir):
    conn = TahoeConnection(basedir)
    root = conn.get_root()
    handler = Handler(mountpoint, ObjectFs(root))
    handler.loop_forever()

if __name__ == '__main__':
    basedir = os.path.expanduser(TahoeConfigDir)
    for i, arg in enumerate(sys.argv):
        if arg == '--basedir':
            basedir = sys.argv[i+1]
            sys.argv[i:i+2] = []

    [mountpoint] = sys.argv[1:]
    main(mountpoint, basedir)
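The `canonicalize_cap` helper above reduces a possibly URL-quoted capability string (for example one copied out of a browser address bar) to its bare `URI:...` core. The same helper in Python 3 terms differs only in the import, since `urllib.unquote` moved to `urllib.parse.unquote`:

```python
from urllib.parse import unquote

def canonicalize_cap(cap):
    """Reduce a possibly URL-quoted capability string to its 'URI:...' core."""
    cap = unquote(cap)
    i = cap.find('URI:')
    assert i != -1, 'A cap must contain "URI:...", but this does not: ' + cap
    return cap[i:]
```

For example, `canonicalize_cap('/uri/URI%3ADIR2%3Aabc')` yields `'URI:DIR2:abc'`, and an already-bare cap passes through unchanged.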
@@ -1,172 +0,0 @@
from handler import Handler
import stat, errno, os, time
from cStringIO import StringIO
from kernel import *


UID = os.getuid()
GID = os.getgid()
UMASK = os.umask(0); os.umask(UMASK)
INFINITE = 86400.0


class Node(object):
    __slots__ = ['attr', 'data']

    def __init__(self, attr, data=None):
        self.attr = attr
        self.data = data

    def type(self):
        return mode2type(self.attr.mode)

    def modified(self):
        self.attr.mtime = self.attr.atime = time.time()
        t = self.type()
        if t == TYPE_REG:
            f = self.data
            pos = f.tell()
            f.seek(0, 2)
            self.attr.size = f.tell()
            f.seek(pos)
        elif t == TYPE_DIR:
            nsubdirs = 0
            for nodeid in self.data.values():
                nsubdirs += nodeid & 1
            self.attr.nlink = 2 + nsubdirs


def newattr(s, mode=0666):
    now = time.time()
    return fuse_attr(ino = INVALID_INO,
                     size = 0,
                     mode = s | (mode & ~UMASK),
                     nlink = 1 + (s == stat.S_IFDIR),
                     atime = now,
                     mtime = now,
                     ctime = now,
                     uid = UID,
                     gid = GID)

# ____________________________________________________________

class Filesystem:

    def __init__(self, rootnode):
        self.nodes = {FUSE_ROOT_ID: rootnode}
        self.nextid = 2
        assert self.nextid > FUSE_ROOT_ID

    def getnode(self, nodeid):
        try:
            return self.nodes[nodeid]
        except KeyError:
            raise IOError(errno.ESTALE, nodeid)

    def forget(self, nodeid):
        pass

    def cachenode(self, node):
        id = self.nextid
        self.nextid += 2
        if node.type() == TYPE_DIR:
            id += 1
        self.nodes[id] = node
        return id

    def getattr(self, node):
        return node.attr, INFINITE

    def setattr(self, node, mode=None, uid=None, gid=None,
                size=None, atime=None, mtime=None):
        if mode is not None: node.attr.mode = (node.attr.mode&~0777) | mode
        if uid is not None: node.attr.uid = uid
        if gid is not None: node.attr.gid = gid
        if atime is not None: node.attr.atime = atime
        if mtime is not None: node.attr.mtime = mtime
        if size is not None and node.type() == TYPE_REG:
            node.data.seek(size)
            node.data.truncate()

    def listdir(self, node):
        for name, subnodeid in node.data.items():
            subnode = self.nodes[subnodeid]
            yield name, subnode.type()

    def lookup(self, node, name):
        try:
            return node.data[name], INFINITE
        except KeyError:
            pass
        if hasattr(node, 'findnode'):
            try:
                subnode = node.findnode(name)
            except KeyError:
                pass
            else:
                id = self.cachenode(subnode)
                node.data[name] = id
                return id, INFINITE
        raise IOError(errno.ENOENT, name)

    def open(self, node, mode):
        return node.data

    def mknod(self, node, name, mode):
        subnode = Node(newattr(mode & 0170000, mode & 0777))
        if subnode.type() == TYPE_REG:
            subnode.data = StringIO()
        else:
            raise NotImplementedError
        id = self.cachenode(subnode)
        node.data[name] = id
        node.modified()
        return id, INFINITE

    def mkdir(self, node, name, mode):
        subnode = Node(newattr(stat.S_IFDIR, mode & 0777), {})
        id = self.cachenode(subnode)
        node.data[name] = id
        node.modified()
        return id, INFINITE

    def symlink(self, node, linkname, target):
        subnode = Node(newattr(stat.S_IFLNK, 0777), target)
        id = self.cachenode(subnode)
        node.data[linkname] = id
        node.modified()
        return id, INFINITE

    def readlink(self, node):
        assert node.type() == TYPE_LNK
        return node.data

    def unlink(self, node, name):
        try:
            del node.data[name]
        except KeyError:
            raise IOError(errno.ENOENT, name)
        node.modified()

    rmdir = unlink

    def rename(self, oldnode, oldname, newnode, newname):
        if newnode.type() != TYPE_DIR:
            raise IOError(errno.ENOTDIR, newnode)
        try:
            nodeid = oldnode.data.pop(oldname)
        except KeyError:
            raise IOError(errno.ENOENT, oldname)
        oldnode.modified()
        newnode.data[newname] = nodeid
        newnode.modified()

    def modified(self, node):
        node.modified()

# ____________________________________________________________

if __name__ == '__main__':
    root = Node(newattr(stat.S_IFDIR), {})
    handler = Handler('/home/arigo/mnt', Filesystem(root))
    handler.loop_forever()
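`Filesystem.cachenode` above hands out node ids two at a time and reserves the low bit as a "directory" flag, which is what lets `Node.modified()` count a directory's subdirectories with `nodeid & 1` when computing `nlink`. A standalone sketch of that allocation scheme (`FUSE_ROOT_ID`, `TYPE_DIR`, and `TYPE_REG` are stand-ins for the constants the removed file imported from its `kernel` module):

```python
FUSE_ROOT_ID = 1               # stand-in for the kernel module's constant
TYPE_DIR, TYPE_REG = 'dir', 'reg'

class IdAllocator:
    def __init__(self):
        self.nodes = {}
        self.nextid = 2        # first id handed out after FUSE_ROOT_ID

    def cachenode(self, node, nodetype):
        nodeid = self.nextid
        self.nextid += 2       # advance in pairs: even = file, odd = directory
        if nodetype == TYPE_DIR:
            nodeid += 1        # set the low bit on directories
        self.nodes[nodeid] = node
        return nodeid
```

With this encoding, `2 + sum(i & 1 for i in children)` reproduces the `nlink` bookkeeping without consulting each child's attributes.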
File diff suppressed because it is too large
@@ -1,908 +0,0 @@
#! /usr/bin/env python
'''
Unit and system tests for tahoe-fuse.
'''

# Note: It's always a SetupFailure, not a TestFailure if a webapi
# operation fails, because this does not indicate a fuse interface
# failure.

# TODO: Unmount after tests regardless of failure or success!

# TODO: Test mismatches between tahoe and fuse/posix.  What about nodes
# with crazy names ('\0', unicode, '/', '..')?  Huuuuge files?
# Huuuuge directories...  As tahoe approaches production quality, it'd
# be nice if the fuse interface did so also by hardening against such cases.

# FIXME: Only create / launch necessary nodes.  Do we still need an introducer and three nodes?

# FIXME: This framework might be replaceable with twisted.trial,
# especially the "layer" design, which is a bit cumbersome when
# using recursion to manage multiple clients.

# FIXME: Identify all race conditions (hint: starting clients, versus
# using the grid fs).

import sys, os, shutil, unittest, subprocess
import tempfile, re, time, random, httplib, urllib
#import traceback

from twisted.python import usage

if sys.platform.startswith('darwin'):
    UNMOUNT_CMD = ['umount']
else:
    # linux, and until we hear otherwise, all other platforms with fuse, by assumption
    UNMOUNT_CMD = ['fusermount', '-u']

# Import fuse implementations:
#FuseDir = os.path.join('.', 'contrib', 'fuse')
#if not os.path.isdir(FuseDir):
#    raise SystemExit('''
#Could not find directory "%s".  Please run this script from the tahoe
#source base directory.
#''' % (FuseDir,))
FuseDir = '.'


### Load each implementation
sys.path.append(os.path.join(FuseDir, 'impl_a'))
import tahoe_fuse as impl_a
sys.path.append(os.path.join(FuseDir, 'impl_b'))
import pyfuse.tahoe as impl_b
sys.path.append(os.path.join(FuseDir, 'impl_c'))
import blackmatch as impl_c

### config info about each impl, including which make sense to run
implementations = {
    'impl_a': dict(module=impl_a,
                   mount_args=['--basedir', '%(nodedir)s', '%(mountpath)s', ],
                   mount_wait=True,
                   suites=['read', ]),
    'impl_b': dict(module=impl_b,
                   todo=True,
                   mount_args=['--basedir', '%(nodedir)s', '%(mountpath)s', ],
                   mount_wait=False,
                   suites=['read', ]),
    'impl_c': dict(module=impl_c,
                   mount_args=['--cache-timeout', '0', '--root-uri', '%(root-uri)s',
                               '--node-directory', '%(nodedir)s', '%(mountpath)s', ],
                   mount_wait=True,
                   suites=['read', 'write', ]),
    'impl_c_no_split': dict(module=impl_c,
                            mount_args=['--cache-timeout', '0', '--root-uri', '%(root-uri)s',
                                        '--no-split',
                                        '--node-directory', '%(nodedir)s', '%(mountpath)s', ],
                            mount_wait=True,
                            suites=['read', 'write', ]),
    }

if sys.platform == 'darwin':
    del implementations['impl_a']
    del implementations['impl_b']

default_catch_up_pause = 0
if sys.platform == 'linux2':
    default_catch_up_pause = 2

class FuseTestsOptions(usage.Options):
    optParameters = [
        ["test-type", None, "both",
         "Type of test to run; unit, system or both"
         ],
        ["implementations", None, "all",
         "Comma separated list of implementations to test, or 'all'"
         ],
        ["suites", None, "all",
         "Comma separated list of test suites to run, or 'all'"
         ],
        ["tests", None, None,
         "Comma separated list of specific tests to run"
         ],
        ["path-to-tahoe", None, "../../bin/tahoe",
         "Which 'tahoe' script to use to create test nodes"],
        ["tmp-dir", None, "/tmp",
         "Where the test should create temporary files"],
         # Note; this is '/tmp' because on leopard, tempfile.mkdtemp creates
         # directories in a location which leads paths to exceed what macfuse
         # can handle without leaking un-umount-able fuse processes.
        ["catch-up-pause", None, str(default_catch_up_pause),
         "Pause between tahoe operations and fuse tests thereon"],
        ]
    optFlags = [
        ["debug-wait", None,
         "Causes the test system to pause at various points, to facilitate debugging"],
        ["web-open", None,
         "Opens a web browser to the web ui at the start of each impl's tests"],
        ["no-cleanup", False,
         "Prevents the cleanup of the working directories, to allow analysis thereof"],
        ]

    def postOptions(self):
        if self['suites'] == 'all':
            self.suites = ['read', 'write']
            # [ ] todo: deduce this from looking for test_ in dir(self)
        else:
            self.suites = map(str.strip, self['suites'].split(','))
        if self['implementations'] == 'all':
            self.implementations = implementations.keys()
        else:
            self.implementations = map(str.strip, self['implementations'].split(','))
        if self['tests']:
            self.tests = map(str.strip, self['tests'].split(','))
        else:
            self.tests = None
        self.catch_up_pause = float(self['catch-up-pause'])

### Main flow control:
def main(args):
    config = FuseTestsOptions()
    config.parseOptions(args[1:])

    target = 'all'
    if len(args) > 1:
        target = args.pop(1)

    test_type = config['test-type']
    if test_type not in ('both', 'unit', 'system'):
        raise usage.error('test-type %r not supported' % (test_type,))

    if test_type in ('both', 'unit'):
        run_unit_tests([args[0]])

    if test_type in ('both', 'system'):
        return run_system_test(config)


def run_unit_tests(argv):
    print 'Running Unit Tests.'
    try:
        unittest.main(argv=argv)
    except SystemExit, se:
        pass
    print 'Unit Tests complete.\n'


def run_system_test(config):
    return SystemTest(config).run()

def drepr(obj):
    r = repr(obj)
    if len(r) > 200:
        return '%s ... %s [%d]' % (r[:100], r[-100:], len(r))
    else:
        return r
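The `drepr` helper above keeps failure messages readable by eliding the middle of very long reprs (the megabyte-sized random bodies used by the read/write tests) while recording the full length. The same helper ported to Python 3 is a direct transcription:

```python
def drepr(obj):
    """repr(), but elide the middle of anything longer than 200 characters."""
    r = repr(obj)
    if len(r) > 200:
        # Keep the first and last 100 characters, plus the total length.
        return '%s ... %s [%d]' % (r[:100], r[-100:], len(r))
    else:
        return r
```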
|
||||
|
||||
### System Testing:
|
||||
class SystemTest (object):
|
||||
def __init__(self, config):
|
||||
self.config = config
|
||||
|
||||
# These members represent test state:
|
||||
self.cliexec = None
|
||||
self.testroot = None
|
||||
|
||||
# This test state is specific to the first client:
|
||||
self.port = None
|
||||
self.clientbase = None
|
||||
|
||||
## Top-level flow control:
|
||||
# These "*_layer" methods call each other in a linear fashion, using
|
||||
# exception unwinding to do cleanup properly. Each "layer" invokes
|
||||
# a deeper layer, and each layer does its own cleanup upon exit.
|
||||
|
||||
def run(self):
|
||||
print '\n*** Setting up system tests.'
|
||||
try:
|
||||
results = self.init_cli_layer()
|
||||
print '\n*** System Tests complete:'
|
||||
total_failures = todo_failures = 0
|
||||
for result in results:
|
||||
impl_name, failures, total = result
|
||||
if implementations[impl_name].get('todo'):
|
||||
todo_failures += failures
|
||||
else:
|
||||
total_failures += failures
|
||||
print 'Implementation %s: %d failed out of %d.' % result
|
||||
if total_failures:
|
||||
print '%s total failures, %s todo' % (total_failures, todo_failures)
|
||||
return 1
|
||||
else:
|
||||
return 0
|
||||
except SetupFailure, sfail:
|
||||
print
|
||||
print sfail
|
||||
print '\n*** System Tests were not successfully completed.'
|
||||
return 1
|
||||
|
||||
def maybe_wait(self, msg='waiting', or_if_webopen=False):
|
||||
if self.config['debug-wait'] or or_if_webopen and self.config['web-open']:
|
||||
print msg
|
||||
raw_input()
|
||||
|
||||
def maybe_webopen(self, where=None):
|
||||
if self.config['web-open']:
|
||||
import webbrowser
|
||||
url = self.weburl
|
||||
if where is not None:
|
||||
url += urllib.quote(where)
|
||||
webbrowser.open(url)
|
||||
|
||||
def maybe_pause(self):
|
||||
time.sleep(self.config.catch_up_pause)
|
||||
|
||||
def init_cli_layer(self):
|
||||
'''This layer finds the appropriate tahoe executable.'''
|
||||
#self.cliexec = os.path.join('.', 'bin', 'tahoe')
|
||||
self.cliexec = self.config['path-to-tahoe']
|
||||
version = self.run_tahoe('--version')
|
||||
print 'Using %r with version:\n%s' % (self.cliexec, version.rstrip())
|
||||
|
||||
return self.create_testroot_layer()
|
||||
|
||||
def create_testroot_layer(self):
|
||||
print 'Creating test base directory.'
|
||||
#self.testroot = tempfile.mkdtemp(prefix='tahoe_fuse_test_')
|
||||
#self.testroot = tempfile.mkdtemp(prefix='tahoe_fuse_test_', dir='/tmp/')
|
||||
tmpdir = self.config['tmp-dir']
|
||||
if tmpdir:
|
||||
self.testroot = tempfile.mkdtemp(prefix='tahoe_fuse_test_', dir=tmpdir)
|
||||
else:
|
||||
self.testroot = tempfile.mkdtemp(prefix='tahoe_fuse_test_')
|
||||
try:
|
||||
return self.launch_introducer_layer()
|
||||
finally:
|
||||
if not self.config['no-cleanup']:
|
||||
print 'Cleaning up test root directory.'
|
||||
try:
|
||||
shutil.rmtree(self.testroot)
|
||||
except Exception, e:
|
||||
print 'Exception removing test root directory: %r' % (self.testroot, )
|
||||
print 'Ignoring cleanup exception: %r' % (e,)
|
||||
else:
|
||||
print 'Leaving test root directory: %r' % (self.testroot, )
|
||||
|
||||
|
||||
def launch_introducer_layer(self):
|
||||
print 'Launching introducer.'
|
||||
introbase = os.path.join(self.testroot, 'introducer')
|
||||
|
||||
# NOTE: We assume if tahoe exits with non-zero status, no separate
|
||||
# tahoe child process is still running.
|
||||
createoutput = self.run_tahoe('create-introducer', '--basedir', introbase)
|
||||
|
||||
self.check_tahoe_output(createoutput, ExpectedCreationOutput, introbase)
|
||||
|
||||
startoutput = self.run_tahoe('start', '--basedir', introbase)
|
||||
try:
|
||||
self.check_tahoe_output(startoutput, ExpectedStartOutput, introbase)
|
||||
|
||||
return self.launch_clients_layer(introbase)
|
||||
|
||||
finally:
|
||||
print 'Stopping introducer node.'
|
||||
self.stop_node(introbase)
|
||||
|
||||
def set_tahoe_option(self, base, key, value):
|
||||
import re
|
||||
|
||||
filename = os.path.join(base, 'tahoe.cfg')
|
||||
content = open(filename).read()
|
||||
content = re.sub('%s = (.+)' % key, '%s = %s' % (key, value), content)
|
||||
open(filename, 'w').write(content)
|
||||
|
||||
TotalClientsNeeded = 3
|
||||
def launch_clients_layer(self, introbase, clientnum = 0):
|
||||
if clientnum >= self.TotalClientsNeeded:
|
||||
self.maybe_wait('waiting (launched clients)')
|
||||
ret = self.create_test_dirnode_layer()
|
||||
self.maybe_wait('waiting (ran tests)', or_if_webopen=True)
|
||||
return ret
|
||||
|
||||
tmpl = 'Launching client %d of %d.'
|
||||
print tmpl % (clientnum,
|
||||
self.TotalClientsNeeded)
|
||||
|
||||
base = os.path.join(self.testroot, 'client_%d' % (clientnum,))
|
||||
|
||||
output = self.run_tahoe('create-node', '--basedir', base)
|
||||
self.check_tahoe_output(output, ExpectedCreationOutput, base)
|
||||
|
||||
if clientnum == 0:
|
||||
# The first client is special:
|
||||
self.clientbase = base
|
||||
self.port = random.randrange(1024, 2**15)
|
||||
|
||||
self.set_tahoe_option(base, 'web.port', 'tcp:%d:interface=127.0.0.1' % self.port)
|
||||
|
||||
self.weburl = "http://127.0.0.1:%d/" % (self.port,)
|
||||
print self.weburl
|
||||
else:
|
||||
self.set_tahoe_option(base, 'web.port', '')
|
||||
|
||||
introfurl = os.path.join(introbase, 'introducer.furl')
|
||||
|
||||
furl = open(introfurl).read().strip()
|
||||
self.set_tahoe_option(base, 'introducer.furl', furl)
|
||||
|
||||
# NOTE: We assume if tahoe exist with non-zero status, no separate
|
||||
# tahoe child process is still running.
|
||||
startoutput = self.run_tahoe('start', '--basedir', base)
|
||||
try:
|
||||
self.check_tahoe_output(startoutput, ExpectedStartOutput, base)
|
||||
|
||||
return self.launch_clients_layer(introbase, clientnum+1)
|
||||
|
||||
finally:
|
||||
print 'Stopping client node %d.' % (clientnum,)
|
||||
self.stop_node(base)
|
||||
|
||||
def create_test_dirnode_layer(self):
|
||||
print 'Creating test dirnode.'
|
||||
|
||||
cap = self.create_dirnode()
|
||||
|
||||
f = open(os.path.join(self.clientbase, 'private', 'root_dir.cap'), 'w')
|
||||
f.write(cap)
|
||||
f.close()
|
||||
|
||||
return self.mount_fuse_layer(cap)
|
||||
|
||||
def mount_fuse_layer(self, root_uri):
|
||||
mpbase = os.path.join(self.testroot, 'mountpoint')
|
||||
os.mkdir(mpbase)
|
||||
results = []
|
||||
|
||||
if self.config['debug-wait']:
|
||||
ImplProcessManager.debug_wait = True
|
||||
|
||||
#for name, kwargs in implementations.items():
|
||||
for name in self.config.implementations:
|
||||
kwargs = implementations[name]
|
||||
#print 'instantiating %s: %r' % (name, kwargs)
|
||||
implprocmgr = ImplProcessManager(name, **kwargs)
|
||||
print '\n*** Testing impl: %r' % (implprocmgr.name)
|
||||
implprocmgr.configure(self.clientbase, mpbase)
|
||||
implprocmgr.mount()
|
||||
try:
|
||||
failures, total = self.run_test_layer(root_uri, implprocmgr)
|
||||
result = (implprocmgr.name, failures, total)
|
||||
tmpl = '\n*** Test Results implementation %s: %d failed out of %d.'
|
||||
print tmpl % result
|
||||
results.append(result)
|
||||
finally:
|
||||
implprocmgr.umount()
|
||||
return results
|
||||
|
||||
def run_test_layer(self, root_uri, iman):
|
||||
self.maybe_webopen('uri/'+root_uri)
|
||||
failures = 0
|
||||
testnum = 0
|
||||
numtests = 0
|
||||
if self.config.tests:
|
||||
tests = self.config.tests
|
||||
else:
|
||||
tests = list(set(self.config.suites).intersection(set(iman.suites)))
|
||||
self.maybe_wait('waiting (about to run tests)')
|
||||
for test in tests:
|
||||
testnames = [n for n in sorted(dir(self)) if n.startswith('test_'+test)]
|
||||
numtests += len(testnames)
|
||||
print 'running %s %r tests' % (len(testnames), test,)
|
||||
for testname in testnames:
|
||||
testnum += 1
|
||||
print '\n*** Running test #%d: %s' % (testnum, testname)
|
||||
try:
|
||||
testcap = self.create_dirnode()
|
||||
dirname = '%s_%s' % (iman.name, testname)
|
||||
self.attach_node(root_uri, testcap, dirname)
|
||||
method = getattr(self, testname)
|
||||
method(testcap, testdir = os.path.join(iman.mountpath, dirname))
|
||||
print 'Test succeeded.'
|
||||
except TestFailure, f:
|
||||
print f
|
||||
#print traceback.format_exc()
|
||||
failures += 1
|
||||
except:
|
||||
print 'Error in test code... Cleaning up.'
|
||||
raise
|
||||
return (failures, numtests)
|
||||
|
||||
# Tests:
|
||||
def test_read_directory_existence(self, testcap, testdir):
|
||||
if not wrap_os_error(os.path.isdir, testdir):
|
||||
raise TestFailure('Attached test directory not found: %r', testdir)
|
||||
|
||||
def test_read_empty_directory_listing(self, testcap, testdir):
|
||||
listing = wrap_os_error(os.listdir, testdir)
|
||||
if listing:
|
||||
raise TestFailure('Expected empty directory, found: %r', listing)
|
||||
|
||||
def test_read_directory_listing(self, testcap, testdir):
|
||||
names = []
|
||||
filesizes = {}
|
||||
|
||||
for i in range(3):
|
||||
fname = 'file_%d' % (i,)
|
||||
names.append(fname)
|
||||
body = 'Hello World #%d!' % (i,)
|
||||
filesizes[fname] = len(body)
|
||||
|
||||
cap = self.webapi_call('PUT', '/uri', body)
|
||||
self.attach_node(testcap, cap, fname)
|
||||
|
||||
dname = 'dir_%d' % (i,)
|
||||
names.append(dname)
|
||||
|
||||
cap = self.create_dirnode()
|
||||
self.attach_node(testcap, cap, dname)
|
||||
|
||||
names.sort()
|
||||
|
||||
listing = wrap_os_error(os.listdir, testdir)
|
||||
listing.sort()
|
||||
|
||||
if listing != names:
|
||||
tmpl = 'Expected directory list containing %r but fuse gave %r'
|
||||
raise TestFailure(tmpl, names, listing)
|
||||
|
||||
for file, size in filesizes.items():
|
||||
st = wrap_os_error(os.stat, os.path.join(testdir, file))
|
||||
if st.st_size != size:
|
||||
tmpl = 'Expected %r size of %r but fuse returned %r'
|
||||
raise TestFailure(tmpl, file, size, st.st_size)
|
||||
|
||||
def test_read_file_contents(self, testcap, testdir):
|
||||
name = 'hw.txt'
|
||||
body = 'Hello World!'
|
||||
|
||||
cap = self.webapi_call('PUT', '/uri', body)
|
||||
self.attach_node(testcap, cap, name)
|
||||
|
||||
path = os.path.join(testdir, name)
|
||||
try:
|
||||
found = open(path, 'r').read()
|
||||
except Exception, err:
|
||||
tmpl = 'Could not read file contents of %r: %r'
|
||||
raise TestFailure(tmpl, path, err)
|
||||
|
||||
if found != body:
|
||||
tmpl = 'Expected file contents %r but found %r'
|
||||
raise TestFailure(tmpl, body, found)
|
||||
|
||||
def test_read_in_random_order(self, testcap, testdir):
|
||||
sz = 2**20
|
||||
bs = 2**10
|
||||
assert(sz % bs == 0)
|
||||
name = 'random_read_order'
|
||||
body = os.urandom(sz)
|
||||
|
||||
cap = self.webapi_call('PUT', '/uri', body)
|
||||
self.attach_node(testcap, cap, name)
|
||||
|
||||
# XXX this should also do a test where sz%bs != 0, so that it correctly tests
|
||||
# the edge case where the last read is a 'short' block
|
||||
path = os.path.join(testdir, name)
|
||||
try:
|
||||
fsize = os.path.getsize(path)
|
||||
if fsize != len(body):
|
||||
tmpl = 'Expected file size %s but found %s'
|
||||
raise TestFailure(tmpl, len(body), fsize)
|
||||
except Exception, err:
|
||||
tmpl = 'Could not read file size for %r: %r'
|
||||
raise TestFailure(tmpl, path, err)
|
||||
|
||||
try:
|
||||
f = open(path, 'r')
|
||||
posns = range(0,sz,bs)
|
||||
random.shuffle(posns)
|
||||
data = [None] * (sz/bs)
|
||||
for p in posns:
|
||||
f.seek(p)
|
||||
data[p/bs] = f.read(bs)
|
||||
found = ''.join(data)
|
||||
except Exception, err:
|
||||
tmpl = 'Could not read file %r: %r'
|
||||
raise TestFailure(tmpl, path, err)
|
||||
|
||||
if found != body:
|
||||
tmpl = 'Expected file contents %s but found %s'
|
||||
raise TestFailure(tmpl, drepr(body), drepr(found))
|
||||
|
||||
def get_file(self, dircap, path):
|
||||
body = self.webapi_call('GET', '/uri/%s/%s' % (dircap, path))
|
||||
return body
|
||||
|
||||
    def test_write_tiny_file(self, testcap, testdir):
        self._write_test_linear(testcap, testdir, name='tiny.junk', bs=2**9, sz=2**9)

    def test_write_linear_small_writes(self, testcap, testdir):
        self._write_test_linear(testcap, testdir, name='large_linear.junk', bs=2**9, sz=2**20)

    def test_write_linear_large_writes(self, testcap, testdir):
        # at least on the mac, large io block sizes are reduced to 64k writes through fuse
        self._write_test_linear(testcap, testdir, name='small_linear.junk', bs=2**18, sz=2**20)

    def _write_test_linear(self, testcap, testdir, name, bs, sz):
        body = os.urandom(sz)
        try:
            path = os.path.join(testdir, name)
            f = file(path, 'w')
        except Exception, err:
            tmpl = 'Could not open file for write at %r: %r'
            raise TestFailure(tmpl, path, err)
        try:
            for posn in range(0,sz,bs):
                f.write(body[posn:posn+bs])
            f.close()
        except Exception, err:
            tmpl = 'Could not write to file %r: %r'
            raise TestFailure(tmpl, path, err)

        self.maybe_pause()
        self._check_write(testcap, name, body)

    def _check_write(self, testcap, name, expected_body):
        uploaded_body = self.get_file(testcap, name)
        if uploaded_body != expected_body:
            tmpl = 'Expected file contents %s but found %s'
            raise TestFailure(tmpl, drepr(expected_body), drepr(uploaded_body))

    def test_write_overlapping_small_writes(self, testcap, testdir):
        self._write_test_overlap(testcap, testdir, name='large_overlap', bs=2**9, sz=2**20)

    def test_write_overlapping_large_writes(self, testcap, testdir):
        self._write_test_overlap(testcap, testdir, name='small_overlap', bs=2**18, sz=2**20)

    def _write_test_overlap(self, testcap, testdir, name, bs, sz):
        body = os.urandom(sz)
        try:
            path = os.path.join(testdir, name)
            f = file(path, 'w')
        except Exception, err:
            tmpl = 'Could not open file for write at %r: %r'
            raise TestFailure(tmpl, path, err)
        try:
            for posn in range(0,sz,bs):
                start = max(0, posn-bs)
                end = min(sz, posn+bs)
                f.seek(start)
                f.write(body[start:end])
            f.close()
        except Exception, err:
            tmpl = 'Could not write to file %r: %r'
            raise TestFailure(tmpl, path, err)

        self.maybe_pause()
        self._check_write(testcap, name, body)


    def test_write_random_scatter(self, testcap, testdir):
        sz = 2**20
        name = 'random_scatter'
        body = os.urandom(sz)

        def rsize(sz=sz):
            return min(int(random.paretovariate(.25)), sz/12)

        # first chop up whole file into random sized chunks
        slices = []
        posn = 0
        while posn < sz:
            size = rsize()
            slices.append( (posn, body[posn:posn+size]) )
            posn += size
        random.shuffle(slices) # and randomise their order

        try:
            path = os.path.join(testdir, name)
            f = file(path, 'w')
        except Exception, err:
            tmpl = 'Could not open file for write at %r: %r'
            raise TestFailure(tmpl, path, err)
        try:
            # write all slices: we hence know entire file is ultimately written
            # write random excerpts: this provides for mixed and varied overlaps
            for posn,slice in slices:
                f.seek(posn)
                f.write(slice)
                rposn = random.randint(0,sz)
                f.seek(rposn)
                f.write(body[rposn:rposn+rsize()])
            f.close()
        except Exception, err:
            tmpl = 'Could not write to file %r: %r'
            raise TestFailure(tmpl, path, err)

        self.maybe_pause()
        self._check_write(testcap, name, body)

    def test_write_partial_overwrite(self, testcap, testdir):
        name = 'partial_overwrite'
        body = '_'*132
        overwrite = '^'*8
        position = 26

        def write_file(path, mode, contents, position=None):
            try:
                f = file(path, mode)
                if position is not None:
                    f.seek(position)
                f.write(contents)
                f.close()
            except Exception, err:
                tmpl = 'Could not write to file %r: %r'
                raise TestFailure(tmpl, path, err)

        def read_file(path):
            try:
                f = file(path, 'rb')
                contents = f.read()
                f.close()
            except Exception, err:
                tmpl = 'Could not read file %r: %r'
                raise TestFailure(tmpl, path, err)
            return contents

        path = os.path.join(testdir, name)
        #write_file(path, 'w', body)

        cap = self.webapi_call('PUT', '/uri', body)
        self.attach_node(testcap, cap, name)
        self.maybe_pause()

        contents = read_file(path)
        if contents != body:
            raise TestFailure('File contents mismatch (%r) %r v.s. %r', path, contents, body)

        write_file(path, 'r+', overwrite, position)
        contents = read_file(path)
        expected = body[:position] + overwrite + body[position+len(overwrite):]
        if contents != expected:
            raise TestFailure('File contents mismatch (%r) %r v.s. %r', path, contents, expected)


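The partial-overwrite check above splices the overwrite string into the original body at the write offset. A standalone restatement of that arithmetic (the helper name is hypothetical, not part of the removed file):

```python
# Hypothetical standalone restatement of the overwrite arithmetic used by
# test_write_partial_overwrite: splice `overwrite` into `body` at `position`.
def expected_after_overwrite(body, overwrite, position):
    return body[:position] + overwrite + body[position + len(overwrite):]

# e.g. 132 underscores with 8 carets written at offset 26:
expected = expected_after_overwrite('_' * 132, '^' * 8, 26)
```

The overall length is unchanged because the spliced-out span and the overwrite have the same length.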
    # Utilities:
    def run_tahoe(self, *args):
        realargs = ('tahoe',) + args
        status, output = gather_output(realargs, executable=self.cliexec)
        if status != 0:
            tmpl = 'The tahoe cli exited with nonzero status.\n'
            tmpl += 'Executable: %r\n'
            tmpl += 'Command arguments: %r\n'
            tmpl += 'Exit status: %r\n'
            tmpl += 'Output:\n%s\n[End of tahoe output.]\n'
            raise SetupFailure(tmpl,
                               self.cliexec,
                               realargs,
                               status,
                               output)
        return output

    def check_tahoe_output(self, output, expected, expdir):
        ignorable_lines = map(re.compile, [
            '.*site-packages/zope\.interface.*\.egg/zope/__init__.py:3: UserWarning: Module twisted was already imported from .*egg is being added to sys.path',
            ' import pkg_resources',
            ])
        def ignore_line(line):
            for ignorable_line in ignorable_lines:
                if ignorable_line.match(line):
                    return True
            else:
                return False
        output = '\n'.join( [ line
                              for line in output.split('\n')+['']
                              #if line not in ignorable_lines ] )
                              if not ignore_line(line) ] )
        m = re.match(expected, output, re.M)
        if m is None:
            tmpl = 'The output of tahoe did not match the expectation:\n'
            tmpl += 'Expected regex: %s\n'
            tmpl += 'Actual output: %r\n'
            self.warn(tmpl, expected, output)

        elif expdir != m.group('path'):
            tmpl = 'The output of tahoe refers to an unexpected directory:\n'
            tmpl += 'Expected directory: %r\n'
            tmpl += 'Actual directory: %r\n'
            self.warn(tmpl, expdir, m.group(1))

    def stop_node(self, basedir):
        try:
            self.run_tahoe('stop', '--basedir', basedir)
        except Exception, e:
            print 'Failed to stop tahoe node.'
            print 'Ignoring cleanup exception:'
            # Indent the exception description:
            desc = str(e).rstrip()
            print ' ' + desc.replace('\n', '\n ')

    def webapi_call(self, method, path, body=None, **options):
        if options:
            path = path + '?' + ('&'.join(['%s=%s' % kv for kv in options.items()]))

        conn = httplib.HTTPConnection('127.0.0.1', self.port)
        conn.request(method, path, body = body)
        resp = conn.getresponse()

        if resp.status != 200:
            tmpl = 'A webapi operation failed.\n'
            tmpl += 'Request: %r %r\n'
            tmpl += 'Body:\n%s\n'
            tmpl += 'Response:\nStatus %r\nBody:\n%s'
            raise SetupFailure(tmpl,
                               method, path,
                               body or '',
                               resp.status, body)

        return resp.read()

    def create_dirnode(self):
        return self.webapi_call('PUT', '/uri', t='mkdir').strip()

    def attach_node(self, dircap, childcap, childname):
        body = self.webapi_call('PUT',
                                '/uri/%s/%s' % (dircap, childname),
                                body = childcap,
                                t = 'uri',
                                replace = 'false')
        assert body.strip() == childcap, `body, dircap, childcap, childname`

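The query-string handling in webapi_call above amounts to rendering each keyword option as a `key=value` pair and joining the pairs with `&`. A minimal sketch (the function name is hypothetical; items are sorted here for deterministic output, which the original did not do):

```python
# Hypothetical sketch of how webapi_call assembles its request path from
# keyword options. Sorting is added only so the result is deterministic.
def build_request_path(path, **options):
    if options:
        path = path + '?' + '&'.join(
            '%s=%s' % kv for kv in sorted(options.items()))
    return path
```

For example, `build_request_path('/uri', t='mkdir')` yields the path used by create_dirnode.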
    def polling_operation(self, operation, polldesc, timeout = 10.0, pollinterval = 0.2):
        totaltime = timeout # Fudging for edge-case SetupFailure description...

        totalattempts = int(timeout / pollinterval)

        starttime = time.time()
        for attempt in range(totalattempts):
            opstart = time.time()

            try:
                result = operation()
            except KeyboardInterrupt, e:
                raise
            except Exception, e:
                result = False

            totaltime = time.time() - starttime

            if result is not False:
                #tmpl = '(Polling took over %.2f seconds.)'
                #print tmpl % (totaltime,)
                return result

            elif totaltime > timeout:
                break

            else:
                opdelay = time.time() - opstart
                realinterval = max(0., pollinterval - opdelay)

                #tmpl = '(Poll attempt %d failed after %.2f seconds, sleeping %.2f seconds.)'
                #print tmpl % (attempt+1, opdelay, realinterval)
                time.sleep(realinterval)


        tmpl = 'Timeout while polling for: %s\n'
        tmpl += 'Waited %.2f seconds (%d polls).'
        raise SetupFailure(tmpl, polldesc, totaltime, attempt+1)

    def warn(self, tmpl, *args):
        print ('Test Warning: ' + tmpl) % args


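The pattern polling_operation implements (retry until the operation returns something other than False, sleeping only for whatever remains of the poll interval after each attempt) can be sketched in modern Python as follows; the name and simplified error reporting are hypothetical:

```python
import time

# Minimal sketch of the polling pattern used by polling_operation above:
# retry `operation` until it returns a non-False value or `timeout` elapses,
# compensating the sleep for the time each attempt itself took.
def poll(operation, timeout=10.0, interval=0.2):
    start = time.time()
    while True:
        opstart = time.time()
        try:
            result = operation()
        except Exception:
            result = False
        if result is not False:
            return result
        if time.time() - start > timeout:
            raise RuntimeError('timeout while polling')
        # sleep only for what remains of the interval after this attempt
        time.sleep(max(0.0, interval - (time.time() - opstart)))
```

Compensating the sleep keeps the attempt rate close to one per `interval` even when the operation itself is slow.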
# SystemTest Exceptions:
class Failure (Exception):
    def __init__(self, tmpl, *args):
        msg = self.Prefix + (tmpl % args)
        Exception.__init__(self, msg)

class SetupFailure (Failure):
    Prefix = 'Setup Failure - The test framework encountered an error:\n'

class TestFailure (Failure):
    Prefix = 'TestFailure: '



### Unit Tests:
class Impl_A_UnitTests (unittest.TestCase):
    '''Tests small stand-alone functions.'''
    def test_canonicalize_cap(self):
        iopairs = [('http://127.0.0.1:3456/uri/URI:DIR2:yar9nnzsho6czczieeesc65sry:upp1pmypwxits3w9izkszgo1zbdnsyk3nm6h7e19s7os7s6yhh9y',
                    'URI:DIR2:yar9nnzsho6czczieeesc65sry:upp1pmypwxits3w9izkszgo1zbdnsyk3nm6h7e19s7os7s6yhh9y'),
                   ('http://127.0.0.1:3456/uri/URI%3ACHK%3Ak7ktp1qr7szmt98s1y3ha61d9w%3A8tiy8drttp65u79pjn7hs31po83e514zifdejidyeo1ee8nsqfyy%3A3%3A12%3A242?filename=welcome.html',
                    'URI:CHK:k7ktp1qr7szmt98s1y3ha61d9w:8tiy8drttp65u79pjn7hs31po83e514zifdejidyeo1ee8nsqfyy:3:12:242?filename=welcome.html')]

        for input, output in iopairs:
            result = impl_a.canonicalize_cap(input)
            self.failUnlessEqual(output, result, 'input == %r' % (input,))



### Misc:
class ImplProcessManager(object):
    debug_wait = False

    def __init__(self, name, module, mount_args, mount_wait, suites, todo=False):
        self.name = name
        self.module = module
        self.script = module.__file__
        self.mount_args = mount_args
        self.mount_wait = mount_wait
        self.suites = suites
        self.todo = todo

    def maybe_wait(self, msg='waiting'):
        if self.debug_wait:
            print msg
            raw_input()

    def configure(self, client_nodedir, mountpoint):
        self.client_nodedir = client_nodedir
        self.mountpath = os.path.join(mountpoint, self.name)
        os.mkdir(self.mountpath)

    def mount(self):
        print 'Mounting implementation: %s (%s)' % (self.name, self.script)

        rootdirfile = os.path.join(self.client_nodedir, 'private', 'root_dir.cap')
        root_uri = file(rootdirfile, 'r').read().strip()
        fields = {'mountpath': self.mountpath,
                  'nodedir': self.client_nodedir,
                  'root-uri': root_uri,
                  }
        args = ['python', self.script] + [ arg%fields for arg in self.mount_args ]
        print ' '.join(args)
        self.maybe_wait('waiting (about to launch fuse)')

        if self.mount_wait:
            exitcode, output = gather_output(args)
            if exitcode != 0:
                tmpl = '%r failed to launch:\n'
                tmpl += 'Exit Status: %r\n'
                tmpl += 'Output:\n%s\n'
                raise SetupFailure(tmpl, self.script, exitcode, output)
        else:
            self.proc = subprocess.Popen(args)

    def umount(self):
        print 'Unmounting implementation: %s' % (self.name,)
        args = UNMOUNT_CMD + [self.mountpath]
        print args
        self.maybe_wait('waiting (unmount)')
        #print os.system('ls -l '+self.mountpath)
        ec, out = gather_output(args)
        if ec != 0 or out:
            tmpl = '%r failed to unmount:\n' % (' '.join(UNMOUNT_CMD),)
            tmpl += 'Arguments: %r\n'
            tmpl += 'Exit Status: %r\n'
            tmpl += 'Output:\n%s\n'
            raise SetupFailure(tmpl, args, ec, out)


def gather_output(*args, **kwargs):
    '''
    This expects the child does not require input and that it closes
    stdout/err eventually.
    '''
    p = subprocess.Popen(stdout = subprocess.PIPE,
                         stderr = subprocess.STDOUT,
                         *args,
                         **kwargs)
    output = p.stdout.read()
    exitcode = p.wait()
    return (exitcode, output)


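The gather_output helper above is Python 2. A modern (Python 3) rendering of the same technique, run a child process with stderr merged into stdout and return its exit code and combined output, might look like this; the name `gather_output3` and the usage line are illustrative only:

```python
import subprocess
import sys

# Modern sketch of gather_output above: stderr is redirected into stdout,
# the combined stream is read to EOF, then the exit code is collected.
def gather_output3(args):
    p = subprocess.Popen(args, stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT)
    output = p.stdout.read()   # bytes in Python 3
    exitcode = p.wait()
    return (exitcode, output)

# e.g. run the current interpreter as the child process:
ec, out = gather_output3([sys.executable, '-c', 'print("ok")'])
```

As in the original, this assumes the child needs no stdin and eventually closes its output.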
def wrap_os_error(meth, *args):
    try:
        return meth(*args)
    except os.error, e:
        raise TestFailure('%s', e)


ExpectedCreationOutput = r'(introducer|client) created in (?P<path>.*?)\n'
ExpectedStartOutput = r'(.*\n)*STARTING (?P<path>.*?)\n(introducer|client) node probably started'


if __name__ == '__main__':
    sys.exit(main(sys.argv))
@ -221,17 +221,6 @@ Apache Licence.
this License.
------- end TGPPL1 licence

The files mac/fuse.py and mac/fuseparts/subbedopts.py are licensed under
the GNU Lesser General Public Licence. In addition, on 2009-09-21 Csaba
Henk granted permission for those files to be under the same terms as
Tahoe-LAFS itself.

See /usr/share/common-licenses/GPL for a copy of the GNU General Public
License, and /usr/share/common-licenses/LGPL for the GNU Lesser General Public
License.

The file src/allmydata/util/figleaf.py is licensed under the BSD licence.

------- begin BSD licence
Copyright (c) <YEAR>, <OWNER>
All rights reserved.
@ -1,2 +0,0 @@
^/home/warner/stuff/python/twisted/Twisted/
^/var/lib