import a snapshot of foolscap 0.1.2+

Zooko O'Whielacronx 2007-04-29 14:24:17 -07:00
parent a7d7f5cc96
commit 819278237b
111 changed files with 26986 additions and 0 deletions

1806
src/foolscap/ChangeLog Normal file

File diff suppressed because it is too large

10
src/foolscap/MANIFEST.in Normal file

@ -0,0 +1,10 @@
include ChangeLog MANIFEST.in NEWS
include doc/*.txt doc/*.xhtml
include doc/listings/*.py
include doc/specifications/*.xhtml
include Makefile
include misc/dapper/debian/*
include misc/edgy/debian/*
include misc/feisty/debian/*
include misc/sid/debian/*
include misc/sarge/debian/*

55
src/foolscap/Makefile Normal file

@ -0,0 +1,55 @@
.PHONY: build test debian-sid debian-dapper debian-feisty debian-sarge
.PHONY: debian-edgy

build:
	python setup.py build

TEST=foolscap
test:
	trial $(TEST)

test-figleaf:
	rm -f .figleaf
	PYTHONPATH=misc/testutils trial --reporter=bwverbose-figleaf $(TEST)

figleaf-output:
	rm -rf coverage-html
	PYTHONPATH=misc/testutils python misc/testutils/figleaf2html -d coverage-html -r .
	@echo "now point your browser at coverage-html/index.html"

debian-sid:
	rm -f debian
	ln -s misc/sid/debian debian
	chmod a+x debian/rules
	debuild -uc -us

debian-dapper:
	rm -f debian
	ln -s misc/dapper/debian debian
	chmod a+x debian/rules
	debuild -uc -us

debian-edgy:
	rm -f debian
	ln -s misc/edgy/debian debian
	chmod a+x debian/rules
	debuild -uc -us

debian-feisty:
	rm -f debian
	ln -s misc/feisty/debian debian
	chmod a+x debian/rules
	debuild -uc -us

debian-sarge:
	rm -f debian
	ln -s misc/sarge/debian debian
	chmod a+x debian/rules
	debuild -uc -us

DOC_TEMPLATE=doc/template.tpl
docs:
	lore -p --config template=$(DOC_TEMPLATE) --config ext=.html \
		`find doc -name '*.xhtml'`

576
src/foolscap/NEWS Normal file

@ -0,0 +1,576 @@
User visible changes in Foolscap (aka newpb/pb2). -*- outline -*-
* Release 0.1.2 (04 Apr 2007)
** Bugfixes
Yesterday's release had a bug in the new SetConstraint which rendered it
completely unusable. This has been fixed, along with some new tests.
** More debian packaging
Some control scripts were added to make it easier to create debian packages
for the Ubuntu 'edgy' and 'feisty' distributions.
* Release 0.1.1 (03 Apr 2007)
** Incompatibility Warning
Because of the technique used to implement callRemoteOnly() (specifically the
commandeering of reqID=0), this release is not compatible with the previous
release. The protocol negotiation version numbers have been bumped to avoid
confusion, meaning that 0.1.0 Tubs will refuse to connect to 0.1.1 Tubs, and
vice versa. Be aware that the errors reported when this occurs may not be
ideal, in particular I think the "reconnector" (tub.connectTo) might not log
this sort of connection failure in a very useful way.
** changes to Constraints
Method specifications inside RemoteInterfaces can now accept or return
'Referenceable' to indicate that they will accept a Referenceable of any
sort. Likewise, they can use something like 'RIFoo' to indicate that they
want a Referenceable or RemoteReference that implements RIFoo. Note that this
restriction does not quite nail down the directionality: in particular there
is not yet a way to specify that the method will only accept a Referenceable
and not a RemoteReference. I'm waiting to see if such a thing is actually
useful before implementing it. As an example:
  class RIUser(RemoteInterface):
      def get_age():
          return int

  class RIUserListing(RemoteInterface):
      def get_user(name=str):
          """Get the User object for a given name."""
          return RIUser
In addition, several constraints have been enhanced. StringConstraint and
ListConstraint now accept a minLength= argument, and StringConstraint also
takes a regular expression to apply to the string it inspects (the regexp can
either be passed as a string or as the output of re.compile()). There is a
new SetConstraint object, with 'SetOf' as a short alias. Some examples:
  HexIdConstraint = StringConstraint(minLength=20, maxLength=20,
                                     regexp=r'[\dA-Fa-f]+')
  class RITable(RemoteInterface):
      def get_users_by_id(id=HexIdConstraint):
          """Get a set of User objects; all will have the same ID number."""
          return SetOf(RIUser, maxLength=200)
These constraints should be imported from foolscap.schema . Once the
constraint interface is stabilized and documented, these classes will
probably be moved into foolscap/__init__.py so that you can just do 'from
foolscap import SetOf', etc.
*** UnconstrainedMethod
To disable schema checking for a specific method, use UnconstrainedMethod in
the RemoteInterface definition:
  from foolscap.remoteinterface import UnconstrainedMethod

  class RIUse(RemoteInterface):
      def set_phone_number(area_code=int, number=int):
          return bool
      set_arbitrary_data = UnconstrainedMethod
The schema-checking code will allow any sorts of arguments through to this
remote method, and allow any return value. This is like schema.Any(), but for
entire methods instead of just specific values. Obviously, using this defeats
the whole purpose of schema checking, but in some circumstances it might be
preferable to allow one or two unconstrained methods rather than leaving the
entire class unconstrained (by not declaring a
RemoteInterface at all).
*** internal schema implementation changes
Constraints underwent a massive internal refactoring in this release, to
avoid a number of messy circular imports. The new way to convert a
"shorthand" description (like 'str') into an actual constraint object (like
StringConstraint) is to adapt it to IConstraint.
In addition, all constraints were moved closer to their associated
slicer/unslicer definitions. For example, SetConstraint is defined in
foolscap.slicers.set, right next to SetSlicer and SetUnslicer. The
constraints for basic tokens (like lists and ints) live in
foolscap.constraint .
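A brief sketch of the new adaptation style (the module locations used here are
taken from the text above and should be treated as assumptions rather than a
stable API):
  from foolscap.constraint import IConstraint
  from foolscap.schema import StringConstraint

  c = IConstraint(str)        # the shorthand 'str' adapts to a constraint object
  assert isinstance(c, StringConstraint)
  assert IConstraint(c) is c  # an existing constraint adapts to itself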
** callRemoteOnly
A new "fire and forget" API was added to tell Foolscap that you want to send
a message to the remote end, but do not care when or even whether it arrives.
These messages are guaranteed to not fire an errback if the connection is
already lost (DeadReferenceError) or if the connection is lost before the
message is delivered or the response comes back (ConnectionLost). At present,
this no-error philosophy is so strong that even schema Violation exceptions
are suppressed, and the callRemoteOnly() method always returns None instead
of a Deferred. This last part might change in the future.
This is most useful for messages that are tightly coupled to the connection
itself, such that if the connection is lost, then it won't matter whether the
message was received or not. If the only state that the message modifies is
both scoped to the connection (i.e. not used anywhere else in the receiving
application) and only affects *inbound* data, then callRemoteOnly might be
useful. It may involve less error-checking code on the sender's side, and it
may involve fewer round trips (since no response will be generated when the
message is delivered).
As a contrived example, a message which informs the far end that all
subsequent messages on this connection will be sent entirely in uppercase (such
that the recipient should apply some sort of filter to them) would be
suitable for callRemoteOnly. The sender does not need to know exactly when
the message has been received, since Foolscap guarantees that all
subsequently sent messages will be delivered *after* the 'SetUpperCase'
message. And, the sender does not need to know whether the connection was
lost before or after the receipt of the message, since the establishment of a
new connection will reset this 'uppercase' flag back to some known
initial-contact state.
  rref.callRemoteOnly("set_uppercase", True) # returns None!
This method is intended to parallel the 'deliverOnly' method used in E's
CapTP protocol. It is also used (or will be used) in some internal Foolscap
messages to reduce unnecessary network traffic.
** new Slicers: builtin set/frozenset
Code has been added to allow Foolscap to handle the built-in 'set' and
'frozenset' types that were introduced in python-2.4 . The wire protocol does
not distinguish between 'set' and 'sets.Set', nor between 'frozenset' and
'sets.ImmutableSet'.
For the sake of compatibility, everything that comes out of the deserializer
uses the pre-2.4 'sets' module. Unfortunately that means that a 'set' sent
into a Foolscap connection will come back out as a 'sets.Set'. 'set' and
'sets.Set' are not entirely interoperable, and concise things like 'added =
new_things - old_things' will not work if the objects are of different types
(but note that things like 'added = new_things.difference(old_things)' *do*
work).
The current workaround is for remote methods to coerce everything to a
locally-preferred form before use. Better solutions to this are still being
sought. The most promising approach is for Foolscap to unconditionally
deserialize to the builtin types on python >= 2.4, but then an application
which works fine on 2.3 (by using sets.Set) will fail when moved to 2.4 .
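As a hedged illustration of that workaround (the class and method names here
are made up), a remote method can simply coerce its argument before using it:
  from foolscap import Referenceable

  class ThingTracker(Referenceable):
      def __init__(self):
          self.known = set()
      def remote_update(self, things):
          # 'things' may arrive as a sets.Set; coerce it to the local
          # built-in type so that set arithmetic behaves consistently
          things = set(things)
          added = things - self.known
          self.known |= things
          return added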
** Tub.stopService now indicates full connection shutdown, helping Trial tests
Like all twisted.application.service.MultiService instances, the
Tub.stopService() method returns a Deferred that indicates when shutdown has
finished. Previously, this Deferred could fire a bit early, when network
connections were still trying to deliver the last bits of data. This caused
problems with the Trial unit test framework, which insists upon having a clean
reactor between tests.
Trial test writers who use Foolscap should include the following sequence in
their twisted.trial.unittest.TestCase.tearDown() methods:
  def tearDown(self):
      from foolscap.eventual import flushEventualQueue
      d = tub.stopService()
      d.addCallback(flushEventualQueue)
      return d
This will ensure that all network activity is complete, and that all message
deliveries thus triggered have been retired. This activity includes any
outbound connections that were initiated (but not completed, or finished
negotiating), as well as any listening sockets.
The only remaining problem I've seen so far is with reactor.resolve(), which
is used to translate DNS names into addresses, and has a window during which
you can shut down the Tub and it will leave a cleanup timer lying around. The
only solution I've found is to avoid using DNS names in URLs. Of course for
real applications this does not matter: it only makes a difference in Trial
unit tests which are making heavy use of short-lived Tubs and connections.
* Release 0.1.0 (15 Mar 2007)
** usability improvements
*** Tubs now have a certFile= argument
A certFile= argument has been added to the Tub constructor to allow the Tub
to manage its own certificates. This argument provides a filename where the
Tub should read or write its certificate. If the file exists, the Tub will
read the certificate data from there. If not, the Tub will generate a new
certificate and write it to the file.
The idea is that you can point certFile= at a persistent location on disk,
perhaps in the application's configuration or preferences subdirectory, and
then not need to distinguish between the first time the Tub has been created
and later invocations. This allows the Tub's identity (derived from the
certificate) to remain stable from one invocation to the next. The related
problem of how to make (unguessable) object names persistent from one program
run to the next is still outstanding, but I expect to implement something
similar in the future (some sort of file to which object names are written
and read later).
certFile= is meant to be used somewhat like this:
  where = os.path.expanduser("~/.myapp.cert")
  t = Tub(certFile=where)
  t.registerReference(obj) # ...
*** All eventual-sends are retired on each reactor tick, not just one.
Applications which make extensive use of the eventual-send operations (in
foolscap.eventual) will probably run more smoothly now. In previous releases,
the _SimpleCallQueue class would only execute a single eventual-send call per
tick, then take care of all pending IO (and any pending timers) before
servicing the next eventual-send. This could probably lead to starvation, as
those eventual-sends might generate more work (and cause more network IO),
which could cause the event queue to grow without bound. The new approach
finishes as much eventual-send work as possible before accepting any IO. Any
new eventual-sends which are queued during the current tick will be put off
until the next tick, but everything which was queued before the current tick
will be retired in the current tick.
** bug fixes
*** Tub certificates can now be used the moment they are created
In previous releases, Tubs were only willing to accept SSL certificates that
were created before the moment of checking. If two systems A and B had
unsynchronized clocks, and a Foolscap-using application on A was run for the
first time to connect to B (thus creating a new SSL certificate), system B
might reject the certificate because it looks like it comes from the future.
This problem is endemic in systems which attempt to use the passage of time
as a form of revocation. For now at least, to resolve the practical problem
of certificates generated on demand and used by systems with unsynchronized
clocks, Foolscap does not use certificate lifetimes, and will ignore
timestamps on the certificates it examines.
* Release 0.0.7 (16 Jan 2007)
** bug fixes
*** Tubs can now connect to themselves
In previous releases, Tubs were unable to connect to themselves: the
following code would fail (the negotiation would never complete, so the
connection attempt would eventually time out after about 30 seconds):
  url = mytub.registerReference(target)
  d = mytub.getReference(url)
In release 0.0.7, this has been fixed by catching this case and making it use
a special loopback transport (which serializes all messages but does not send
them over a wire). There may still be problems with this code, in
particular connection shutdown is not completely tested and producer/consumer
code is completely untested.
*** Tubs can now getReference() the same URL multiple times
A bug was present in the RemoteReference-unslicing code which caused the
following code to fail:
  d = mytub.getReference(url)
  d.addCallback(lambda ref: mytub.getReference(url))
In particular, the second call to getReference() would return None rather
than the RemoteReference it was supposed to return.
This bug has been fixed. If the previous RemoteReference is still alive, it
will be returned by the subsequent getReference() call. If it has been
garbage-collected, a new one will be created.
*** minor fixes
Negotiation errors (such as having incompatible versions of Foolscap on
either end of the wire) may be reported more usefully.
In certain circumstances, disconnecting the Tub service from a parent service
might have caused an exception before. It might behave better now.
* Release 0.0.6 (18 Dec 2006)
** INCOMPATIBLE PROTOCOL CHANGES
Version 0.0.6 will not interoperate with versions 0.0.5 or earlier, because
of changes to the negotiation process and the method-calling portion of the
main wire protocol. (you were warned :-). There are still more incompatible
changes to come in future versions as the feature set and protocol
stabilizes. Make sure you can upgrade both ends of the wire until a protocol
freeze has been declared.
*** Negotiation versions now specify a range, instead of a single number
The two ends of a connection will agree to use the highest mutually-supported
version. This approach should make it much easier to maintain backwards
compatibility in the future.
*** Negotiation now includes an initial VOCAB table
One of the outputs of connection negotiation is the initial table of VOCAB
tokens to use for abbreviating commonly-used strings into short tokens
(usually just 2 bytes). Both ends have the ability to modify this table at any
time, but by setting the initial table during negotiation we save some
protocol traffic. VOCAB-izing common strings (like 'list' and 'dict') has
the potential to compress wire traffic by maybe 50%.
*** remote methods now accept both positional and keyword arguments
Previously you had to use a RemoteInterface specification to be able to pass
positional arguments into callRemote(). (the RemoteInterface schema was used
to convert the positional arguments into keyword arguments before sending
them over the wire). In 0.0.6 you can pass both posargs and kwargs over the
wire, and the remote end will pass them directly to the target method. When
schemas are in effect, the arguments you send will be mapped to the method's
named parameters in the same left-to-right way that python does it. This
should make it easier to port oldpb code to use Foolscap, since you don't
have to rewrite everything to use kwargs exclusively.
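For example (a sketch which reuses the set_phone_number() signature shown
earlier; the argument values are invented), both of the following calls now
deliver area_code=805 and number=1234567 to the remote method:
  rref.callRemote("set_phone_number", 805, 1234567)
  rref.callRemote("set_phone_number", 805, number=1234567)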
** Schemas now allow =None and =RIFoo
You can use 'None' in a method schema to indicate that the argument or return
value must be None. This is useful for methods that always return None. You
can also require that the argument be a RemoteReference that provides a
particular RemoteInterface. For example:
  class RIUser(RemoteInterface):
      def get_age():
          return int
      def delete():
          return None

  class RIUserDatabase(RemoteInterface):
      def get_user(username=str):
          return RIUser
Note that these remote interface specifications are parsed at import time, so
any names they refer to must be defined before they get used (hence placing
RIUserDatabase before RIUser would fail). Hopefully we'll figure out a way to
fix this in the future.
** Violations are now annotated better, might keep more stack-trace information
** Copyable improvements
The Copyable documentation has been split out to docs/copyable.xhtml and
somewhat expanded.
The new preferred Copyable usage is to have a class-level attribute named
"typeToCopy" which holds the unique string. This must match the class-level
"copytype" attribute of the corresponding RemoteCopy class. Copyable
subclasses (or ICopyable adapters) may still implement getTypeToCopy(), but
the default just returns self.typeToCopy . Most significantly, we no longer
automatically use the fully-qualified classname: instead we *require* that
the class definition include "typeToCopy". Feel free to use any stable and
globally-unique string here, like a URI in a namespace that you control, or
the fully-qualified package/module/classname of the Copyable subclass.
The RemoteCopy subclass must set the 'copytype' attribute, as it is used for
auto-registration. A subclass can set copytype=None to inhibit auto-registration.
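A minimal sketch of the pairing described above (the copytype string and class
names here are illustrative):
  from foolscap import Copyable, RemoteCopy

  class UserRecord(Copyable):
      typeToCopy = "example.org/UserRecord/v1"  # any stable, unique string

  class RemoteUserRecord(RemoteCopy):
      copytype = "example.org/UserRecord/v1"    # must match typeToCopy
      def setCopyableState(self, state):
          self.__dict__ = state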
* Release 0.0.5 (04 Nov 2006)
** add Tub.setOption, add logRemoteFailures and logLocalFailures
These options control whether we log exceptions (to the standard twisted log)
that occur on other systems in response to messages that we've sent, and that
occur on our system in response to messages that we've received
(respectively). These may be useful while developing a distributed
application. All such log messages have each line of the stack trace prefixed
by REMOTE: or LOCAL: to make it clear where the exception is happening.
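For example (assuming a Tub instance named 't'):
  t.setOption("logRemoteFailures", True) # log exceptions raised by the far end
  t.setOption("logLocalFailures", True)  # log exceptions we raise for remote callers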
** add sarge packaging, improve dependencies for sid and dapper .debs
** fix typo that prevented Reconnector from actually reconnecting
* Release 0.0.4 (26 Oct 2006)
** API Changes
*** notifyOnDisconnect() takes args/kwargs
RemoteReference.notifyOnDisconnect(), which registers a callback to be fired
when the connection to this RemoteReference is lost, now accepts args and
kwargs to be passed to the callback function. Without this, application code
needed to use inner functions or bound methods to close over any additional
state you wanted to get into the disconnect handler.
notifyOnDisconnect() returns a "marker", an opaque value that should be
passed into the corresponding dontNotifyOnDisconnect() function to deregister
the callback. (previously dontNotifyOnDisconnect just took the same argument
as notifyOnDisconnect).
For example:
  class Foo:
      def _disconnect(self, who, reason):
          print "%s left us, because of %s" % (who, reason)

      def connect(self, url, why):
          d = self.tub.getReference(url)
          def _connected(rref):
              self.rref = rref
              m = rref.notifyOnDisconnect(self._disconnect, url, reason=why)
              self.marker = m
          d.addCallback(_connected)

      def stop_caring(self):
          self.rref.dontNotifyOnDisconnect(self.marker)
*** Reconnector / Tub.connectTo()
There is a new connection API for applications that want to connect to a
target and to reconnect to it if/when that connection is lost. This is like
ReconnectingClientFactory, but at a higher layer. You give it a URL to
connect to, and a callback (plus args/kwargs) that should be called each time
a connection is established. Your callback should use notifyOnDisconnect() to
find out when it is disconnected. Reconnection attempts use exponential
backoff to limit the retry rate, and you can shut off reconnection attempts
when you no longer want to maintain a connection.
Use it something like this:
  class Foo:
      def __init__(self, tub, url):
          self.tub = tub
          self.reconnector = tub.connectTo(url, self._connected, "arg")

      def _connected(self, rref, arg):
          print "connected"
          assert arg == "arg"
          self.rref = rref
          self.rref.callRemote("hello")
          self.rref.notifyOnDisconnect(self._disconnected, "blag")

      def _disconnected(self, blag):
          print "disconnected"
          assert blag == "blag"
          self.rref = None

      def shutdown(self):
          self.reconnector.stopConnecting()
Code which uses this pattern will see "connected" events strictly interleaved
with "disconnected" events (i.e. it will never see two "connected" events in
a row, nor two "disconnected" events).
The basic idea is that each time your _connected() method is called, it
should re-initialize all your state by making method calls to the remote
side. When the connection is lost, all that state goes away (since you have
no way to know what is happening until you reconnect).
** Behavioral Changes
*** All Referenceable objects are now implicitly "giftable"
In 0.0.3, for a Referenceable to be "giftable" (i.e. usable as the payload
of an introduction), two conditions had to be satisfied. #1: the object must
be published through a Tub with Tub.registerReference(obj). #2: that Tub must
have a location set (with Tub.setLocation). Once those conditions were met,
if the object was sent over a wire from this Tub to another one, the
recipient of the corresponding RemoteReference could pass it on to a third
party. Another side effect of calling registerReference() is that the Tub
retains a strongref to the object, keeping it alive (with respect to gc)
until either the Tub is shut down or the object is explicitly de-registered
with unregisterReference().
Starting in 0.0.4, the first condition has been removed. All objects which
pass through a setLocation'ed Tub will be usable as gifts. This makes it much
more convenient to use third-party references.
Note that the Tub will *not* retain a strongref to these objects (merely a
weakref), so such objects might disappear before the recipient has had a
chance to claim them. The lifecycle of gifts is a subject of much research. The
hope is that, for reasonably punctual recipients, the gift will be kept alive
until they claim it. The whole gift/introduction mechanism is likely to
change in the near future, so this lifetime issue will be revisited in a
later release.
** Build Changes
The source tree now has some support for making debian-style packages (for
both sid and dapper). 'make debian-sid' and 'make debian-dapper' ought to
create a .deb package.
* Release 0.0.3 (05 Oct 2006)
** API Changes
The primary entry point for Foolscap is now the "Tub":
  import foolscap
  t = foolscap.Tub()
  d = t.getReference(pburl)
  d.addCallback(self.gotReference)
  ...
The old "PBService" name is gone, use "Tub" instead. There are now separate
classes for "Tub" and "UnauthenticatedTub", rather than using an "encrypted="
argument. Tubs always use encryption if available: the difference between the
two classes is whether this Tub should use a public key for its identity or
not. Note that you always need encryption to connect to an authenticated Tub.
So install pyopenssl, really.
** eventual send operators
Foolscap now provides 'eventually' and 'fireEventually', to implement the
"eventual send" operator advocated by Mark Miller's "Concurrency Among
Strangers" paper (http://www.erights.org/talks/promises/index.html).
eventually(cb, *args, **kwargs) runs the given call in a later reactor turn.
fireEventually(value=None) returns a Deferred that will be fired (with
'value') in a later turn. These behave a lot like reactor.callLater(0,..),
except that Twisted doesn't actually promise that a pair of callLater(0)s
will be fired in the right order (they usually are under unix, but
frequently are not under windows). Foolscap's eventually() *does* make this
guarantee. In addition, there is a flushEventualQueue() that is useful for
unit tests: it returns a Deferred that will only fire when the entire queue
is empty. As long as your code only uses eventually() (and not callLater(0)),
putting the following in your trial test cases should keep everything nice
and clean:
  def tearDown(self):
      return foolscap.flushEventualQueue()
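A short usage sketch of the two operators themselves (the report() helper is
made up, and the import location is the foolscap.eventual module mentioned
elsewhere in this file):
  from foolscap.eventual import eventually, fireEventually

  def report(msg):
      print "eventual:", msg

  eventually(report, "runs in a later reactor turn")
  d = fireEventually("some value")  # Deferred that fires with "some value" later
  d.addCallback(report)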
** Promises
An initial implementation of Promises is in foolscap.promise for
experimentation. Only "Near" Promises are implemented so far (promises which
resolve to a local object). Eventually Foolscap will offer "Far" Promises as
well, and you will be able to invoke remote method calls through Promises as
well as RemoteReferences. See foolscap/test/test_promise.py for some hints.
** Bug Fixes
Messages containing "Gifts" (third-party references) are now delivered in the
correct order. In previous versions, the presence of these references could
delay delivery of the containing message, causing methods to be executed out
of order.
The VOCAB-manipulating code used to have nasty race conditions, which should
be all fixed now. This would be more important if we actually used the
VOCAB-manipulating code yet, but we don't.
Lots of internal reorganization (put all slicers in a subpackage), not really
user-visible.
Updated to work with recent Twisted HEAD, specifically changes to sslverify.
This release of Foolscap ought to work with the upcoming Twisted-2.5 .
** Incompatible protocol changes
There are now separate add-vocab and set-vocab sequences, which add a single
new VOCAB token and replace the entire table, respectively. These replace the
previous 'vocab' sequence which behaved like set-vocab does now. This would
be an incompatible protocol change, except that previous versions never sent
the vocab sequence anyways. This version doesn't send either vocab-changing
sequence either, but when we finally do start using it, it'll be ready.
* Release 0.0.2 (14 Sep 2006)
Renamed to "Foolscap", extracted from underneat the Twisted packaged,
consolidated API to allow a simple 'import foolscap'. No new features or bug
fixes relative to pb2-0.0.1 .
* Release 0.0.1 (29 Apr 2006)
First release! All basic features are in place. The wire protocol will almost
certainly change at some point, so compatibility with future versions is not
guaranteed.

14
src/foolscap/PKG-INFO Normal file

@ -0,0 +1,14 @@
Metadata-Version: 1.0
Name: foolscap
Version: 0.1.2+
Summary: Foolscap contains an RPC protocol for Twisted.
Home-page: http://twistedmatrix.com/trac/wiki/FoolsCap
Author: Brian Warner
Author-email: warner@twistedmatrix.com
License: MIT
Description: Foolscap (aka newpb) is a new version of Twisted's native RPC protocol, known
as 'Perspective Broker'. This allows an object in one process to be used by
code in a distant process. This module provides data marshaling, a remote
object reference system, and a capability-based security model.
Platform: UNKNOWN

82
src/foolscap/README Normal file

@ -0,0 +1,82 @@
Foolscap
(aka newpb, aka pb2)
This is a ground-up rewrite of Perspective Broker, which itself is Twisted's
native RPC/RMI protocol (Remote Procedure Call / Remote Method Invocation).
If you have control of both ends of the wire, and are thus not constrained to
use some other protocol like HTTP/XMLRPC/CORBA/etc, you might consider using
Foolscap.
Fundamentally, Foolscap allows you to make a python object in one process
available to code in other processes, which means you can invoke its methods
remotely. This includes a data serialization layer to convey the object
graphs for the arguments and the eventual response, and an object reference
system to keep track of which objects you are connecting to. It uses a
capability-based security model, such that once you create a non-public
object, it is only accessible to clients to whom you've given the
(unguessable) PB-URL. You can of course publish world-visible objects that
have well-known PB-URLs.
Full documentation and examples are in the doc/ directory.
DEPENDENCIES:
* Python 2.4 or later
* Twisted 2.4.0 or later
* PyOpenSSL (tested against 0.6)
INSTALLATION:
To install foolscap into your system's normal python library directory, just
run the following (you will probably have to do this as root):
python setup.py install
You can also just add the foolscap source tree to your PYTHONPATH, since
there are no compile steps or .so/.dll files involved.
COMPATIBILITY:
Foolscap is still under development. The wire protocol is almost certainly
going to change in the near future, so forward compatibility between
versions is *NOT* yet guaranteed. Do not use Foolscap if you do not have
continuing control over both ends of the wire. Foolscap is not yet suitable
for widespread deployment: for production applications please continue to
use oldpb (in twisted.spread).
Foolscap has a built-in version-negotiation mechanism that allows the two
processes to determine how to best communicate with each other. The two ends
will agree upon the highest mutually-supported version for all their
traffic. If they do not have any versions in common, the connection will
fail with a NegotiationError.
Certain releases of Foolscap will remain compatible with earlier releases.
Please check the NEWS file for announcements of compatibility-breaking
changes in any given release.
NAMING:
The established version of PB that has been around for years is referred to
here as "oldpb". The new version contained in this release is known as
"Foolscap", but at various points of its development was known as "newpb" or
"pb2". The release tarballs are named "foolscap-x.y.z". The python module
name is "foolscap" . These names are still in flux. At some point in the
future, we may come up with a suitably clever and confusing name that will
replace any or all of these.
A "foolscap" is a size of paper, probably measuring 17 by 13.5 inches. A
twisted foolscap of paper makes a good fool's cap. Also, "cap" makes me
think of capabilities, and Foolscap is a protocol to implement a distributed
object-capabilities model in python.
AUTHOR:
Brian Warner is responsible for this thing. Please discuss it on the
twisted-python list.
The wiki page at <http://twistedmatrix.com/trac/wiki/FoolsCap> contains
pointers to the latest release, as well as documentation and other
resources.

236
src/foolscap/doc/copyable.xhtml Normal file

@ -0,0 +1,236 @@
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Using Pass-By-Copy in Foolscap</title>
<style src="stylesheet-unprocessed.css"></style>
</head>
<body>
<h1>Using Pass-By-Copy in Foolscap</h1>
<p>Certain objects (including subclasses of <code
class="API">foolscap.Copyable</code> and things for which an <code
class="API" base="foolscap.copyable">ICopyable</code> adapter has been
registered) are serialized using copy-by-value semantics. Each such object is
serialized as a (copytype, state) pair of values. On the receiving end, the
"copytype" is looked up in a table to find a suitable deserializer. The
"state" information is passed to this deserializer to create a new instance
that corresponds to the original. Note that the sending and receiving ends
are under no obligation to use the same class on each side: it is fairly
common for the remote form of an object to have different methods than the
original instance.</p>
<p>Copy-by-value (as opposed to copy-by-reference) means that the remote
representation of an object leads an independent existence, unconnected to
the original. Sending the same object multiple times will result in separate
independent copies. Sending the result of a pass-by-copy operation back to
the original sender will, at best, result in the sender holding two separate
objects containing similar state (and at worst will not work at all: not all
RemoteCopies are themselves Copyable).</p>
<p>More complex copy semantics can be accomplished by writing custom Slicer
code. For example, to get an object that is copied by value the first time it
traverses the wire, and then copied by reference all later times, you will
need to write a Slicer/Unslicer pair to implement this functionality.
Likewise the oldpb <code>Cacheable</code> class would need to be implemented
with a custom Slicer/Unslicer pair.</p>
<h2>Copyable</h2>
<p>The easiest way to send your own classes over the wire is to use
<code>Copyable</code>. On the sending side, this requires two things: your
class must inherit from <code class="API">foolscap.Copyable</code>, and it
must define an attribute named <code>typeToCopy</code> with a unique string.
This copytype string is shared between both sides, so it is a good idea to
use a stable and globally unique value: perhaps a URL rooted in a namespace
that you control, or a UUID, or perhaps the fully-qualified
package+module+class name of the class being serialized. Any string will do,
as long as it matches the one used on the receiving side.</p>
<p>The object being sent is asked to provide a state dictionary by calling
its <code class="API"
base="foolscap.copyable.ICopyable">getStateToCopy</code> method. The default
implementation of <code>getStateToCopy</code> will simply return
<code>self.__dict__</code>. You can override <code>getStateToCopy</code> to
control what pieces of the source object get copied to the target. In
particular, you may want to override <code>getStateToCopy</code> if there is
any portion of the object's state that should <b>not</b> be sent over the
wire: references to objects that can not or should not be serialized, or
things that are private to the application. It is common practice to create
an empty dictionary in this method and then copy items into it.</p>
<p>On the receiving side, you must register the copytype and provide a
function to deserialize the state dictionary back into an instance. For each
<code>Copyable</code> subclass you will create a corresponding <code
class="API" base="foolscap">RemoteCopy</code> subclass. There are three
requirements which must be fulfilled by this subclass:</p>
<ol>
<li><code>copytype</code>: Each <code>RemoteCopy</code> needs a
<code>copytype</code> attribute which contains the same string as the
corresponding <code>Copyable</code>'s <code>typeToCopy</code> attribute.
(metaclass magic is used to auto-register the <code>RemoteCopy</code> class
in the global copytype-to-RemoteCopy table when the class is defined. You
can also use <code class="API">foolscap.registerRemoteCopy</code> to
manually register a class).</li>
<li><code>__init__</code>: The <code>RemoteCopy</code> subclass must have
an __init__ method that takes no arguments. When the receiving side is
creating the incoming object, it starts by creating a new instance of the
correct <code>RemoteCopy</code> subclass, and at this point it has no
arguments to work with. Later, once the instance is created, it will
call <code>setCopyableState</code> to populate it.</li>
<li><code>setCopyableState</code>: Your <code>RemoteCopy</code> subclass
must define a method named <code class="API"
base="foolscap.copyable.IRemoteCopy">setCopyableState</code>. This method
will be called with the state dictionary that came out of
<code>getStateToCopy</code> on the sending side, and is expected to set any
necessary internal state.</li>
</ol>
<p>Note that <code>RemoteCopy</code> is a new-style class: if you want your
copies to be old-style classes, inherit from <code>RemoteCopyOldStyle</code>
and manually register the copytype-to-subclass mapping with
<code>registerRemoteCopy</code>.</p>
<a href="listings/copyable-send.py" class="py-listing" skipLines="2">copyable-send.py</a>
<a href="listings/copyable-receive.py" class="py-listing" skipLines="2">copyable-receive.py</a>
<h2>Registering Copiers to serialize third-party classes</h2>
<p>If you wish to serialize instances of third-party classes that are out of
your control (or you simply want to avoid subclassing), you can register a
Copier to provide serialization mechanisms for those instances.</p>
<p>There are plenty of cases where it is difficult to arrange for all of the
data you send over the wire to be in the form of <code>Copyable</code>
subclasses. For example, you might have a codebase that produces a
deeply-nested data structure that contains instances of pre-existing classes.
Those classes are written by other people, and do not happen to inherit from
<code>Copyable</code>. Without Copiers, you would have to traverse the whole
structure, locate all instances of these non-<code>Copyable</code> classes,
and wrap them in some new <code>Copyable</code> subclass. Registering a
Copier for the third-party class is much easier.</p>
<p>The <code class="API" base="foolscap.copyable">registerCopier</code>
function is used to provide a "copier" for any given class. This copier is a
function that accepts an instance of the given class, and returns a
(copytype, state) tuple. For example<span class="footnote">many thanks to
Ricky Iacovou for the xmlrpclib.DateTime example</span>, the xmlrpclib module
provides a <code>DateTime</code> class, and you might have a data structure
that includes some instances of them:</p>
<pre class="python">
import xmlrpclib
from foolscap import registerCopier
def copy_DateTime(xd):
    return ("_xmlrpclib_DateTime", {"value": xd.value})
registerCopier(xmlrpclib.DateTime, copy_DateTime)
</pre>
<p>This insures that any <code>xmlrpclib.DateTime</code> that is encountered
while serializing arguments or return values will be serialized with a
copytype of "_xmlrpclib_DateTime" and a state dictionary containing the
single "value" key. Even <code>DateTime</code> instances that appear
arbitrarily deep inside nested data structures will be serialized this way.
For example, a method argument might be a dictionary, one of its values
might be a list, and that list could contain a <code>DateTime</code>
instance.</p>
<p>To deserialize this object, the receiving side needs to register a
corresponding deserializer. <code class="API"
base="foolscap.copyable">registerRemoteCopyFactory</code> is the
receiving-side parallel to <code>registerCopier</code>. It associates a
copytype with a function that will receive a state dictionary and is expected
to return a fully-formed instance. For example:</p>
<pre class="python">
import xmlrpclib
from foolscap import registerRemoteCopyFactory
def make_DateTime(state):
    return xmlrpclib.DateTime(state["value"])
registerRemoteCopyFactory("_xmlrpclib_DateTime", make_DateTime)
</pre>
<p>Note that the "_xmlrpclib_DateTime" copytype <b>must</b> be the same for
both the copier and the RemoteCopyFactory, otherwise the receiving side will
be unable to locate the correct deserializer.</p>
<p>It is perfectly reasonable to include both of these function/registration
pairs in the same module, and import it in the code on both sides of the
wire. The examples describe the sending and receiving sides separately to
emphasize the fact that the recipient may be running completely different
code than the sender.</p>
<h2>Registering ICopyable adapters</h2>
<p>A slightly more generalized way to teach Foolscap about third-party
classes is to register an <code class="API"
base="foolscap.copyable">ICopyable</code> adapter for them, using the usual
(i.e. zope.interface) adapter-registration mechanism. The object that provides
<code>ICopyable</code> needs to implement two methods:
<code>getTypeToCopy</code> (which returns the copytype), and
<code>getStateToCopy</code>, which returns the state dictionary. Any object
which can be adapted to <code>ICopyable</code> can be serialized this
way.</p>
<p>On the receiving side, the copytype is looked up in the
<code>CopyableRegistry</code> to find a corresponding UnslicerFactory. The
<code>registerRemoteCopyUnslicerFactory</code> function accepts two
arguments: the copytype, and the unslicer factory to use. This unslicer
factory is simply a function that takes no arguments and returns a new
Unslicer. Each time an inbound message with the matching copytype is
received, this unslicer factory is invoked to create an Unslicer that will be
responsible for the single instance described in the message. This Unslicer
must implement an interface described in the <a
href="specifications/pb.html">Unslicer specifications</a>.</p>
<h2>Registering ISlicer adapters</h2>
<p>The most generalized way to serialize classes is to register a whole
<code>ISlicer</code> adapter for them. The <code>ISlicer</code> gets complete
control over serialization: it can stall the production of tokens by
implementing a <code>slice</code> method that yields Deferreds instead of
basic objects. It can also interact with other objects while the target is
being serialized. As an extreme example, if you had a service that wanted to
migrate an open HTTP connection from one process to another, the
<code>ISlicer</code> could communication with a front-end load-balancing box
to redirect the connection to the new host. In this case, the slicer could
theoretically tell the load-balancer to pause the connection and assign it a
rendezvous number, then serialize this rendezvous number as a form of "claim
check" to the target process. The <code>IUnslicer</code> on the receiving end
could open a new listening port, then use the claim check to tell the
load-balancer to direct the connection to this new port. Likewise two
services running on the same host could conspire to pass open file
descriptors over a Foolscap connection (via an auxiliary unix-domain socket)
through suitable magic in the <code>ISlicer</code> and <code>IUnslicer</code>
on each end.</p>
<p>The Slicers and Unslicers are described in more detail in the <a
href="specifications/pb.html">specifications</a>.</p>
<p>Note that a <code>Copyable</code> with a copytype of "foo" is serialized
as the following token stream: OPEN, "copyable", "foo", [state dictionary..],
CLOSE. Any <code>ISlicer</code> adapter which wishes to match what
<code>Copyable</code> does needs to include the extra "copyable" opentype
string first.</p>
<p>Also note that using a custom Slicer introduces an opportunity to violate
serialization coherency. <code>Copyable</code> and Copiers transform the
original object into a state dictionary in one swell foop, not allowing any
other code to get control (and possibly mutate the object's state). If your
custom Slicer allows other code to get control during serialization, then the
object's state might be changed, and thus the serialized state dictionary
could wind up looking pretty weird.</p>
</body></html>

41
src/foolscap/doc/listings/copyable-receive.py Normal file

@ -0,0 +1,41 @@
#! /usr/bin/python

import sys
from twisted.internet import reactor
from foolscap import RemoteCopy, Tub

# the receiving side defines the RemoteCopy
class RemoteUserRecord(RemoteCopy):
    copytype = "unique-string-UserRecord" # this matches the sender

    def __init__(self):
        # note: our __init__ must take no arguments
        pass

    def setCopyableState(self, d):
        self.name = d['name']
        self.age = d['age']
        self.shoe_size = "they wouldn't tell us"

    def display(self):
        print "Name:", self.name
        print "Age:", self.age
        print "Shoe Size:", self.shoe_size

def getRecord(rref, name):
    d = rref.callRemote("getuser", name=name)
    def _gotRecord(r):
        # r is an instance of RemoteUserRecord
        r.display()
        reactor.stop()
    d.addCallback(_gotRecord)

from foolscap import Tub
tub = Tub()
tub.startService()

d = tub.getReference(sys.argv[1])
d.addCallback(getRecord, "alice")

reactor.run()

42
src/foolscap/doc/listings/copyable-send.py Normal file

@ -0,0 +1,42 @@
#! /usr/bin/python

from twisted.internet import reactor
from foolscap import Copyable, Referenceable, Tub

# the sending side defines the Copyable
class UserRecord(Copyable):
    # this class uses the default Copyable behavior
    typeToCopy = "unique-string-UserRecord"

    def __init__(self, name, age, shoe_size):
        self.name = name
        self.age = age
        self.shoe_size = shoe_size # this is a secret

    def getStateToCopy(self):
        d = {}
        d['name'] = self.name
        d['age'] = self.age
        # don't tell anyone our shoe size
        return d

class Database(Referenceable):
    def __init__(self):
        self.users = {}
    def addUser(self, name, age, shoe_size):
        self.users[name] = UserRecord(name, age, shoe_size)
    def remote_getuser(self, name):
        return self.users[name]

db = Database()
db.addUser("alice", 34, 8)
db.addUser("bob", 25, 9)

tub = Tub()
tub.listenOn("tcp:12345")
tub.setLocation("localhost:12345")
url = tub.registerReference(db, "database")
print "the database is at:", url
tub.startService()
reactor.run()


@ -0,0 +1,31 @@
#! /usr/bin/python

from twisted.internet import reactor
from foolscap import Tub

def gotError1(why):
    print "unable to get the RemoteReference:", why
    reactor.stop()

def gotError2(why):
    print "unable to invoke the remote method:", why
    reactor.stop()

def gotReference(remote):
    print "got a RemoteReference"
    print "asking it to add 1+2"
    d = remote.callRemote("add", a=1, b=2)
    d.addCallbacks(gotAnswer, gotError2)

def gotAnswer(answer):
    print "the answer is", answer
    reactor.stop()

tub = Tub()
d = tub.getReference("pbu://localhost:12345/math-service")
d.addCallbacks(gotReference, gotError1)
tub.startService()
reactor.run()


@ -0,0 +1,20 @@
#! /usr/bin/python

from twisted.internet import reactor
from foolscap import Referenceable, UnauthenticatedTub

class MathServer(Referenceable):
    def remote_add(self, a, b):
        return a+b
    def remote_subtract(self, a, b):
        return a-b

myserver = MathServer()
tub = UnauthenticatedTub()
tub.listenOn("tcp:12345")
tub.setLocation("localhost:12345")
url = tub.registerReference(myserver, "math-service")
print "the object is available at:", url
tub.startService()
reactor.run()

36
src/foolscap/doc/listings/pb2client.py Normal file

@ -0,0 +1,36 @@
#! /usr/bin/python

import sys
from twisted.internet import reactor
from foolscap import Tub

def gotError1(why):
    print "unable to get the RemoteReference:", why
    reactor.stop()

def gotError2(why):
    print "unable to invoke the remote method:", why
    reactor.stop()

def gotReference(remote):
    print "got a RemoteReference"
    print "asking it to add 1+2"
    d = remote.callRemote("add", a=1, b=2)
    d.addCallbacks(gotAnswer, gotError2)

def gotAnswer(answer):
    print "the answer is", answer
    reactor.stop()

if len(sys.argv) < 2:
    print "Usage: pb2client.py URL"
    sys.exit(1)
url = sys.argv[1]
tub = Tub()
d = tub.getReference(url)
d.addCallbacks(gotReference, gotError1)
tub.startService()
reactor.run()

20
src/foolscap/doc/listings/pb2server.py Normal file

@ -0,0 +1,20 @@
#! /usr/bin/python

from twisted.internet import reactor
from foolscap import Referenceable, Tub

class MathServer(Referenceable):
    def remote_add(self, a, b):
        return a+b
    def remote_subtract(self, a, b):
        return a-b

myserver = MathServer()
tub = Tub(certFile="pb2server.pem")
tub.listenOn("tcp:12345")
tub.setLocation("localhost:12345")
url = tub.registerReference(myserver, "math-service")
print "the object is available at:", url
tub.startService()
reactor.run()

44
src/foolscap/doc/listings/pb3calculator.py Normal file

@ -0,0 +1,44 @@
#! /usr/bin/python

from twisted.application import service
from twisted.internet import reactor
from foolscap import Referenceable, Tub

class Calculator(Referenceable):
    def __init__(self):
        self.stack = []
        self.observers = []
    def remote_addObserver(self, observer):
        self.observers.append(observer)
    def log(self, msg):
        for o in self.observers:
            o.callRemote("event", msg=msg)
    def remote_removeObserver(self, observer):
        self.observers.remove(observer)

    def remote_push(self, num):
        self.log("push(%d)" % num)
        self.stack.append(num)
    def remote_add(self):
        self.log("add")
        arg1, arg2 = self.stack.pop(), self.stack.pop()
        self.stack.append(arg1 + arg2)
    def remote_subtract(self):
        self.log("subtract")
        arg1, arg2 = self.stack.pop(), self.stack.pop()
        self.stack.append(arg2 - arg1)
    def remote_pop(self):
        self.log("pop")
        return self.stack.pop()

tub = Tub()
tub.listenOn("tcp:12345")
tub.setLocation("localhost:12345")
url = tub.registerReference(Calculator(), "calculator")
print "the object is available at:", url

application = service.Application("pb2calculator")
tub.setServiceParent(application)

if __name__ == '__main__':
    raise RuntimeError("please run this as 'twistd -noy pb3calculator.py'")


@ -0,0 +1,34 @@
#! /usr/bin/python

import sys
from twisted.internet import reactor
from foolscap import Referenceable, Tub

class Observer(Referenceable):
    def remote_event(self, msg):
        print "event:", msg

def printResult(number):
    print "the result is", number
def gotError(err):
    print "got an error:", err

def gotRemote(remote):
    o = Observer()
    d = remote.callRemote("addObserver", observer=o)
    d.addCallback(lambda res: remote.callRemote("push", num=2))
    d.addCallback(lambda res: remote.callRemote("push", num=3))
    d.addCallback(lambda res: remote.callRemote("add"))
    d.addCallback(lambda res: remote.callRemote("pop"))
    d.addCallback(printResult)
    d.addCallback(lambda res: remote.callRemote("removeObserver", observer=o))
    d.addErrback(gotError)
    d.addCallback(lambda res: reactor.stop())
    return d

url = sys.argv[1]
tub = Tub()
d = tub.getReference(url)
d.addCallback(gotRemote)

tub.startService()
reactor.run()


@ -0,0 +1,619 @@
-*- outline -*-
Reasonably independent newpb sub-tasks that need doing. Most important come
first.
* decide on a version negotiation scheme
Should be able to telnet into a PB server and find out that it is a PB
server. Pointing a PB client at an HTTP server (or an HTTP client at a PB
server) should result in an error, not a timeout. Implement in
banana.Banana.connectionMade().
desiderata:
negotiation should take place with regular banana sequences: don't invent a
new protocol that is only used at the start of the connection
Banana should be useable one-way, for storage or high-latency RPC (the mnet
folks want to create a method call, serialize it to a string, then encrypt
and forward it on to other nodes, sometimes storing it in relays along the
way if a node is offline for a few days). It should be easy for the layer
above Banana to feed it the results of what its negotiation would have been
(if it had actually used an interactive connection to its peer). Feeding the
same results to both sides should have them proceed as if they'd agreed to
those results.
negotiation should be flexible enough to be extended but still allow old
code to talk with new code. Magically predict every conceivable extension
and provide for it from the very first release :).
There are many levels to banana, all of which could be useful targets of
negotiation:
which basic tokens are in use? Is there a BOOLEAN token? a NONE token? Can
it accept a LONGINT token or is the target limited to 32-bit integers?
are there any variations in the basic Banana protocol being used? Could the
smaller-scope OPEN-counter decision be deferred until after the first
release and handled later with a compatibility negotiation flag?
What "base" OPEN sequences are known? 'unicode'? 'boolean'? 'dict'? This is
an overlap between expressing the capabilities of the host language, the
Banana implementation, and the needs of the application. How about
'instance', probably only used for StorageBanana?
What "top-level" OPEN sequences are known? PB stuff (like 'call', and
'your-reference')? Are there any variations or versions that need to be
known? We may add new functionality in the future, it might be useful for
one end to know whether this functionality is available or not. (the PB
'call' sequence could some day take numeric argument names to convey
positional parameters, a 'reference' sequence could take a string to
indicate globally-visible PB URLs, it could become possible to pass
target.remote_foo directly to a peer and have a callable RemoteMethod object
pop out the other side).
What "application-level" sequences are available? (Which RemoteInterface
classes are known and valid in 'call' sequences? Which RemoteCopy names are
valid for targets of the 'copy' sequence?). This is not necessarily within
the realm of Banana negotiation, but applications may need to negotiate this
sort of thing, and any disagreements will be manifested when Banana starts
raising Violations, so it may be useful to include it in the Banana-level
negotiation.
On the other hand, negotiation is only useful if one side is prepared to
accommodate a peer which cannot do some of the things it would prefer to use,
or if it wants to know about the incapabilities so it can report a useful
failure rather than have an obscure protocol-level error message pop up an
hour later. So negotiation isn't the only goal: simple capability awareness
is a useful lesser goal.
It kind of makes sense for the first object of a stream to be a negotiation
blob. We could make a new 'version' opentype, and declare that the contents
will be something simple and forever-after-parseable (like a dict, with heavy
constraints on the keys and values, all strings emitted in full).
DONE, at least the framework is in place. Uses HTTP-style header-block
exchange instead of banana sequences, with client-sends-first and
server-decides. This correctly handles PB-vs-HTTP, but requires a timeout to
detect oldpb clients vs newpb servers. No actual feature negotiation is
performed yet, because we still only have the one version of the code.
* connection initiation
** define PB URLs
[newcred is the most important part of this, the URL stuff can wait]
A URL defines an endpoint: a pb.Referenceable, with methods. Somewhere along
the way it defines a transport (tcp+host+port, or unix+path) and an object
reference (pathname). It might also define a RemoteInterface, or that might
be put off until we actually invoke a method.
URL = f("pb:", host, port, pathname)
d = pb.callRemoteURL(URL, ifacename, methodname, args)
probably give an actual RemoteInterface instead of just its name
a pb.RemoteReference claims to provide access to zero-or-more
RemoteInterfaces. You may choose which one you want to use when invoking
callRemote.
TODO: decide upon a syntax for URLs that refer to non-TCP transports
pb+foo://stuff, pby://stuff (for yURL-style self-authenticating names)
TODO: write the URL parser, implementing pb.getRemoteURL and pb.callRemoteURL
DONE: use a Tub/PBService instead
TODO: decide upon a calling convention for callRemote when specifying which
RemoteInterface is being used.
DONE, PB-URL is the way to go.
** more URLs
relative URLs (those without a host part) refer to objects on the same
Broker. Absolute URLs (those with a host part) refer to objects on other
Brokers.
SKIP, interesting but not really useful
** build/port pb.login: newcred for newpb
Leave cred work for Glyph.
<thomasvs> has some enhanced PB cred stuff (challenge/response, pb.Copyable
credentials, etc).
URL = pb.parseURL("pb://lothar.com:8789/users/warner/services/petmail",
IAuthorization)
URL = doFullLogin(URL, "warner", "x8yzzy")
URL.callRemote(methodname, args)
NOTDONE
* constrain ReferenceUnslicer properly
The schema can use a ReferenceConstraint to indicate that the object must be
a RemoteReference, and can also require that the remote object be capable of
handling a particular Interface.
This needs to be implemented. slicer.ReferenceUnslicer must somehow actually
ask the constraint about the incoming tokens.
An outstanding question is "what counts". The general idea is that
RemoteReferences come over the wire as a connection-scoped ID number and an
optional list of Interface names (strings and version numbers). In this case
it is the far end which asserts that its object can implement any given
Interface, and the receiving end just checks to see if the schema-imposed
required Interface is in the list.
This becomes more interesting when applied to local objects, or if a
constraint is created which asserts that its object is *something* (maybe a
RemoteReference, maybe a RemoteCopy) which implements a given Interface. In
this case, the incoming object could be an actual instance, but the class
name must be looked up in the unjellyableRegistry (and the class located, and
the __implements__ list consulted) before any of the object's tokens are
accepted.
* security TODOs:
** size constraints on the set-vocab sequence
* implement schema.maxSize()
In newpb, schemas serve two purposes:
a) make programs safer by reducing the surprises that can appear in their
arguments (i.e. factoring out argument-checking in a useful way)
b) remove memory-consumption DoS attacks by putting an upper bound on the
memory consumed by any particular message.
Each schema has a pair of methods named maxSize() and maxDepth() which
provide this upper bound. While the schema is in effect (say, during the
receipt of a particular named argument to a remotely-invokable method), at
most X bytes and Y slicer frames will be in use before either the object is
accepted and processed or the schema notes the violation and the object is
rejected (whereupon the temporary storage is released and all further bytes
in the rejected object are simply discarded). Strictly speaking, the number
returned by maxSize() is the largest string on the wire which has not yet
been rejected as violating the constraint, but it is also a reasonable
metric to describe how much internal storage must be used while processing
it. (To achieve greater accuracy would involve knowing exactly how large
each Python type is; not a sensible thing to attempt).
The idea is that someone who is worried about an attacker throwing a really
long string or an infinitely-nested list at them can ask the schema just what
exactly their current exposure is. The tradeoff between flexibility ("accept
any object whatsoever here") and exposure to DoS attack is then user-visible
and thus user-selectable.
To implement maxSize() for a basic schema (like a string), you simply need
to look at banana.xhtml and see how basic tokens are encoded (you will also
need to look at banana.py and see how deserialization is actually
implemented). For a schema.StringConstraint(32) (which accepts strings <= 32
characters in length), the largest serialized form that has not yet been
either accepted or rejected is:
64 bytes (header indicating 0x000000..0020 with lots of leading zeros)
+ 1 byte (STRING token)
+ 32 bytes (string contents)
= 97
If the header indicates a conforming length (<=32) then just after the 32nd
byte is received, the string object is created and handed to up the stack, so
the temporary storage tops out at 97. If someone is trying to spam us with a
million-character string, the serialized form would look like:
64 bytes (header indicating 1-million in hex, with leading zeros)
+ 1 byte (STRING token)
= 65
at which point the receive parser would check the constraint, decide that
1000000 > 32, and reject the remainder of the object.
So (with the exception of pass/fail maxSize values, see below), the following
should hold true:
schema.StringConstraint(32).maxSize() == 97
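In code, the string case could be as simple as this sketch (the 64/1
overhead terms are the header and typebyte sizes worked out above; the real
method may organize things differently, and the seen= argument only matters
for the container constraints sketched later):

  class UnboundedSchema(Exception):
      pass

  class StringConstraint:
      def __init__(self, maxLength=1000):
          self.maxLength = maxLength

      def maxSize(self, seen=None):
          if self.maxLength is None:
              raise UnboundedSchema # no limit, no bound
          return 64 + 1 + self.maxLength # header + STRING token + contents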
Now, schemas which represent containers have size limits that are the sum of
their contents, plus some overhead (and a stack level) for the container
itself. For example, a list of two small integers is represented in newbanana
as:
OPEN(list)
INT
INT
CLOSE()
which really looks like:
opencount-OPEN
len-STRING-"list"
value-INT
value-INT
opencount-CLOSE
This sequence takes at most:
opencount-OPEN: 64+1
len-STRING-"list": 64+1+1000 (opentypes are confined to be <= 1k long)
value-INT: 64+1
value-INT: 64+1
opencount-CLOSE: 64+1
or 5*(64+1)+1000 = 1325, or rather:
3*(64+1)+1000 + N*(IntConstraint().maxSize())
So ListConstraint.maxSize is computed by doing some math involving the
.maxSize value of the objects that go into it (the ListConstraint.constraint
attribute). This suggests a recursive algorithm. If any constraint is
unbounded (say a ListConstraint with no limit on the length of the list),
then maxSize() raises UnboundedSchema to indicate that there is no limit on
the size of a conforming string. Clearly, if any constraint is found to
include itself, UnboundedSchema must also be raised.
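A sketch of that recursive computation (illustrative only: the overhead
terms are the OPEN/opentype/CLOSE numbers worked out above, UnboundedSchema
is the exception just described, and the 'seen' list is merely one way to
detect a self-referencing constraint):

  class ListConstraint:
      def __init__(self, constraint, maxLength=30):
          self.constraint = constraint
          self.maxLength = maxLength

      def maxSize(self, seen=None):
          if seen is None:
              seen = []
          if self in seen:
              raise UnboundedSchema # constraint includes itself
          if self.maxLength is None:
              raise UnboundedSchema # no limit on the list length
          seen.append(self)
          overhead = 3 * (64 + 1) + 1000 # OPEN, opentype string, CLOSE
          return overhead + self.maxLength * self.constraint.maxSize(seen)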
This is a loose upper bound. For example, one non-conforming input string
would be:
opencount-OPEN: 64+1
len-STRING-"x"*1000: 64+1+1000
The entire string would be accepted before checking to see which opentypes
were valid: the ListConstraint only accepts the "list" opentype and would
reject this string immediately after the 1000th "x" was received. So a
tighter upper bound would be 2*65+1000 = 1130.
In general, the bound is computed by walking through the deserialization
process and identifying the largest string that could make it past the
validity checks. There may be later checks that will reject the string, but
if it has not yet been rejected, then it still represents exposure for a
memory consumption DoS.
** pass/fail sizes
I started to think that it was necessary to have each constraint provide two
maxSize numbers: one of the largest sequence that could possibly be accepted
as valid, and a second which was the largest sequence that could be still
undecided. This would provide a more accurate upper bound because most
containers will respond to an invalid object by abandoning the rest of the
container: i.e. if the current active constraint is:
ListConstraint(StringConstraint(32), maxLength=30)
then the first thing that doesn't match the string constraint (say an
instance, or a number, or a 33-character string) will cause the ListUnslicer
to go into discard-everything mode. This makes a significant difference when
the per-item constraint allows opentypes, because the OPEN type (a string) is
constrained to 1k bytes. The item constraint probably imposes a much smaller
limit on the set of actual strings that would be accepted, so no
kilobyte-long opentype will possibly make it past that constraint. That means
there can only be one outstanding invalid object. So the worst case (maximal
length) string that has not yet been rejected would be something like:
OPEN(list)
validthing [0]
validthing [1]
...
validthing [n-1]
long-invalid-thing
because if the long-invalid thing had been received earlier, the entire list
would have been abandoned.
This suggests that the calculation for ListConstraint.maxSize() really needs
to be like
overhead
+(len-1)*itemConstraint.maxSize(valid)
+(1)*itemConstraint.maxSize(invalid)
I'm still not sure about this. I think it provides a significantly tighter
upper bound. The deserialization process itself does not try to achieve the
absolute minimal exposure (i.e., the opentype checker could take the set of
all known-valid open types, compute the maximum length, and then impose a
StringConstraint with that length instead of 1000), because it is, in
general, an inefficient hassle. There is a tradeoff between computational
efficiency and removing the slack in the maxSize bound, both in the
deserialization process (where the memory is actually consumed) and in
maxSize (where we estimate how much memory could be consumed).
Anyway, maxSize() and maxDepth() (which is easier: containers add 1 to the
maximum of the maxDepth values of their possible children) need to be
implemented for all the Constraint classes. There are some tests (disabled)
in test_schema.py for this code: those tests assert specific values for
maxSize. Those values are probably wrong, so they must be updated to match
however maxSize actually works.
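maxDepth() is the easier half; a container sketch (same caveats as the
maxSize sketches above) might be just:

  class TupleConstraint:
      def __init__(self, *elemConstraints):
          self.constraints = list(elemConstraints)

      def maxDepth(self, seen=None):
          if seen is None:
              seen = []
          if self in seen:
              raise UnboundedSchema
          seen.append(self)
          # the container adds one slicer frame on top of its deepest child
          childDepths = [c.maxDepth(seen) for c in self.constraints]
          return 1 + max(childDepths or [0])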
* decide upon what the "Shared" constraint should mean
The idea of this one was to avoid some vulnerabilities by rejecting arbitrary
object graphs. Fundamentally Banana can represent most anything (just like
pickle), including objects that refer to each other in exciting loops and
whorls. There are two problems with this: it is hard to enforce a schema that
allows cycles in the object graph (indeed it is tricky to even describe one),
and the shared references could be used to temporarily violate a schema.
I think these might be fixable (the sample case is where one tuple is
referenced in two different places, each with a different constraint, but the
tuple is incomplete until some higher-level node in the graph has become
referenceable, so [maybe] the schema can't be enforced until somewhat after
the object has actually finished arriving).
However, Banana is aimed at two different use-cases. One is kind of a
replacement for pickle, where the goal is to allow arbitrary object graphs to
be serialized but have more control over the process (in particular we still
have an unjellyableRegistry to prevent arbitrary constructors from being
executed during deserialization). In this mode, a larger set of Unslicers are
available (for modules, bound methods, etc), and schemas may still be useful
but are not enforced by default.
PB will use the other mode, where the set of conveyable objects is much
smaller, and security is the primary goal (including putting limits on
resource consumption). Schemas are enforced by default, and all constraints
default to sensible size limits (strings to 1k, lists to [currently] 30
items). Because complex object graphs are not commonly transported across
process boundaries, the default is to not allow any Copyable object to be
referenced multiple times in the same serialization stream. The default is to
reject both cycles and shared references in the object graph, allowing only
strict trees, making life easier (and safer) for the remote methods which are
being given this object tree.
The "Shared" constraint is intended as a way to turn off this default
strictness and allow the object to be referenced multiple times. The
outstanding question is what this should really mean: must it be marked as
such in all places where it could be referenced, what is the scope of the
multiple-reference region (per-method-call, per-connection?), and finally
what should be done when the limit is violated. Currently Unslicers see an
Error object which they can respond to any way they please: the default
containers abandon the rest of their contents and hand an Error to their
parent, the MethodCallUnslicer returns an exception to the caller, etc. With
shared references, the first recipient sees a valid object, while the second
and later recipient sees an error.
* figure out Deferred errors for immutable containers
Somewhat related to the previous one. The now-classic example of an immutable
container which cannot be created right away is the object created by this
sequence:
t = ([],)
t[0].append((t,))
This serializes into (with implicit reference numbers on the left):
[0] OPEN(tuple)
[1] OPEN(list)
[2] OPEN(tuple)
[3] OPEN(reference #0)
CLOSE
CLOSE
CLOSE
In newbanana, the second TupleUnslicer cannot return a fully-formed tuple to
its parent (the ListUnslicer), because that tuple cannot be created until the
contents are all referenceable, and that cannot happen until the first
TupleUnslicer has completed. So the second TupleUnslicer returns a Deferred
instead of a tuple, and the ListUnslicer adds a callback which updates the
list's item when the tuple is complete.
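In receiveChild() terms the pattern being described is roughly this
(hypothetical sketch, not the actual unslicer code):

  from twisted.internet.defer import Deferred

  class ListUnslicer:
      def __init__(self):
          self.list = []

      def receiveChild(self, obj):
          if isinstance(obj, Deferred):
              # the child is not complete yet: store a placeholder and
              # patch it in place once the real object exists
              index = len(self.list)
              self.list.append(obj)
              obj.addCallback(self.update, index)
          else:
              self.list.append(obj)

      def update(self, obj, index):
          self.list[index] = obj
          return obj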
The problem here is that of error handling. In general, if an exception is
raised (perhaps a protocol error, perhaps a schema violation) while an
Unslicer is active, that Unslicer is abandoned (all its remaining tokens are
discarded) and the parent gets an Error object. (the parent may give up too..
the basic Unslicers all behave this way, so any exception will cause
everything up to the RootUnslicer to go boom, and the RootUnslicer has the
option of dropping the connection altogether). When the error is noticed, the
Unslicer stack is queried to figure out what path was taken from the root of
the object graph to the site that had an error. This is really useful when
trying to figure out which exact object caused a SchemaViolation: rather than
being told a call trace or a description of the *object* which had a problem,
you get a description of the path to that object (the same series of
dereferences you'd use to print the object: obj.children[12].peer.foo.bar).
When references are allowed, these exceptions could occur after the original
object has been received, when that Deferred fires. There are two problems:
one is that the error path is now misleading, the other is that it might not
have been possible to enforce a schema because the object was incomplete.
The most important thing is to make sure that an exception that occurs while
the Deferred is being fired is caught properly and flunks the object just as
if the problem were caught synchronously. This may involve discarding an
otherwise complete object graph and blaming the problem on a node much closer
to the root than the one which really caused the failure.
* adaptive VOCAB compression
We want to let banana figure out a good set of strings to compress on its
own. In Banana.sendToken, keep a list of the last N strings that had to be
sent in full (i.e. they weren't in the table). If the string being sent
appears more than M times in that table, before we send the token, emit an
ADDVOCAB sequence, add a vocab entry for it, then send a numeric VOCAB token
instead of the string.
Make sure the vocab mapping is not used until the ADDVOCAB sequence has been
queued. Sending it inline should take care of this, but if for some reason we
need to push it on the top-level object queue, we need to make sure the vocab
table is not updated until it gets serialized. Queuing a VocabUpdate object,
which updates the table when it gets serialized, would take care of this. The
advantage of doing it inline is that later strings in the same object graph
would benefit from the mapping. The disadvantage is that the receiving
Unslicers must be prepared to deal with ADDVOCAB sequences at any time (so
really they have to be stripped out). This disadvantage goes away if ADDVOCAB
is a token instead of a sequence.
Reasonable starting values for N and M might be 30 and 3.
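A sketch of the bookkeeping (names are invented; the real code lives in
Banana.sendToken and must also emit the ADDVOCAB sequence itself):

  RECENT_STRINGS = 30 # N: how many full-string sends to remember
  PROMOTE_AFTER = 3 # M: how many repeats justify a vocab entry

  class VocabTracker:
      def __init__(self):
          self.recent = [] # last N strings that went out in full
          self.vocab = {} # string -> vocab index
          self.nextIndex = 0

      def lookup(self, s):
          """Return a vocab index for s, or None to send it in full.

          The first time a new index is returned, the caller must emit an
          ADDVOCAB sequence before using the numeric VOCAB token."""
          if s in self.vocab:
              return self.vocab[s]
          self.recent.append(s)
          if len(self.recent) > RECENT_STRINGS:
              self.recent.pop(0)
          if self.recent.count(s) > PROMOTE_AFTER:
              index = self.nextIndex
              self.nextIndex += 1
              self.vocab[s] = index
              return index
          return None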
* write oldbanana compatibility code?
An oldbanana peer can be detected because the server side sends its dialect
list from connectionMade, and oldbanana lists are sent with OLDLIST tokens
(the explicit-length kind).
* add .describe methods to all Slicers
This involves setting an attribute between each yield call, to indicate what
part is about to be serialized.
* serialize remotely-callable methods?
It might be useful to be able to do something like:
class Watcher(pb.Referenceable):
    def remote_foo(self, args): blah
w = Watcher()
ref.callRemote("subscribe", w.remote_foo)
That would involve looking up the method and its parent object, reversing
the remote_*->* transformation, then sending a sequence which contained both
the object's RemoteReference and the appropriate method name.
It might also be useful to generalize this: passing a lambda expression to
the remote end could stash the callable in a local table and send a Callable
Reference to the other side. I can smell a good general-purpose object
classification framework here, but I haven't quite been able to nail it down
exactly.
* testing
** finish testing of LONGINT/LONGNEG
test_banana.InboundByteStream.testConstrainedInt needs implementation
** thoroughly test failure-handling at all points of in/out serialization
places where BananaError or Violation might be raised
sending side:
Slicer creation (schema pre-validation? no): no no
pre-validation is done before sending the object, Broker.callFinished,
RemoteReference.doCall
slicer creation is done in newSlicerFor
.slice (called in pushSlicer) ?
.slice.next raising Violation
.slice.next returning Deferrable when streaming isn't allowed
.sendToken (non-primitive token, can't happen)
.newSlicerFor (no ISlicer adapter)
top.childAborted
receiving side:
long header (>64 bytes)
checkToken (top.openerCheckToken)
checkToken (top.checkToken)
typebyte == LIST (oldbanana)
bad VOCAB key
too-long vocab key
bad FLOAT encoding
top.receiveClose
top.finish
top.reportViolation
oldtop.finish (in from handleViolation)
top.doOpen
top.start
plus all of these when discardCount != 0
OPENOPEN
send-side uses:
f = top.reportViolation(f)
receive-side should use it too (instead of f.raiseException)
** test failure-handing during callRemote argument serialization
** implement/test some streaming Slicers
** test producer Banana
* profiling/optimization
Several areas where I suspect performance issues but am unwilling to fix
them before having proof that there is a problem:
** Banana.produce
This is the main loop which creates outbound tokens. It is called once at
connectionMade() (after version negotiation) and thereafter is fired as the
result of a Deferred whose callback is triggered by a new item being pushed
on the output queue. It runs until the output queue is empty, or the
production process is paused (by a consumer who is full), or streaming is
enabled and one of the Slicers wants to pause.
Each pass through the loop pushes a single token into the transport,
resulting in a number of short writes. We can do better than this by telling
the transport to buffer the individual writes and calling a flush() method
when we leave the loop. I think Itamar's new cprotocol work provides this
sort of hook, but it would be nice if there were a generalized Transport
interface so that Protocols could promise their transports that they will
use flush() when they've stopped writing for a little while.
Also, I want to be able to move produce() into C code. This means defining a
CSlicer in addition to the cprotocol stuff described above. The goal is to be
able to
slice a large tree of basic objects (lists, tuples, dicts, strings) without
surfacing into Python code at all, only coming "up for air" when we hit an
object type that we don't recognize as having a CSlicer available.
** Banana.handleData
The receive-tokenization process wants to be moved into C code. It's
definitely on the critical path, but it's ugly because it has to keep
calling into python code to handle each extracted token. Maybe there is a
way to have fast C code peek through the incoming buffers for token
boundaries, then give a list of offsets and lengths to the python code. The
b128 conversion should also happen in C. The data shouldn't be pulled out of
the input buffer until we've decided to accept it (i.e. the
memory-consumption guarantees that the schemas provide do not take any
transport-level buffering into account, and doing cprotocol tokenization
would represent memory that an attacker can make us spend without triggering
a schema violation). Itamar's CLineReceiver is a good example: you tokenize
a big buffer as much as you can, pass the tokens upstairs to Python code,
then hand the leftover tail to the next read() call. The tokenizer always
works on the concatenation of two buffers: the tail of the previous read()
and the complete contents of the current one.
** Unslicer.doOpen delegation
Unslicers form a stack, and each Unslicer gets to exert control over the way
that its descendents are deserialized. Most don't bother, they just delegate
the control methods up to the RootUnslicer. For example, doOpen() takes an
opentype and may return a new Unslicer to handle the new OPEN sequence. Most
of the time, each Unslicer delegates doOpen() to their parent, all the way
up the stack to the RootUnslicer who actually performs the UnslicerRegistry
lookup.
This provides an optimization point. In general, the Unslicer knows ahead of
time whether it cares to be involved in these methods or not (i.e. whether
it wants to pay attention to its children/descendants or not). So instead of
delegating all the time, we could just have a separate Opener stack.
Unslicers that care would be pushed on the Opener stack at the same time
they are pushed on the regular unslicer stack, likewise removed. The
doOpen() method would only be invoked on the top-most Opener, removing a lot
of method calls. (I think the math is something like turning
avg(treedepth)*avg(nodes) into avg(nodes)).
There are some other methods that are delegated in this way. open() is
related to doOpen(). setObject()/getObject() keep track of references to
shared objects and are typically only intercepted by a second-level object
which defines a "serialization scope" (like a single remote method call), as
well as connection-wide references (like pb.Referenceables) tracked by the
PBRootUnslicer. These would also be targets for optimization.
The fundamental reason for this optimization is that most Unslicers don't
care about these methods. There are far more uses of doOpen() (one per
object node) than there are changes to the desired behavior of doOpen().
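A sketch of the opener-stack idea (method names invented for illustration):

  class Banana:
      def __init__(self, rootUnslicer):
          self.unslicerStack = [rootUnslicer]
          self.openerStack = [rootUnslicer] # only Unslicers that care

      def pushUnslicer(self, u, caresAboutOpens=False):
          self.unslicerStack.append(u)
          if caresAboutOpens:
              self.openerStack.append(u)

      def popUnslicer(self):
          u = self.unslicerStack.pop()
          if self.openerStack[-1] is u:
              self.openerStack.pop()

      def doOpen(self, opentype):
          # one call to the top-most interested Unslicer, instead of a
          # delegation chain running all the way up to the RootUnslicer
          return self.openerStack[-1].doOpen(opentype)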
** CUnslicer
Like CSlicer, the unslicing process wants to be able to be implemented (for
built-in objects) entirely in C. This means a CUnslicer "object" (a struct
full of function pointers), a table accessible from C that maps opentypes to
both CUnslicers and regular python-based Unslicers, and a CProtocol
tokenization code fed by a CTransport. It should be possible for the
python->C transition to occur in the reactor when it calls ctransport.doRead
and then not come back up to Python until Banana.receivedObject(),
at least for built-in types like dicts and strings.

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1,474 @@
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Foolscap Schemas</title>
<link href="stylesheet-unprocessed.css" type="text/css" rel="stylesheet" />
</head>
<body>
<h1>Foolscap Schemas</h1>
<em>NOTE! This is all preliminary and is more an exercise in semiconscious
protocol design than anything else. Do not believe this document. This
sentence is lying. So there.</em>
<h3>Existing <code>Constraint</code> classes</h3>
<table border="" width="">
<tr><td>class name</td><td>shortcut</td><td></td></tr>
<tr>
<td><code>Any()</code></td>
<td></td>
<td>accept anything</td>
</tr>
<tr>
<td><code>StringConstraint(maxLength=1000)</code></td>
<td><code>str</code></td>
<td>string of up to maxLength characters (maxLength=None means
unlimited), or a VOCAB sequence of any length</td>
</tr>
<tr>
<td><code>IntegerConstraint(maxBytes=-1)</code></td>
<td><code>int</code></td>
<td>integer. maxBytes=-1 means s_int32_t, =N means LONGINT which can be
expressed in N or fewer bytes (i.e. abs(num) &lt; 2**(8*maxBytes)),
=None means unlimited. NOTE: shortcut 'long' is like shortcut 'int' but
maxBytes=1024.</td>
</tr>
<tr>
<td><code>NumberConstraint(maxBytes=1024)</code></td>
<td><code>float</code></td>
<td>integer or float. Integers are limited by maxBytes as in
<code>IntegerConstraint</code>, floats are fixed size.</td>
</tr>
<tr>
<td><code>BooleanConstraint(value=None)</code></td>
<td><code>bool</code></td>
<td>True or False. If value=True, only accepts True. If value=False,
only accepts False. NOTE: value= is a very silly parameter.</td>
</tr>
<tr>
<td><code>InterfaceConstraint(iface)</code></td>
<td>Interface</td>
<td>TODO. UNSAFE. Accepts an instance which claims to implement the given
Interface. The shortcut is simply any Interface subclass.</td>
</tr>
<tr>
<td><code>ClassConstraint</code></td>
<td>Class</td>
<td>TODO. UNSAFE. Accepts an instance which claims to be of the given
class name. The shortcut is simply any (old-style) class object.</td>
</tr>
<tr>
<td><code>PolyConstraint(*alternatives)</code></td>
<td>(alt1, alt2)</td>
<td>Accepts any object which obeys at least one of the alternative
constraints provided. Implements a logical OR function of the given
constraints. Also known as <code>ChoiceOf</code>.</td>
</tr>
<tr>
<td><code>TupleConstraint(*elemConstraints)</code></td>
<td></td>
<td>Accepts a tuple of fixed length with elements that obey the given
constraints. Also known as <code>TupleOf</code>.</td>
</tr>
<tr>
<td><code>ListConstraint(elemConstraint, maxLength=30)</code></td>
<td></td>
<td>Accepts a list of up to maxLength items, each of which obeys the
element constraint provided. Also known as <code>ListOf</code>.</td>
</tr>
<tr>
<td><code>DictConstraint(keyConstraint, valueConstraint, maxKeys=30)</code></td>
<td></td>
<td>Accepts a dictionary of up to maxKeys items. Each key must obey
keyConstraint and each value must obey valueConstraint. Also known
as <code>DictOf</code>.</td>
</tr>
<tr>
<td><code>AttributeDictConstraint(*attrTuples, **kwargs)</code></td>
<td></td>
<td>Constrains dictionaries used to describe instance attributes, as used
by RemoteCopy. Each attrTuple is a pair of (attrname, constraint), used
to constrain individual named attributes. kwargs['attributes']
provides the same control. kwargs['ignoreUnknown'] is a boolean flag
which indicates that unknown attributes in inbound state should simply
be dropped. kwargs['acceptUnknown'] indicates that unknown attributes
should be accepted into the instance state dictionary.</td>
</tr>
<tr>
<td><code>RemoteMethodSchema(method=None, _response=None,
__options=[], **kwargs)</code></td>
<td></td>
<td>Constrains arguments and return value of a single remotely-invokable
method. If method= is provided, the <code>inspect</code> module is used
to extract constraints from the method itself (positional arguments are
not allowed, default values of keyword arguments provide constraints for
each argument, the results of running the method provide the return value
constraint). If not, most kwargs items provide constraints for method
arguments, and _response provides a constraint for the return value.
__options and additional kwargs keys provide neato whiz-bang future
expansion possibilities.</td>
</tr>
<tr>
<td><code>Shared(constraint, refLimit=None)</code></td>
<td></td>
<td>TODO. Allows objects with refcounts no greater than refLimit (=None
means unlimited). Wraps another constraint, which the object must obey.
refLimit=1 rejects shared objects.</td>
</tr>
<tr>
<td><code>Optional(constraint, default)</code></td>
<td></td>
<td>TODO. Can be used to tag Copyable attributes or (maybe) method
arguments. Wraps another constraint. If an object is provided, it must
obey the constraint. If not provided, the default value will be given in
its place.</td>
</tr>
<tr>
<td><code>FailureConstraint()</code></td>
<td></td>
<td>Constrains the contents of a CopiedFailure.</td>
</tr>
</table>
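<p>For a rough sense of how these constraints fit into a
<code>RemoteInterface</code>, here is a sketch (illustrative only: the
interface name and methods are made up, and the exact import locations of the
shortcut classes are an assumption):</p>

<pre class="python">
from foolscap import RemoteInterface
from foolscap.schema import StringConstraint, DictOf, ChoiceOf

class RIFileStore(RemoteInterface):
    def list_files(prefix=StringConstraint(100)):
        """Return a mapping from filename to size in bytes."""
        return DictOf(str, int, maxKeys=1000)
    def put(name=str, data=StringConstraint(2**16)):
        return ChoiceOf(bool, str)
</pre>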
<pre>
"""
RemoteReference objects should all be tagged with interfaces that they
implement, which point to representations of the method schemas. When a
remote method is called, Foolscap should look up the appropriate method and
serialize the argument list accordingly.
We plan to eliminate positional arguments, so local RemoteReferences use
their schema to convert callRemote calls with positional arguments to
all-keyword arguments before serialization.
Conversion to the appropriate version interface should be handled at the
application level. Eventually, with careful use of twisted.python.context,
we might be able to provide automated tools for helping application authors
automatically convert interface calls and isolate version-conversion code,
but that is probably pretty hard.
"""
class Attributes:
    def __init__(self, *a, **k):
        pass

X = Attributes(
    ('hello', str),
    ('goodbye', int),
    # Allow the possibility of multiple or circular references. The default
    # is to make multiple copies to avoid making the serializer do extra
    # work.
    ('next', Shared(Narf)),
    ('asdf', ListOf(Narf, maxLength=30)),
    ('fdsa', (Narf, String(maxLength=5), int)),
    ('qqqq', DictOf(str, Narf, maxKeys=30)),
    ('larg', AttributeDict(('A', int),
                           ('X', Number),
                           ('Z', float))),
    Optional("foo", str),
    Optional("bar", str, default=None),
    ignoreUnknown=True,
    )

X = Attributes(
    attributes={'hello': str,  # this form doesn't allow Optional()
                'goodbye': int,
                },
    Optional("foo", str),  # but both can be used at once
    ignoreUnknown=True)
class Narf(Remoteable):
    # step 1
    __schema__ = X

    # step 2 (possibly - this loses information)
    class schema:
        hello = str
        goodbye = int
        class add:
            x = Number
            y = Number
            __return__ = Copy(Number)
        class getRemoteThingy:
            fooID = Arg(WhateverID, default=None)
            barID = Arg(WhateverID, default=None)
            __return__ = Reference(Narf)

    # step 3 - this is the only example that shows argument order, which we
    # _do_ need in order to translate positional arguments to callRemote, so
    # don't take the nested-classes example too seriously.
    schema = """
    int add (int a, int b)
    """

    # Since the above schema could also be used for Formless, or possibly for
    # World (for state) we can also do:
    class remote_schema:
        """blah blah
        """

    # You could even subclass that from the other one...
    class remote_schema(schema):
        dontUse = 'hello', 'goodbye'

    def remote_add(self, x, y):
        return x + y

    def rejuvinate(self, deadPlayer):
        return Reference(deadPlayer.bringToLife())

    # "Remoteable" is a new concept - objects which may be method-published
    # remotely _or_ copied remotely. The schema objects support both method
    # / interface definitions and state definitions, so which one gets used
    # can be defined by the sending side as to whether it sends a
    # Copy(theRemoteable) or Reference(theRemoteable)

    # (also, with methods that are explicitly published by a schema there is
    # no longer technically any security need for the remote_ prefix, which,
    # based on past experience can be kind of annoying if you want to
    # provide the same methods locally and remotely)

    # outstanding design choice - Referenceable and Copyable are subclasses
    # of Remoteable, but should they restrict the possibility of sending it
    # the other way or

    def getPlayerInfo(self, playerID):
        return CopyOf(self.players[playerID])

    def getRemoteThingy(self, fooID, barID):
        return ReferenceTo(self.players[selfPlayerID])
class RemoteNarf(Remoted):
    __schema__ = X
    # or, example of a difference between local and remote
    class schema:
        class getRemoteThingy:
            __return__ = Reference(RemoteNarf)
        class movementUpdate:
            posX = int
            posY = int
            # No return value
            __return__ = None
            # Don't wait for the answer
            __wait__ = False
            # Feel free to send this over UDP
            __reliable__ = False
            # but send in order!
            __ordered__ = True
            # use priority queue / stream 3
            __priority__ = 3
            # allow full serialization of failures
            __failure__ = FullFailure
            # default: trivial failures, or str or int
            __failure__ = ErrorMessage
# These options may imply different method names - e.g. '__wait__ =
# False' might imply that you can't use callRemote, you have to
# call 'sendRemote' instead... __reliable__ = False might be
# 'callRemoteUnreliable' (longer method name to make it less
# convenient to call by accident...)
## (and yes, donovan, we know that TypedInterface exists and we are going to
## use it. we're just screwing around with other syntaxes to see what about PB
## might be different.)
Common banana sequences:
A reference to a remote object.
(On the sending side: Referenceable or ReferenceTo(aRemoteable)
On the receiving side: RemoteReference)
('remote', reference-id, interface, version, interface, version, ...)
A call to a remote method:
('fastcall', request-id, reference-id,
method-name, 'arg-name', arg1, 'arg-name', arg2)
A call to a remote method with extra spicy metadata:
('call', request-id, reference-id, interface,
version, method-name, 'arg-name', arg1, 'arg-name', arg2)
Special hack: request-id of 0 means 'do not answer this call, do not
acknowledge', etc.
Answer to a method call:
('answer', request-id, response)
('error', request-id, response)
Decrement a reference incremented by 'remote' command:
('decref', reference-id)
Broker currently has 9 proto_ methods:
_version(vnum): accept a version number, compare to ours, reject if different
_didNotUnderstand(command): log command, maybe drop connection
_message(reqID, objID, message, answerRequired, netArgs, netKw):
_cachemessage (like _message but finds objID with self.cachedLocallyAs instead
of self.localObjectForID, used by RemoteCacheMethod and
RemoteCacheObserver)
look up objID, invoke it with .remoteMessageReceived(message, args),
send "answer"(reqID, results)
_answer(reqID, results): look up self.waitingForAnswers[reqID] and fire
callback with results
_error(reqID, failure): lookup waitingForAnswers, fire errback
_decref(objID): dec refcount of self.localObjects[objID]. Sent in
RemoteReference.__del__
_decache(objID): dec refcount of self.remotelyCachedObjects[objID]
_uncache(objID): remove obj from self.locallyCachedObjects[objID]
</pre>
<h2>stuff</h2>
<p>A RemoteReference/RemoteCopy (called a Remote for now) has a schema
attached to it. remote.callRemote(methodname, *args) does
schema.getMethodSchema(methodname) to obtain a MethodConstraint that
describes the individual method. This MethodConstraint (or MethodSchema) has
other attributes which are used by either end: what arguments are allowed
and/or expected, calling conventions (synchronous, in-order, priority, etc),
and how the return value should be constrained.</p>
<p>To use the Remote like a RemoteCopy ...</p>
<pre>
Remote:
.methods
.attributes
.getMethodSchema(methodname) -> MethodConstraint
.getAttributeSchema(attrname) -> a Constraint
XPCOM idl specifies methods and attributes (readonly, readwrite). A Remote
object which represented a distant XPCOM object would have a Schema that is
created by parsing the IDL. Its callRemote would do the appropriate
marshalling. Issue1: XPCOM lets methods have in/out/inout parameters.. these
must be detected and a wrapper generated. Issue2: what about attribute
set/get operations? Could add setRemote and getRemote for these.
---
Some of the schema questions come down to how PBRootSlicer should deal with
instances. The question is whether to treat the instance as a Referenceable
(copy-by-reference: create and transmit a reference number, which will be
turned into a RemoteReference on the other end), or as a Copyable
(copy-by-value: collect some instance state and send it as an instance).
This decision could be made by looking at what the instance inherits from:
if isinstance(obj, pb.Referenceable):
    sendByReference(obj)
elif isinstance(obj, pb.Copyable):
    sendByValue(obj)
else:
    raise InsecureJellyError
or by what it can be adapted to:
r = IReferenceable(obj, None)
if r:
    sendByReference(r)
else:
    r = ICopyable(obj, None)
    if r:
        sendByValue(r)
    else:
        raise InsecureJellyError
The decision could also be influenced by the sending schema currently in
effect. Side A invokes a method on side B. A knows of a schema which states
that the 'foo' argument of this method should be a CopyableSpam, so it
requires the object be adaptable to ICopyableSpam (which is then copied by
value) tries to comply when that argument is serialized. B will enforce its
own schema. When B returns a result to A, the method-result schema on B's
side can influence how the return value is handled.
For bonus points, it may be possible to send the object as a combination of
these two. That may get pretty hard to follow, though.
</pre>
<h2>adapters and Referenceable/Copyable</h2>
<p>One possibility: rather than using a SlicerRegistry, use Adapters. The
ISliceable interface has one method: getSlicer(). Slicer.py would register
adapters for basic types (list, dict, etc) that would just return an
appropriate ListSlicer, etc. Instances which would have been pb.Copyable
subclasses in oldpb can still inherit from pb.Copyable, which now implements
ISliceable and produces a Slicer (opentype='instance') that calls
getStateToCopy() (although the subclass-__implements__ handling is now more
important than before). pb.Referenceable implements ISlicer and produces a
Slicer (opentype='reference'?) which (possibly) registers itself in the
broker and then sends the reference number (along with a schema if necessary
(and the other end wants them)).</p>
<p>Classes are also welcome to implement ISlicer themselves and produce
whatever clever (streaming?) Slicer objects they like.</p>
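<p>A sketch of the shape of that idea, using a plain registry in place of
real adapter machinery (all of the names below are placeholders, not the
foolscap API):</p>

<pre class="python">
class ListSlicer:
    opentype = "list"
    def __init__(self, obj):
        self.obj = obj
    def slice(self):
        for item in self.obj:
            yield item

# Slicer.py would register one adapter per basic type; real code would use
# interface adaptation (something like ISliceable(obj).getSlicer()) rather
# than a type-keyed dict.
SLICER_REGISTRY = {list: ListSlicer}

def getSlicer(obj):
    try:
        return SLICER_REGISTRY[type(obj)](obj)
    except KeyError:
        raise KeyError("no Slicer registered for %r" % type(obj))
</pre>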
<p>On the receiving side, we still need a registry to provide reasonable
security. There are two registries. The first is the
RootUnslicer.openRegistry, and maps OPEN types to Unslicer factories. It is
used in doOpen().</p>
<p>The second registry should map opentype=instance class names to something
which can handle the instance contents. Should this be a replacement
Unslicer?</p>
</body> </html>

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1,115 @@
This file contains a collection of pb wishlist items, things it would be nice
to have in newpb.
* Log in, specifying desired interfaces
The server can provide several different interfaces, each of which inherit from
pb.IPerspective. The client can specify which of these interfaces that it
desires.
This change requires Jellyable interfaces, which in turn requires being able to
"register jelliers sanely" (exarkun 2004-05-10).
An example, in oldpb lingo:
# client
factory = PBClientFactory()
reactor.connectTCP('localhost', portNum, factory)
d = factory.login(creds, mind, IBusiness) # <-- API change
d.addCallbacks(connected, disaster)
# server
class IBusiness(pb.IPerspective):
    def perspective_foo(self, bar):
        "Does something"

class Business(pb.Avatar):
    __implements__ = (IBusinessInterface, pb.Avatar.__implements__)
    def perspective_foo(self, bar):
        return bar

class Finance(pb.Avatar):
    def perspective_cash(self):
        """do cash"""

class BizRealm:
    __implements__ = portal.IRealm
    def requestAvatar(self, avatarId, mind, *interfaces):
        if IBusiness in interfaces:
            return IBusiness, Business(avatarId, mind), lambda : None
        elif pb.IPerspective in interfaces:
            return pb.IPerspective, Finance(avatarId), lambda : None
        else:
            raise NotImplementedError
* data schemas in Zope3
http://svn.zope.org/Zope3/trunk/src/zope/schema/README.txt?rev=13888&view=auto
* objects that are both Referenceable and Copyable
-warner
I have a music player which can be controlled remotely via PB. There are
server-side objects (corresponding to songs or albums) which contain both
public attributes (song name, artist name) and private state (pathname to the
local .ogg file, whether or not it is present in the cache).
These objects may be sent to the remote end (the client) in response to
either a "what are you playing right now" query, or a "tell me about all of
your music" query. When they are sent down, the remote end should get an
object which contains the public attributes.
If the remote end sends that object back (in a "please play this song"
method), the local end (the server) should get back a reference to the
original Song or Album object.
This requires that the original object be serialized with both some public
state and a reference ID. The remote end must create a representation that
contains both pieces. That representation will be serialized with just the
reference ID.
Ideally this should be as easy to express as marking the source object as
implementing both pb.Referenceable and pb.Copyable, and the receiving object
as both a pb.RemoteReference and a pb.RemoteCopy.
Without this capability, my workaround is to manually assign a sequential
integer to each of these referenceable objects, then send a dict of the
public attributes and the index number. The recipient sends back the whole
dict, and the server end only pays attention to the .index attribute.
Note that I don't care about doing .callRemote on the remote object. This is
a case where it might make sense to split pb.Referenceable into two pieces,
one that talks about referenceability and the other that talks about
callability (pb.Callable?).
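A sketch of that workaround (hypothetical code, not taken from the music
player itself):

  class Jukebox(pb.Referenceable):
      def __init__(self):
          self.songs = {} # index -> server-side Song object
          self.nextIndex = 0

      def describe(self, song):
          # this dict is what actually crosses the wire
          index = self.nextIndex
          self.nextIndex += 1
          self.songs[index] = song
          return {"index": index, "name": song.name, "artist": song.artist}

      def remote_play(self, description):
          # the client sends the whole dict back; only the index matters
          song = self.songs[description["index"]]
          song.play()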
* both Callable and Copyable
buildbot: remote version of BuildStatus. When a build starts, the
buildmaster sends the current build to all status clients. It would be handy
for them to get some static data (name, number, reason, changes) about the
build at that time, plus a reference that can be used to query it again
later (through callRemote). This can be done manually, but requires knowing
all the places where a BuildStatus might be sent over the wire and wrapping
them. I suppose it could be done with a Slicer/Unslicer pair:
class CCSlicer:
    def slice(self, obj):
        yield obj
        yield obj.getName()
        yield obj.getNumber()
        yield obj.getReason()
        yield obj.getChanges()

class CCUnslicer:
    def receiveChild(self, obj):
        if state == 0: self.obj = makeRemoteReference(obj); state += 1; return
        if state == 1: self.obj.name = obj; state += 1; return
        if state == 2: self.obj.reason = obj; state += 1; return
        if state == 3: self.obj.changes = obj; state += 1; return
plus some glue to make sure the object gets added to the per-Broker
references list: this makes sure the object is not sent (in full) twice, and
that the receiving side keeps a reference to the slaved version.

View File

@ -0,0 +1,933 @@
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Introduction to Foolscap</title>
<style src="stylesheet-unprocessed.css"></style>
</head>
<body>
<h1>Introduction to Foolscap</h1>
<h2>Introduction</h2>
<p>Suppose you find yourself in control of both ends of the wire: you have
two programs that need to talk to each other, and you get to use any protocol
you want. If you can think of your problem in terms of objects that need to
make method calls on each other, then chances are good that you can use the
Foolscap protocol rather than trying to shoehorn your needs into something
like HTTP, or implementing yet another RPC mechanism.</p>
<p>Foolscap is based upon a few central concepts:</p>
<ul>
<li><em>serialization</em>: taking fairly arbitrary objects and types,
turning them into a chunk of bytes, sending them over a wire, then
reconstituting them on the other end. By keeping careful track of object
ids, the serialized objects can contain references to other objects and the
remote copy will still be useful. </li>
<li><em>remote method calls</em>: doing something to a local proxy and
causing a method to get run on a distant object. The local proxy is called
a <code class="API" base="foolscap.referenceable">RemoteReference</code>,
and you <q>do something</q> by running its <code>.callRemote</code> method.
The distant object is called a <code class="API"
base="foolscap.referenceable">Referenceable</code>, and it has methods like
<code>remote_foo</code> that will be invoked.</li>
</ul>
<p>Foolscap is the descendant of Perspective Broker (which lived in the
twisted.spread package). For many years it was known as "newpb". A lot of the
API still has the name "PB" in it somewhere. These will probably go away
sooner or later.</p>
<p>A "foolscap" is a size of paper, probably measuring 17 by 13.5 inches. A
twisted foolscap of paper makes a good fool's cap. Also, "cap" makes me think
of capabilities, and Foolscap is a protocol to implement a distributed
object-capabilities model in python.</p>
<h2>Getting Started</h2>
<p>Any Foolscap application has at least two sides: one which hosts a
remotely-callable object, and another which calls (remotely) the methods of
that object. We'll start with a simple example that demonstrates both ends.
Later, we'll add more features like RemoteInterface declarations, and
transferring object references.</p>
<p>The most common way to make an object with remotely-callable methods is to
subclass <code class="API"
base="foolscap.referenceable">Referenceable</code>. Let's create a simple
server which does basic arithmetic. You might use such a service to perform
difficult mathematical operations, like addition, on a remote machine which
is faster and more capable than your own<span class="footnote">although
really, if your client machine is too slow to perform this kind of math, it
is probably too slow to run python or use a network, so you should seriously
consider a hardware upgrade</span>.</p>
<pre class="python">
from foolscap import Referenceable

class MathServer(Referenceable):
    def remote_add(self, a, b):
        return a+b
    def remote_subtract(self, a, b):
        return a-b
    def remote_sum(self, args):
        total = 0
        for a in args: total += a
        return total

myserver = MathServer()
</pre>
<p>On the other end of the wire (which you might call the <q>client</q>
side), the code will have a <code class="API"
base="foolscap.referenceable">RemoteReference</code> to this object. The
<code>RemoteReference</code> has a method named <code class="API"
base="foolscap.referenceable.RemoteReference">callRemote</code> which you
will use to invoke the method. It always returns a Deferred, which will fire
with the result of the method. Assuming you've already acquired the
<code>RemoteReference</code>, you would invoke the method like this:</p>
<pre class="python">
def gotAnswer(result):
    print "result is", result
def gotError(err):
    print "error:", err

d = remote.callRemote("add", 1, 2)
d.addCallbacks(gotAnswer, gotError)
</pre>
<p>Ok, now how do you acquire that <code>RemoteReference</code>? How do you
make the <code>Referenceable</code> available to the outside world? For this,
we'll need to discuss the <q>Tub</q>, and the concept of a <q>PB URL</q>.</p>
<h2>Tubs: The Foolscap Service</h2>
<p>The <code class="API" base="foolscap.pb">Tub</code> is the container that
you use to publish <code>Referenceable</code>s, and is the middle-man you use
to access <code>Referenceable</code>s on other systems. It is known as the
<q>Tub</q>, since it provides similar naming and identification properties as
the <a href="http://www.erights.org/">E language</a>'s <q>Vat</q><span
class="footnote">but they do not provide quite the same insulation against
other objects as E's Vats do. In this sense, Tubs are leaky Vats.</span>. If
you want to make a <code>Referenceable</code> available to the world, you
create a Tub, tell it to listen on a TCP port, and then register the
<code>Referenceable</code> with it under a name of your choosing. If you want
to access a remote <code>Referenceable</code>, you create a Tub and ask it to
acquire a <code>RemoteReference</code> using that same name.</p>
<p>The <code>Tub</code> is a Twisted <code class="API"
base="twisted.application.service">Service</code> subclass, so you use it in
the same way: once you've created one, you attach it to a parent Service or
Application object. Once the top-level Application object has been started,
the Tub will start listening on any network ports you've requested. When the
Tub is shut down, it will stop listening and drop any connections it had
established since last startup. If you have no parent to attach it to, you
can use <code>startService</code> and <code>stopService</code> on the Tub
directly.</p>
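<p>For example, in a <code>twistd</code>-style application file, the Tub is
attached like any other service (a minimal sketch; the Application setup is
ordinary Twisted code, not Foolscap-specific):</p>

<pre class="python">
from twisted.application import service
from foolscap import Tub

application = service.Application("math-server")
tub = Tub()
tub.setServiceParent(application)
</pre>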
<h3>Making your Tub remotely accessible</h3>
<p>To make any of your <code>Referenceable</code>s available, you must make
your Tub available. There are three parts: give it an identity, have it
listen on a port, and tell it the protocol/hostname/portnumber at which that
port is accessible to the outside world.</p>
<p>In general, the Tub will generate its own identity, the <em>TubID</em>, by
creating an SSL private key certificate and hashing it into a suitably-long
random-looking string. This is the primary identifier of the Tub: everything
else is just a <em>location hint</em> that suggests how the Tub might be
reached. The fact that the TubID is tied to the private key allows PB URLs to
be <q>secure</q> references (meaning that no third party can cause you to
connect to the wrong reference). You can also create a Tub with a
pre-existing certificate, which is how Tubs can retain a persistent identity
over multiple executions.</p>
<p>You can also create an <code>UnauthenticatedTub</code>, which has an empty
TubID. Hosting and connecting to unauthenticated Tubs do not require the
pyOpenSSL library, but do not provide privacy, authentication, connection
redirection, or shared listening ports. The PB-URLs that point to
unauthenticated Tubs have a distinct form (starting with <code>pbu:</code>
instead of <code>pb:</code>) to make sure they are not mistaken for
authenticated Tubs. PB uses authenticated Tubs by default.</p>
<p>Having the Tub listen on a TCP port is as simple as calling <code
class="API" base="foolscap.pb.Tub">listenOn</code> with a <code class="API"
base="twisted.application">strports</code>-formatted port specification
string. The simplest such string would be <q>tcp:12345</q>, to listen on port
12345 on all interfaces. Using <q>tcp:12345:interface=127.0.0.1</q> would
cause it to only listen on the localhost interface, making it available only
to other processes on the same host. The <code>strports</code> module
provides many other possibilities.</p>
<p>The Tub needs to be told how it can be reached, so it knows what host and
port to put into the URLs it creates. This location is simply a string in the
format <q>host:port</q>, using the host name by which that TCP port you've
just opened can be reached. Foolscap cannot, in general, guess what this name
is, especially if there are NAT boxes or port-forwarding devices in the way.
If your machine is reachable directly over the internet as
<q>myhost.example.com</q>, then you could use something like this:</p>
<pre class="python">
from foolscap import Tub
tub = Tub()
tub.listenOn("tcp:12345") # start listening on TCP port 12345
tub.setLocation("myhost.example.com:12345")
</pre>
<h3>Registering the Referenceable</h3>
<p>Once the Tub has a Listener and a location, you can publish your
<code>Referenceable</code> to the entire world by picking a name and
registering it:</p>
<pre class="python">
url = tub.registerReference(myserver, "math-service")
</pre>
<p>This returns the <q>PB URL</q> for your <code>Referenceable</code>. Remote
systems will use this URL to access your newly-published object. The
registration just maps a per-Tub name to the <code>Referenceable</code>:
technically the same <code>Referenceable</code> could be published multiple
times, under different names, or even be published by multiple Tubs in the
same application. But in general, each program will have exactly one Tub, and
each object will be registered under only one name.</p>
<p>In this example (if we pretend the generated TubID was <q>ABCD</q>), the
URL returned by <code>registerReference</code> would be
<code>"pb://ABCD@myhost.example.com:12345/math-service"</code>.</p>
<p>If you do not provide a name, a random (and unguessable) name will be
generated for you. This is useful when you want to give access to your
<code>Referenceable</code> to someone specific, but do not want to make it
possible for someone else to acquire it by guessing the name.</p>
<p>To use an unauthenticated Tub instead, you would do the following:</p>
<pre class="python">
from foolscap import UnauthenticatedTub
tub = UnauthenticatedTub()
tub.listenOn("tcp:12345") # start listening on TCP port 12345
tub.setLocation("myhost.example.com:12345")
url = tub.registerReference(myserver, "math-service")
</pre>
<p>In this case, the URL would be
<code>"pbu://myhost.example.com:12345/math-service"</code>. The deterministic
nature of this form makes it slightly easier to throw together
quick-and-dirty Foolscap applications, since you only need to hard-code the
target host and port into the client side program. However any serious
application should just use the default authenticated form and use a full
URL as its starting point. Note that the URL can come from anywhere: typed
in by the user, retrieved from a web page, or hardcoded into the
application.</p>
<h4>Using a persistent certificate</h4>
<p>The Tub uses a TLS private-key certificate as the base of all its
cryptographic operations. If you don't give it one when you create the Tub,
it will generate a brand-new one.</p>
<p>The TubID is simply the hash of this certificate, so if you are writing an
application that should have a stable long-term identity, you will need to
insure that the Tub uses the same certificate every time your app starts. The
easiest way to do this is to pass the <code>certFile=</code> argument into
your <code>Tub()</code> constructor call. This argument provides a filename
where you want the Tub to store its certificate. The first time the Tub is
started (when this file does not exist), the Tub will generate a new
certificate and store it here. On subsequent invocations, the Tub will read
the earlier certificate from this location. Make sure this filename points to
a writable location, and that you pass the same filename to
<code>Tub()</code> each time.</p>
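<p>A minimal sketch of that pattern (the filename is arbitrary):</p>

<pre class="python">
from foolscap import Tub

tub = Tub(certFile="math-service.pem") # created on first run, reused later
tub.listenOn("tcp:12345")
tub.setLocation("myhost.example.com:12345")
</pre>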
<h3>Retrieving a RemoteReference</h3>
<p>On the <q>client</q> side, you also need to create a Tub, although you
don't need to perform the (<code>listenOn</code>, <code>setLocation</code>,
<code>registerReference</code>) sequence unless you are also publishing
<code>Referenceable</code>s to the world. To acquire a reference to somebody
else's object, just use <code class="API"
base="foolscap.pb.Tub">getReference</code>:</p>
<pre class="python">
from foolscap import Tub

tub = Tub()
d = tub.getReference("pb://ABCD@myhost.example.com:12345/math-service")
def gotReference(remote):
    print "Got the RemoteReference:", remote
def gotError(err):
    print "error:", err
d.addCallbacks(gotReference, gotError)
</pre>
<p><code>getReference</code> returns a Deferred which will fire with a
<code>RemoteReference</code> that is connected to the remote
<code>Referenceable</code> named by the URL. It will use an existing
connection, if one is available, and it will return an existing
<code>RemoteReference</code>, if one has already been acquired.</p>
<h3>Complete example</h3>
<p>Here are two programs, one implementing the server side of our
remote-addition protocol, the other behaving as a client. This first example
uses an unauthenticated Tub so you don't have to manually copy a URL from the
server to the client. Both of these are standalone programs (you just run
them), but normally you would create an <code class="API"
base="twisted.application.service">Application</code> object and pass the
file to <code>twistd -noy</code>. An example of that usage will be provided
later.</p>
<a href="listings/pb1server.py" class="py-listing"
skipLines="2">pb1server.py</a>
<a href="listings/pb1client.py" class="py-listing"
skipLines="2">pb1client.py</a>
<pre class="shell">
% doc/listings/pb1server.py
the object is available at: pbu://localhost:12345/math-service
</pre>
<pre class="shell">
% doc/listings/pb1client.py
got a RemoteReference
asking it to add 1+2
the answer is 3
%
</pre>
<p>The second example uses authenticated Tubs. When running this example, you
must copy the URL printed by the server and provide it as an argument to the
client.</p>
<a href="listings/pb2server.py" class="py-listing"
skipLines="2">pb2server.py</a>
<a href="listings/pb2client.py" class="py-listing"
skipLines="2">pb2client.py</a>
<pre class="shell">
% doc/listings/pb2server.py
the object is available at: pb://abcd123@localhost:12345/math-service
</pre>
<pre class="shell">
% doc/listings/pb2client.py pb://abcd123@localhost:12345/math-service
got a RemoteReference
asking it to add 1+2
the answer is 3
%
</pre>
<h3>PB URLs</h3>
<p>In Foolscap, each world-accessible Referenceable has one or more URLs
which are <q>secure</q>, where we use the capability-security definition of
the term, meaning those URLs have the following properties:</p>
<ul>
<li>The only way to acquire the URL is either to get it from someone else
who already has it, or to be the person who published it in the first
place.</li>
<li>Only that original creator of the URL gets to determine which
Referenceable it will connect to. If your
<code>tub.getReference(url)</code> call succeeds, the Referenceable you
will be connected to will be the right one.</li>
</ul>
<p>To accomplish the first goal, PB URLs must be unguessable. You can
register the reference with a human-readable name if your intention is to
make it available to the world, but in general you will let
<code>tub.registerReference</code> generate a random name for you, preserving
the unguessability property.</p>
<p>To accomplish the second goal, the cryptographically-secure TubID is used
as the primary identifier, and the <q>location hints</q> are just that:
hints. If DNS has been subverted to point the hostname at a different
machine, or if a man-in-the-middle attack causes you to connect to the wrong
box, the TubID will not match the remote end, and the connection will be
dropped. These attacks can cause a denial-of-service, but they cannot cause
you to mistakenly connect to the wrong target.</p>
<p>Obviously this second property only holds if you use SSL. If you choose to
use unauthenticated Tubs, all security properties are lost.</p>
<p>The format of a PB URL, like
<code>pb://abcd123@example.com:5901,backup.example.com:8800/math-server</code>,
is as follows<span class="footnote">note that the PB URL uses the same format
as an <a href="http://www.waterken.com/dev/YURL/httpsy/">HTTPSY</a>
URL</span>:</p>
<ol>
<li>The literal string <code>pb://</code></li>
<li>The TubID (as a base32-encoded hash of the SSL certificate)</li>
<li>A literal <code>@</code> sign</li>
<li>A comma-separated list of <q>location hints</q>. Each is one of the
following:
<ul>
<li>TCP over IPv4 via DNS: <code>HOSTNAME:PORTNUM</code></li>
<li>TCP over IPv4 without DNS: <code>A.B.C.D:PORTNUM</code></li>
<li>TCP over IPv6: (TODO, maybe <code>tcp6:HOSTNAME:PORTNUM</code>?)</li>
<li>TCP over IPv6 w/o DNS: (TODO,
maybe <code>tcp6:[X:Y::Z]:PORTNUM</code>)</li>
<li>Unix-domain socket: (TODO)</li>
</ul>
Each location hint is attempted in turn. Servers can return a
<q>redirect</q>, which will cause the client to insert the provided
redirect targets into the hint list and start trying them before continuing
with the original list.</li>
<li>A literal <code>/</code> character</li>
<li>The reference's name</li>
</ol>
<p>(Unix-domain sockets are represented with only a single location hint, in
the format <code>pb://ABCD@unix/path/to/socket/NAME</code>, but this needs
some work)</p>
<p>PB URLs for unauthenticated Tubs, like
<code>pbu://example.com:8700/math-server</code>, are formatted as
follows:</p>
<ol>
<li>The literal string <code>pbu://</code></li>
<li>A single location hint</li>
<li>A literal <code>/</code> character</li>
<li>The reference's name</li>
</ol>
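<p>To make the layout concrete, here is a rough illustrative parser (this is
<em>not</em> the library's own code) that splits an authenticated PB URL into
the pieces listed above:</p>
<pre class="python">
def split_pb_url(url):
    # pb://TUBID@HINT1,HINT2/NAME
    assert url.startswith("pb://")
    rest = url[len("pb://"):]
    tubid, rest = rest.split("@", 1)
    hints, name = rest.split("/", 1)
    return tubid, hints.split(","), name

print split_pb_url("pb://abcd123@example.com:5901,backup.example.com:8800/math-server")
# ('abcd123', ['example.com:5901', 'backup.example.com:8800'], 'math-server')
</pre>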
<h2>Clients vs Servers, Names and Capabilities</h2>
<p>It is worthwhile to point out that Foolscap is a symmetric protocol.
<code>Referenceable</code> instances can live on either side of a wire, and
the only difference between <q>client</q> and <q>server</q> is who publishes
the object and who initiates the network connection.</p>
<p>In any Foolscap-using system, the very first object exchanged must be
acquired with a <code>tub.getReference(url)</code> call<span
class="footnote">in fact, the very <em>very</em> first object exchanged is a
special implicit RemoteReference to the remote Tub itself, which implements
an internal protocol that includes a method named
<code>remote_getReference</code>. The <code>tub.getReference(url)</code> call
is turned into one step that connects to the remote Tub, and a second step
which invokes <code>remotetub.callRemote("getReference", refname)</code> on the
result</span>, which means it must have been published with a call to
<code>tub.registerReference(ref, name)</code>. After that, other objects can
be passed as an argument to (or a return value from) a remotely-invoked
method of that first object. Any suitable <code>Referenceable</code> object
that is passed over the wire will appear on the other side as a corresponding
<code>RemoteReference</code>. It is not necessary to
<code>registerReference</code> something to let it pass over the wire.</p>
<p>The converse of this property is thus: if you do <em>not</em>
<code>registerReference</code> a particular <code>Referenceable</code>, and
you do <em>not</em> give it to anyone else (by passing it as an argument to
somebody's remote method, or returning it from one of your own), then nobody
else will be able to get access to that <code>Referenceable</code>. This
property means the <code>Referenceable</code> is a <q>capability</q>, as
holding a corresponding <code>RemoteReference</code> gives someone a power
that they cannot acquire in any other way<span class="footnote">of course,
the Foolscap connections must be secured with SSL (otherwise an eavesdropper
or man-in-the-middle could get access), and the registered name must be
unguessable (or someone else could acquire a reference), but both of these
are the default.</span></p>
<p>In the following example, the first program creates an RPN-style
<code>Calculator</code> object which responds to <q>push</q>, <q>pop</q>,
<q>add</q>, and <q>subtract</q> messages from the user. The user can also
register an <code>Observer</code>, to which the Calculator sends an
<code>event</code> message each time something happens to the calculator's
state. When you consider the <code>Calculator</code> object, the first
program is the server and the second program is the client. When you think
about the <code>Observer</code> object, the first program is a client and the
second program is the server. It also happens that the first program is
listening on a socket, while the second program initiated a network
connection to the first. It <em>also</em> happens that the first program
published an object under some well-known name, while the second program has
not published any objects. These are all independent properties.</p>
<p>Also note that the Calculator side of the example is implemented using a
<code class="API" base="twisted.application.service">Application</code>
object, which is the way you'd normally build a real-world application. You
therefore use <code>twistd</code> to launch the program. The User side is
written with the same <code>reactor.run()</code> style as the earlier
example.</p>
<p>The server registers the Calculator instance and prints the PBURL at which
it is listening. You need to pass this PBURL to the client program so it
knows how to contact the server. If you have a modern version of Twisted (2.5
or later) and the right encryption libraries installed, you'll get an
authenticated Tub (for which the PBURL will start with "pb:" and will be
fairly long). If you don't, you'll get an unauthenticated Tub (with a
relatively short PBURL that starts with "pbu:").</p>
<a href="listings/pb3calculator.py" class="py-listing"
skipLines="2">pb3calculator.py</a>
<a href="listings/pb3user.py" class="py-listing"
skipLines="2">pb3user.py</a>
<pre class="shell">
% twistd -noy doc/listings/pb3calculator.py
15:46 PDT [-] Log opened.
15:46 PDT [-] twistd 2.4.0 (/usr/bin/python 2.4.4) starting up
15:46 PDT [-] reactor class: twisted.internet.selectreactor.SelectReactor
15:46 PDT [-] Loading doc/listings/pb3calculator.py...
15:46 PDT [-] the object is available at:
pb://5ojw4cv4u4d5cenxxekjukrogzytnhop@localhost:12345/calculator
15:46 PDT [-] Loaded.
15:46 PDT [-] foolscap.pb.Listener starting on 12345
15:46 PDT [-] Starting factory &lt;Listener at 0x4869c0f4 on tcp:12345
with tubs None&gt;
</pre>
<pre class="shell">
% doc/listings/pb3user.py \
pb://5ojw4cv4u4d5cenxxekjukrogzytnhop@localhost:12345/calculator
event: push(2)
event: push(3)
event: add
event: pop
the result is 5
%
</pre>
<h2>Invoking Methods, Method Arguments</h2>
<p>As you've probably already guessed, all the methods with names that begin
with <code>remote_</code> will be available to anyone who manages to acquire
a corresponding <code>RemoteReference</code>. <code>remote_foo</code> matches
a <code>ref.callRemote("foo")</code>, etc. This name lookup can be changed by
overriding <code>Referenceable</code> (or, perhaps more usefully,
implementing an <code class="API"
base="foolscap.ipb">IRemotelyCallable</code> adapter).</p>
<p>The arguments of a remote method may be passed as either positional
parameters (<code>foo(1,2)</code>), or as keyword args
(<code>foo(a=1,b=2)</code>), or a mixture of both. The usual python rules
about not duplicating parameters apply.</p>
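<p>For example, all three of the following invoke the same
<code>remote_foo(self, a, b)</code> method on the far side (the method name
is purely illustrative):</p>
<pre class="python">
d1 = rref.callRemote("foo", 1, 2)      # positional
d2 = rref.callRemote("foo", a=1, b=2)  # keyword
d3 = rref.callRemote("foo", 1, b=2)    # a mixture of both
</pre>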
<p>You can pass all sorts of normal objects to a remote method: strings,
numbers, tuples, lists, and dictionaries. The serialization of these objects
is handled by <a href="specifications/banana.xhtml">Banana</a>, which knows
how to convey arbitrary object graphs over the wire. Things like containers
which contain multiple references to the same object, and recursive
references (cycles in the object graph) are all handled correctly<span
class="footnote">you may not want to accept shared objects in your method
arguments, as it could lead to surprising behavior depending upon how you
have written your method. The <code class="API"
base="foolscap.schema">Shared</code> constraint will let you express this,
and is described in the <a href="#constraints">Constraints</a> section of
this document</span>.</p>
<p>Passing instances is handled specially. Foolscap will not send anything
over the wire that it does not know how to serialize, and (unlike the
standard <code>pickle</code> module) it will not make assumptions about how
to handle classes that have not been explicitly marked as serializable.
This is for security, both for the sender (making sure you don't pass anything
over the wire that you didn't intend to let out of your security perimeter),
and for the recipient (making sure outsiders aren't allowed to create
arbitrary instances inside your memory space, and therefore letting them run
somewhat arbitrary code inside <em>your</em> perimeter).</p>
<p>Sending <code>Referenceable</code>s is straightforward: they always appear
as a corresponding <code>RemoteReference</code> on the other side. You can
send the same <code>Referenceable</code> as many times as you like, and it
will always show up as the same <code>RemoteReference</code> instance. A
distributed reference count is maintained, so as long as the remote side
hasn't forgotten about the <code>RemoteReference</code>, the original
<code>Referenceable</code> will be kept alive.</p>
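<p>A small sketch of this identity property (the <code>Registry</code> and
<code>Observer</code> names are purely illustrative): sending the same local
object twice yields the same <code>RemoteReference</code> on the far
side.</p>
<pre class="python">
# sending side
obs = Observer()  # some local Referenceable
rref.callRemote("register", obs)
rref.callRemote("register", obs)

# receiving side
class Registry(foolscap.Referenceable):
    def __init__(self):
        self.observers = []
    def remote_register(self, observer):
        if self.observers:
            assert observer is self.observers[0]  # same instance both times
        self.observers.append(observer)
</pre>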
<p>Sending <code>RemoteReference</code>s falls into two categories. If you are
sending a <code>RemoteReference</code> back to the Tub that you got it from,
they will see their original <code>Referenceable</code>. If you send it to
some other Tub, they will (eventually) see a <code>RemoteReference</code> of
their own. This last feature is called an <q>introduction</q>, and has a few
additional requirements: see the <a href="#introductions">Introductions</a>
section of this document for details.</p>
<p>Sending instances of other classes requires that you tell Banana how they
should be serialized. <code>Referenceable</code> is good for
copy-by-reference semantics<span class="footnote">In fact, if all you want is
referenceability (and not callability), you can use <code class="API"
base="foolscap.referenceable">OnlyReferenceable</code>. Strictly speaking,
<code>Referenceable</code> is both <q>Referenceable</q> (meaning it is sent
over the wire using pass-by-reference semantics, and it survives a round
trip) and <q>Callable</q> (meaning you can invoke remote methods on it).
<code>Referenceable</code> should really be named <code>Callable</code>, but
the existing name has a lot of historical weight behind it.</span>. For
copy-by-value semantics, the easiest route is to subclass <code class="API"
base="foolscap.copyable">Copyable</code>. See the <a
href="#copyable">Copyable</a> section for details. Note that you can also
register an <code class="API" base="foolscap.copyable">ICopyable</code>
adapter on third-party classes to avoid subclassing. You will need to
register the <code>Copyable</code>'s name on the receiving end too, otherwise
Banana will not know how to unserialize the incoming data stream.</p>
<p>When returning a value from a remote method, you can do all these things,
plus two more. If you raise an exception, the caller's Deferred will have the
errback fired instead of the callback, with a <code class="API"
base="foolscap.call">CopiedFailure</code> instance that describes what went
wrong. The <code>CopiedFailure</code> is not quite as useful as a local <code
class="API" base="twisted.python.failure">Failure</code> object would be: to
send it over the wire, some contents are replaced with strings, and the
actual Exception object (<code>f.value</code>) is replaced with its string
representation. But you can still use it to find out what went wrong. The
<code>CopiedFailure</code> may reveal more information about the internals of
your program than you want: you can set the Tub's <code>unsafeTracebacks</code>
flag to False to limit outgoing <code>CopiedFailure</code>s to contain only
the exception type (and none of the stack trace information that would reveal
lines of your source code to the remote end).</p>
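<p>A short sketch of what this looks like from the calling side (the
<code>divide</code> method name is purely illustrative):</p>
<pre class="python">
def _ok(result):
    print "remote call succeeded:", result
def _failed(f):
    # f is a CopiedFailure; f.value holds the string form of the remote exception
    print "remote call failed:", f
d = rref.callRemote("divide", 1, 0)
d.addCallbacks(_ok, _failed)
</pre>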
<p>The other alternative is for your method to return a <code class="API"
base="twisted.internet.defer">Deferred</code>. If this happens, the caller
will not actually get a response until you fire that Deferred. This is useful
when the remote operation being requested cannot complete right away. The
caller's Deferred will fire with whatever value you eventually fire your own
Deferred with. If your Deferred is errbacked, their Deferred will be
errbacked with a <code>CopiedFailure</code>.</p>
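<p>For example, a remote method that cannot answer right away might look like
this sketch (the two-second delay is purely illustrative):</p>
<pre class="python">
from twisted.internet import defer, reactor
import foolscap

class Slow(foolscap.Referenceable):
    def remote_lookup(self, key):
        d = defer.Deferred()
        # pretend the real answer takes two seconds to compute
        reactor.callLater(2, d.callback, "value for %s" % key)
        return d  # the caller's Deferred fires when this one does
</pre>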
<h2>Constraints and RemoteInterfaces</h2><a name="constraints" />
<p>One major feature introduced by Foolscap (relative to oldpb) is the
serialization <code class="API" base="foolscap.schema">Constraint</code>.
This lets you place limits on what kind of data you are willing to accept,
which enables safer distributed programming. Typically python uses <q>duck
typing</q>, wherein you usually just throw some arguments at the method and
see what happens. When you are less sure of the origin of those arguments,
you may want to be more circumspect. Enforcing type checking at the boundary
between your code and the outside world may make it safer to use duck typing
inside those boundaries. The type specifications also form a convenient
remote API reference you can publish for prospective clients of your
remotely-invokable service.</p>
<p>In addition, these Constraints are enforced on each token as it arrives
over the wire. This means that you can calculate a (small) upper bound on how
much received data your program will store before it decides to hang up on
the violator, minimizing your exposure to DoS attacks that involve sending
random junk at you.</p>
<p>There are three pieces you need to know about: Tokens, Constraints, and
RemoteInterfaces.</p>
<h3>Tokens</h3>
<p>The fundamental unit of serialization is the Banana Token. These are
thoroughly documented in the <a href="specifications/banana.xhtml">Banana
Specification</a>, but what you need to know here is that each piece of
non-container data, like a string or a number, is represented by a single
token. Containers (like lists and dictionaries) are represented by a special
OPEN token, followed by tokens for everything that is in the container,
followed by the CLOSE token. Everything Banana does is in terms of these
nested OPEN/stuff/stuff/CLOSE sequences of tokens.</p>
<p>Each token consists of a header, a type byte, and an optional body. The
header is always a base-128 number with a maximum of 64 digits, and the type
byte is always a single byte. The body (if present) has a length dictated by
the magnitude of the header.</p>
<p>The length-first token format means that the receiving system never has to
accept more than 65 bytes before it knows the type and size of the token, at
which point it can make a decision about accepting or rejecting the rest of
it.</p>
<h3>Constraints</h3>
<p>The <code>foolscap.schema</code> module has a variety of <code
class="API" base="foolscap.schema">Constraint</code> classes that can be
applied to incoming data. Most of them correspond to typical Python types,
e.g. <code class="API" base="foolscap.schema">ListOf</code> matches a list,
with a certain maximum length, and a child <code>Constraint</code> that gets
applied to the contents of the list. You can nest <code>Constraint</code>s in
this way to describe the <q>shape</q> of the object graph that you are
willing to accept.</p>
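<p>A short sketch of such nesting (the <code>maxLength</code> keyword shown
here is an assumption about the constraint's signature; check
<code>foolscap.schema</code> for the authoritative spelling):</p>
<pre class="python">
from foolscap import schema
# a list of at most 20 strings, each no longer than 100 bytes
names = schema.ListOf(schema.StringConstraint(100), maxLength=20)
# constraints nest: a list of lists of ints
table = schema.ListOf(schema.ListOf(int))
</pre>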
<p>At any given time, the receiving Banana protocol has a single
<code>Constraint</code> object that it enforces against the inbound data
stream<span class="footnote">to be precise, each <code>Unslicer</code> on the
receive stack has a <code>Constraint</code>, and the idea is that all of them
get to pass judgement on the inbound token. A useful syntax to describe this
sort of thing is still being worked out.</span>.</p>
<h3>RemoteInterfaces</h3>
<p>The <code class="API"
base="foolscap.remoteinterface">RemoteInterface</code> is how you describe
your constraints. You can provide a constraint for each argument of each
method, as well as one for the return value. You can also specify additional
flags on the methods. The convention (which is actually enforced by the code)
is to name <code>RemoteInterface</code> objects with an <q>RI</q> prefix,
like <code>RIFoo</code>.</p>
<p><code>RemoteInterfaces</code> are created and used a lot like the usual
<code>zope.interface</code>-style <code>Interface</code>. They look like
class definitions, inheriting from <code>RemoteInterface</code>. For each
method, the default value of each argument is used to create a
<code>Constraint</code> for that argument. Basic types (<code>int</code>,
<code>str</code>, <code>bool</code>) are converted into a
<code>Constraint</code> subclass (<code class="API"
base="foolscap.schema">IntegerConstraint</code>, <code class="API"
base="foolscap.schema">StringConstraint</code>, <code class="API"
base="foolscap.schema">BooleanConstraint</code>). You can also use
instances of other <code>Constraint</code> subclasses, like <code class="API"
base="foolscap.schema">ListOf</code> and <code class="API"
base="foolscap.schema">DictOf</code>. This <code>Constraint</code> will be
enforced against the value for the given argument. Unless you specify
otherwise, remote callers must match all the <code>Constraint</code>s you
specify, all arguments listed in the RemoteInterface must be present, and no
arguments outside that list will be accepted.</p>
<p>Note that, like zope.interface, these methods should <b>not</b> include
<q><code>self</code></q> in their argument list. This is because you are
documenting how <em>other</em> people invoke your methods. <code>self</code>
is an implementation detail. <code>RemoteInterface</code> will complain if
you forget.</p>
<p>The <q>methods</q> in a <code>RemoteInterface</code> should return a
single value with the same format as the default arguments: either a basic
type (<code>int</code>, <code>str</code>, etc) or a <code>Constraint</code>
subclass. This <code>Constraint</code> is enforced on the return value of the
method. If you are calling a method in somebody else's process, the argument
constraints will be applied as a courtesy (<q>be conservative in what you
send</q>), and the return value constraint will be applied to prevent the
server from doing evil things to you. If you are running a method on behalf
of a remote client, the argument constraints will be enforced to protect
<em>you</em>, while the return value constraint will be applied as a
courtesy.</p>
<p>Attempting to send a value that does not satisfy the Constraint will
result in a <code class="API" base="foolscap">Violation</code> exception
being raised.</p>
<p>You can also specify methods by defining attributes of the same name in
the <code>RemoteInterface</code> object. Each attribute value should be an
instance of <code class="API"
base="foolscap.schema">RemoteMethodSchema</code><span
class="footnote">although technically it can be any object which implements
the <code class="API" base="foolscap.schema">IRemoteMethodConstraint</code>
interface</span>. This approach is more flexible: there are some constraints
that are not easy to express with the default-argument syntax, and this is
the only way to set per-method flags. Note that all such method-defining
attributes must be set in the <code>RemoteInterface</code> body itself,
rather than being set on it after the fact (i.e. <code>RIFoo.doBar =
stuff</code>). This is required because the <code>RemoteInterface</code>
metaclass magic processes all of these attributes only once, immediately
after the <code>RemoteInterface</code> body has been evaluated.</p>
<p>The <code>RemoteInterface</code> <q>class</q> has a name. Normally this is
the fully-qualified classname<span
class="footnote"><code>RIFoo.__class__.__name__</code>, if
<code>RemoteInterface</code>s were actually classes, which they're
not</span>, like <code>package.module.RIFoo</code>. You can override this
name by setting a special <code>__remote_name__</code> attribute on the
<code>RemoteInterface</code> (again, in the body). This name is important
because it is externally visible: all <code>RemoteReference</code>s that
point at your <code>Referenceable</code>s will remember the names of the
<code>RemoteInterface</code>s they implement. This is what enables the
type-checking to be performed on both ends of the wire.</p>
<p>Here's an example:</p>
<pre class="python">
from foolscap import RemoteInterface, schema
class RIMath(RemoteInterface):
def add(a=int, b=int):
return int
# declare it with an attribute instead of a function definition
subtract = schema.RemoteMethodSchema(a=int, b=int, _response=int)
def sum(args=schema.ListOf(int)):
return int
</pre>
<h3>Using RemoteInterface</h3>
<p>To declare that your <code>Referenceable</code> responds to a particular
<code>RemoteInterface</code>, use the normal <code>implements()</code>
annotation:</p>
<pre class="python">
class MathServer(foolscap.Referenceable):
implements(RIMath)
def remote_add(self, a, b):
return a+b
def remote_subtract(self, a, b):
return a-b
def remote_sum(self, args):
total = 0
for a in args: total += a
return total
</pre>
<p>To enforce constraints everywhere, both sides will need to know about the
<code>RemoteInterface</code>, and both must know it by the same name. It is a
good idea to put the <code>RemoteInterface</code> in a common file that is
imported into the programs running on both sides. It is up to you to make
sure that both sides agree on the interface. Future versions of Foolscap may
implement some sort of checksum-verification or Interface-serialization as a
failsafe, but fundamentally the <code>RemoteInterface</code> that
<em>you</em> are using defines what <em>your</em> program is prepared to
handle. There is no difference between an old client accidentally using a
different version of the RemoteInterface, and a malicious attacker
actively trying to confuse your code. The only promise that Foolscap can make
is that the constraints you provide in the RemoteInterface will be faithfully
applied to the incoming data stream, so that you don't need to do the type
checking yourself inside the method.</p>
<p>When making a remote method call, you use the <code>RemoteInterface</code>
to identify the method instead of a string. This scopes the method name to
the RemoteInterface:</p>
<pre class="python">
d = remote.callRemote(RIMath["add"], a=1, b=2)
# or
d = remote.callRemote(RIMath["add"], 1, 2)
</pre>
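<p>If an argument (or the eventual return value) does not satisfy the
declared constraint, the Deferred you get back will errback instead of firing
with a result. A rough sketch, reusing the <code>RIMath</code> example
above:</p>
<pre class="python">
def _oops(f):
    # f wraps the Violation (or a CopiedFailure describing one raised remotely)
    print "constraint violated:", f
d = remote.callRemote(RIMath["add"], a="not a number", b=2)
d.addErrback(_oops)
</pre>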
<h2>Pass-By-Copy</h2>
<p>You can pass (nearly) arbitrary instances over the wire. Foolscap knows
how to serialize all of Python's native data types already: numbers, strings,
unicode strings, booleans, lists, tuples, dictionaries, sets, and the None
object. You can teach it how to serialize instances of other types too.
Foolscap will not serialize (or deserialize) any class that you haven't
taught it about, both for security and because it refuses the temptation to
guess your intentions about how these unknown classes ought to be
serialized.</p>
<p>The simplest possible way to pass things by copy is demonstrated in the
following code fragment:</p>
<pre class="python">
from foolscap import Copyable, RemoteCopy
class MyPassByCopy(Copyable, RemoteCopy):
typeToCopy = copytype = "MyPassByCopy"
def __init__(self):
# RemoteCopy subclasses may not accept any __init__ arguments
pass
def setCopyableState(self, state):
self.__dict__ = state
</pre>
<p>If the code on both sides of the wire import this class, then any
instances of <code>MyPassByCopy</code> that are present in the arguments of a
remote method call (or returned as the result of a remote method call) will
be serialized and reconstituted into an equivalent instance on the other
side.</p>
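<p>For instance, once both programs have imported the class above, you can
simply include an instance in a remote call (the <code>examine</code> method
name is illustrative):</p>
<pre class="python">
obj = MyPassByCopy()
obj.a = 1
obj.b = "two"
# the receiver's remote_examine() is handed an equivalent MyPassByCopy
# instance, rebuilt on the far side via setCopyableState()
d = rref.callRemote("examine", obj)
</pre>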
<p>For more complicated things to do with pass-by-copy, see the documentation
on <a href="copyable.html">Copyable</a>. This explains the difference between
<code>Copyable</code> and <code>RemoteCopy</code>, how to control the
serialization and deserialization process, and how to arrange for
serialization of third-party classes that are not subclasses of
<code>Copyable</code>.</p>
<h2>Third-party References</h2><a name="introductions" />
<p>Another new feature of Foolscap is the ability to send
<code>RemoteReference</code>s to third parties. The classic scenario for this
is illustrated by the <a
href="http://www.erights.org/elib/capability/overview.html">three-party
Granovetter diagram</a>. One party (Alice) has RemoteReferences to two other
objects named Bob and Carol. She wants to share her reference to Carol with
Bob, by including it in a message she sends to Bob (i.e. by using it as an
argument when she invokes one of Bob's remote methods). The Foolscap code for
doing this would look like:</p>
<pre class="python">
bobref.callRemote("foo", intro=carolref)
</pre>
<p>When Bob receives this message (i.e. when his <code>remote_foo</code>
method is invoked), he will discover that he's holding a fully-functional
<code>RemoteReference</code> to the object named Carol<span
class="footnote">and if everyone involved is using authenticated Tubs, then
Foolscap offers a guarantee, in the cryptographic sense, that Bob will wind
up with a reference to the same object that Alice intended. The authenticated
PBURLs prevent DNS-spoofing and man-in-the-middle attacks.</span>. He can
start using this RemoteReference right away:</p>
<pre class="python">
class Bob(foolscap.Referenceable):
    def remote_foo(self, intro):
        self.carol = intro
        self.carol.callRemote("howdy", msg="Pleased to meet you", you=intro)
        return self.carol
</pre>
<p>If Bob sends this <code>RemoteReference</code> back to Alice, her method
will see the same <code>RemoteReference</code> that she sent to Bob. In this
example, Bob sends the reference by returning it from the original
<code>remote_foo</code> method call, but he could almost as easily send it in
a separate method call.</p>
<pre class="python">
class Alice(foolscap.Referenceable):
def start(self, carol):
self.carol = carol
d = self.bob.callRemote("foo", intro=carol)
d.addCallback(self.didFoo)
def didFoo(self, result):
assert result is self.carol # this will be true
</pre>
<p>Moreover, if Bob sends it back to <em>Carol</em> (completing the
three-party round trip), Carol will see it as her original
<code>Referenceable</code>.</p>
<pre class="python">
class Carol(foolscap.Referenceable):
def remote_howdy(self, msg, you):
assert you is self # this will be true
</pre>
<p>In addition to this, in the four-party introduction sequence as used by
the <a
href="http://www.erights.org/elib/equality/grant-matcher/index.html">Grant
Matcher Puzzle</a>, when a Referenceable is sent to the same destination
through multiple paths, the recipient will receive the same
<code>RemoteReference</code> object from both sides.</p>
<p>For a <code>RemoteReference</code> to be transferrable to third-parties in
this fashion, the original <code>Referenceable</code> must live in a Tub
which has a working listening port, and an established base URL. It is not
necessary for the Referenceable to have been published with
<code>registerReference</code> first: if it is sent over the wire before a
name has been associated with it, it will be registered under a new random
and unguessable name. The <code>RemoteReference</code> will contain the
resulting URL, enabling it to be sent to third parties.</p>
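<p>In practice this just means giving your Tub a listening port and a
location before you start handing out references; a minimal sketch (the
hostname is illustrative):</p>
<pre class="python">
from foolscap import Tub
tub = Tub()
tub.listenOn("tcp:12345")                    # a working listening port ...
tub.setLocation("myhost.example.com:12345")  # ... and an established base URL
tub.startService()
# any Referenceable sent through this Tub can now be introduced to third
# parties, even if it was never passed to registerReference() explicitly
</pre>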
<p>When this introduction is made, the receiving system must establish a
connection with the Tub that holds the original Referenceable, and acquire
its own RemoteReference. These steps must take place before the remote method
can be invoked, and other method calls might arrive before they do. All
subsequent method calls are queued until the one that involved the
introduction is performed. Foolscap guarantees (by default) that messages
sent to a given Referenceable will be delivered in the order in which they
were sent. In the
future there may be options to relax this guarantee, in exchange for higher
performance, reduced memory consumption, multiple priority queues, limited
latency, or other features. There might even be an option to turn off
introductions altogether.</p>
<p>Also note that enabling this capability means any of your communication
peers can make you create TCP connections to hosts and port numbers of their
choosing. The fact that those connections can only speak the Foolscap
protocol may reduce the security risk presented, but it still lets other
people be annoying.</p>
</body></html>

View File

@ -0,0 +1,29 @@
"""Foolscap"""
__version__ = "0.1.2+"
# here are the primary entry points
from foolscap.pb import Tub, UnauthenticatedTub, getRemoteURL_TCP
# names we import so that others can reach them as foolscap.foo
from foolscap.remoteinterface import RemoteInterface
from foolscap.referenceable import Referenceable, SturdyRef
from foolscap.copyable import Copyable, RemoteCopy, registerRemoteCopy
from foolscap.copyable import registerCopier, registerRemoteCopyFactory
from foolscap.ipb import DeadReferenceError
from foolscap.tokens import BananaError
from foolscap import schema # necessary for the adapter_hooks side-effect
# TODO: Violation?
# hush pyflakes
_unused = [
Tub, UnauthenticatedTub, getRemoteURL_TCP,
RemoteInterface,
Referenceable, SturdyRef,
Copyable, RemoteCopy, registerRemoteCopy,
registerCopier, registerRemoteCopyFactory,
DeadReferenceError,
BananaError,
schema,
]
del _unused

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1,25 @@
# copied from the waterken.org Web-Calculus python implementation
def encode(bytes):
chars = ""
buffer = 0;
n = 0;
for b in bytes:
buffer = buffer << 8
buffer = buffer | ord(b)
n = n + 8
while n >= 5:
chars = chars + _encode((buffer >> (n - 5)) & 0x1F)
n = n - 5;
buffer = buffer & 0x1F # To quiet any warning from << operator
if n > 0:
buffer = buffer << (5 - n)
chars = chars + _encode(buffer & 0x1F)
return chars
def _encode(v):
if v < 26:
return chr(ord('a') + v)
else:
return chr(ord('2') + (v - 26))

View File

@ -0,0 +1,645 @@
# This module is responsible for the per-connection Broker object
import types
from itertools import count
from zope.interface import implements
from twisted.python import log
from twisted.internet import defer, error
from twisted.internet import interfaces as twinterfaces
from twisted.internet.protocol import connectionDone
from foolscap import banana, tokens, ipb, vocab
from foolscap import call, slicer, referenceable, copyable, remoteinterface
from foolscap.constraint import Any
from foolscap.tokens import Violation, BananaError
from foolscap.ipb import DeadReferenceError
from foolscap.slicers.root import RootSlicer, RootUnslicer
from foolscap.eventual import eventually
PBTopRegistry = {
("call",): call.CallUnslicer,
("answer",): call.AnswerUnslicer,
("error",): call.ErrorUnslicer,
}
PBOpenRegistry = {
('arguments',): call.ArgumentUnslicer,
('my-reference',): referenceable.ReferenceUnslicer,
('your-reference',): referenceable.YourReferenceUnslicer,
('their-reference',): referenceable.TheirReferenceUnslicer,
# ('copyable', classname) is handled inline, through the CopyableRegistry
}
class PBRootUnslicer(RootUnslicer):
# topRegistries defines what objects are allowed at the top-level
topRegistries = [PBTopRegistry]
# openRegistries defines what objects are allowed at the second level and
# below
openRegistries = [slicer.UnslicerRegistry, PBOpenRegistry]
logViolations = False
def checkToken(self, typebyte, size):
if typebyte != tokens.OPEN:
raise BananaError("top-level must be OPEN")
def openerCheckToken(self, typebyte, size, opentype):
if typebyte == tokens.STRING:
if len(opentype) == 0:
if size > self.maxIndexLength:
why = "first opentype STRING token is too long, %d>%d" % \
(size, self.maxIndexLength)
raise Violation(why)
if opentype == ("copyable",):
# TODO: this is silly, of course (should pre-compute maxlen)
maxlen = reduce(max,
[len(cname) \
for cname in copyable.CopyableRegistry.keys()]
)
if size > maxlen:
why = "copyable-classname token is too long, %d>%d" % \
(size, maxlen)
raise Violation(why)
elif typebyte == tokens.VOCAB:
return
else:
# TODO: hack for testing
raise Violation("index token 0x%02x not STRING or VOCAB" % \
ord(typebyte))
raise BananaError("index token 0x%02x not STRING or VOCAB" % \
ord(typebyte))
def open(self, opentype):
# used for lower-level objects, delegated up from childunslicer.open
assert len(self.protocol.receiveStack) > 1
if opentype[0] == 'copyable':
if len(opentype) > 1:
classname = opentype[1]
try:
factory = copyable.CopyableRegistry[classname]
except KeyError:
raise Violation("unknown RemoteCopy class '%s'" \
% classname)
child = factory()
child.broker = self.broker
return child
else:
return None # still need classname
for reg in self.openRegistries:
opener = reg.get(opentype)
if opener is not None:
child = opener()
break
else:
raise Violation("unknown OPEN type %s" % (opentype,))
child.broker = self.broker
return child
def doOpen(self, opentype):
child = RootUnslicer.doOpen(self, opentype)
if child:
child.broker = self.broker
return child
def reportViolation(self, f):
if self.logViolations:
print "hey, something failed:", f
return None # absorb the failure
def receiveChild(self, token, ready_deferred):
if isinstance(token, call.InboundDelivery):
assert ready_deferred is None
self.broker.scheduleCall(token)
class PBRootSlicer(RootSlicer):
slicerTable = {types.MethodType: referenceable.CallableSlicer,
types.FunctionType: referenceable.CallableSlicer,
}
def registerReference(self, refid, obj):
assert 0
def slicerForObject(self, obj):
# zope.interface doesn't do transitive adaptation, which is a shame
# because we want to let people register ICopyable adapters for
# third-party code, and there is an ICopyable->ISlicer adapter
# defined in copyable.py, but z.i won't do the transitive
# ThirdPartyClass -> ICopyable -> ISlicer
# so instead we manually do it here
s = tokens.ISlicer(obj, None)
if s:
return s
copier = copyable.ICopyable(obj, None)
if copier:
s = tokens.ISlicer(copier)
return s
return RootSlicer.slicerForObject(self, obj)
class RIBroker(remoteinterface.RemoteInterface):
def getReferenceByName(name=str):
"""If I have published an object by that name, return a reference to
it."""
# return Remote(interface=any)
return Any()
def decref(clid=int, count=int):
"""Release some references to my-reference 'clid'. I will return an
ack when the operation has completed."""
return None
def decgift(giftID=int, count=int):
"""Release some reference to a their-reference 'giftID' that was
sent earlier."""
return None
class Broker(banana.Banana, referenceable.Referenceable):
"""I manage a connection to a remote Broker.
@ivar tub: the L{Tub} which contains us
@ivar yourReferenceByCLID: maps your CLID to a RemoteReferenceData
#@ivar yourReferenceByName: maps a per-Tub name to a RemoteReferenceData
@ivar yourReferenceByURL: maps a global URL to a RemoteReferenceData
"""
implements(RIBroker)
slicerClass = PBRootSlicer
unslicerClass = PBRootUnslicer
unsafeTracebacks = True
requireSchema = False
disconnected = False
factory = None
tub = None
remote_broker = None
startingTLS = False
startedTLS = False
def __init__(self, params={},
keepaliveTimeout=None, disconnectTimeout=None):
banana.Banana.__init__(self, params)
self.keepaliveTimeout = keepaliveTimeout
self.disconnectTimeout = disconnectTimeout
self._banana_decision_version = params.get("banana-decision-version")
vocab_table_index = params.get('initial-vocab-table-index')
if vocab_table_index:
table = vocab.INITIAL_VOCAB_TABLES[vocab_table_index]
self.populateVocabTable(table)
self.initBroker()
def initBroker(self):
self.rootSlicer.broker = self
self.rootUnslicer.broker = self
# tracking Referenceables
# sending side uses these
self.nextCLID = count(1).next # 0 is for the broker
self.myReferenceByPUID = {} # maps ref.processUniqueID to a tracker
self.myReferenceByCLID = {} # maps CLID to a tracker
# receiving side uses these
self.yourReferenceByCLID = {}
self.yourReferenceByURL = {}
# tracking Gifts
self.nextGiftID = count().next
self.myGifts = {} # maps (broker,clid) to (rref, giftID, count)
self.myGiftsByGiftID = {} # maps giftID to (broker,clid)
# remote calls
# sending side uses these
self.nextReqID = count(1).next # 0 means "we don't want a response"
self.waitingForAnswers = {} # we wait for the other side to answer
self.disconnectWatchers = []
# receiving side uses these
self.inboundDeliveryQueue = []
self.activeLocalCalls = {} # the other side wants an answer from us
def setTub(self, tub):
assert ipb.ITub.providedBy(tub)
self.tub = tub
self.unsafeTracebacks = tub.unsafeTracebacks
if tub.debugBanana:
self.debugSend = True
self.debugReceive = True
def connectionMade(self):
banana.Banana.connectionMade(self)
# create the remote_broker object. We don't use the usual
# reference-counting mechanism here, because this is a synthetic
# object that lives forever.
tracker = referenceable.RemoteReferenceTracker(self, 0, None,
"RIBroker")
self.remote_broker = referenceable.RemoteReference(tracker)
# connectionTimedOut is called in response to the Banana layer detecting
# the lack of connection activity
def connectionTimedOut(self):
self.shutdown()
def shutdown(self):
self.disconnectWatchers = []
self.transport.loseConnection()
def connectionLost(self, why):
self.disconnected = True
self.remote_broker = None
self.abandonAllRequests(why)
# TODO: why reset all the tables to something useable? There may be
# outstanding RemoteReferences that point to us, but I don't see why
# that requires all these empty dictionaries.
self.myReferenceByPUID = {}
self.myReferenceByCLID = {}
self.yourReferenceByCLID = {}
self.yourReferenceByURL = {}
self.myGifts = {}
self.myGiftsByGiftID = {}
dw, self.disconnectWatchers = self.disconnectWatchers, []
for (cb,args,kwargs) in dw:
eventually(cb, *args, **kwargs)
banana.Banana.connectionLost(self, why)
if self.tub:
            # TODO: remove the conditional. It is only here to accommodate
# some tests: test_pb.TestCall.testDisconnect[123]
self.tub.brokerDetached(self, why)
def notifyOnDisconnect(self, callback, *args, **kwargs):
marker = (callback, args, kwargs)
self.disconnectWatchers.append(marker)
return marker
def dontNotifyOnDisconnect(self, marker):
self.disconnectWatchers.remove(marker)
# methods to handle RemoteInterfaces
def getRemoteInterfaceByName(self, name):
return remoteinterface.RemoteInterfaceRegistry[name]
# methods to send my Referenceables to the other side
def getTrackerForMyReference(self, puid, obj):
tracker = self.myReferenceByPUID.get(puid)
if not tracker:
# need to add one
clid = self.nextCLID()
tracker = referenceable.ReferenceableTracker(self.tub,
obj, puid, clid)
self.myReferenceByPUID[puid] = tracker
self.myReferenceByCLID[clid] = tracker
return tracker
def getTrackerForMyCall(self, puid, obj):
# just like getTrackerForMyReference, but with a negative clid
tracker = self.myReferenceByPUID.get(puid)
if not tracker:
# need to add one
clid = self.nextCLID()
clid = -clid
tracker = referenceable.ReferenceableTracker(self.tub,
obj, puid, clid)
self.myReferenceByPUID[puid] = tracker
self.myReferenceByCLID[clid] = tracker
return tracker
# methods to handle inbound 'my-reference' sequences
def getTrackerForYourReference(self, clid, interfaceName=None, url=None):
"""The far end holds a Referenceable and has just sent us a reference
to it (expressed as a small integer). If this is a new reference,
they will give us an interface name too, and possibly a global URL
for it. Obtain a RemoteReference object (creating it if necessary) to
give to the local recipient.
The sender remembers that we hold a reference to their object. When
our RemoteReference goes away, we send a decref message to them, so
they can possibly free their object. """
assert type(interfaceName) is str or interfaceName is None
if url is not None:
assert type(url) is str
tracker = self.yourReferenceByCLID.get(clid)
if not tracker:
# TODO: translate interfaceNames to RemoteInterfaces
if clid >= 0:
trackerclass = referenceable.RemoteReferenceTracker
else:
trackerclass = referenceable.RemoteMethodReferenceTracker
tracker = trackerclass(self, clid, url, interfaceName)
self.yourReferenceByCLID[clid] = tracker
if url:
self.yourReferenceByURL[url] = tracker
return tracker
def freeYourReference(self, tracker, count):
# this is called when the RemoteReference is deleted
if not self.remote_broker: # tests do not set this up
self.freeYourReferenceTracker(None, tracker)
return
try:
rb = self.remote_broker
# TODO: do we want callRemoteOnly here? is there a way we can
# avoid wanting to know when the decref has completed? Only if we
# send the interface list and URL on every occurrence of the
# my-reference sequence. Either A) we use callRemote("decref")
# and wait until the ack to free the tracker, or B) we use
# callRemoteOnly("decref") and free the tracker right away. In
# case B, the far end has no way to know that we've just freed
# the tracker and will therefore forget about everything they
# told us (including the interface list), so they cannot
# accurately do anything special on the "first" send of this
# reference. Which means that if we do B, we must either send
# that extra information on every my-reference sequence, or do
# without it, or make it optional, or retrieve it separately, or
# something.
# rb.callRemoteOnly("decref", clid=tracker.clid, count=count)
# self.freeYourReferenceTracker('bogus', tracker)
# return
d = rb.callRemote("decref", clid=tracker.clid, count=count)
# if the connection was lost before we can get an ack, we're
# tearing this down anyway
d.addErrback(lambda f: f.trap(DeadReferenceError))
d.addErrback(lambda f: f.trap(error.ConnectionLost))
d.addErrback(lambda f: f.trap(error.ConnectionDone))
# once the ack comes back, or if we know we'll never get one,
# release the tracker
d.addCallback(self.freeYourReferenceTracker, tracker)
except:
log.msg("failure during freeRemoteReference")
log.err()
def freeYourReferenceTracker(self, res, tracker):
if tracker.received_count != 0:
return
if self.yourReferenceByCLID.has_key(tracker.clid):
del self.yourReferenceByCLID[tracker.clid]
if tracker.url and self.yourReferenceByURL.has_key(tracker.url):
del self.yourReferenceByURL[tracker.url]
# methods to handle inbound 'your-reference' sequences
def getMyReferenceByCLID(self, clid):
"""clid is the connection-local ID of the Referenceable the other
end is trying to invoke or point to. If it is a number, they want an
implicitly-created per-connection object that we sent to them at
some point in the past. If it is a string, they want an object that
was registered with our Factory.
"""
obj = None
assert isinstance(clid, (int, long))
if clid == 0:
return self
return self.myReferenceByCLID[clid].obj
# obj = IReferenceable(obj)
# assert isinstance(obj, pb.Referenceable)
# obj needs .getMethodSchema, which needs .getArgConstraint
def remote_decref(self, clid, count):
# invoked when the other side sends us a decref message
assert isinstance(clid, (int, long))
assert clid != 0
tracker = self.myReferenceByCLID[clid]
done = tracker.decref(count)
if done:
del self.myReferenceByPUID[tracker.puid]
del self.myReferenceByCLID[clid]
# methods to send RemoteReference 'gifts' to third-parties
def makeGift(self, rref):
# return the giftid
broker, clid = rref.tracker.broker, rref.tracker.clid
i = (broker, clid)
old = self.myGifts.get(i)
if old:
rref, giftID, count = old
self.myGifts[i] = (rref, giftID, count+1)
else:
giftID = self.nextGiftID()
self.myGiftsByGiftID[giftID] = i
self.myGifts[i] = (rref, giftID, 1)
return giftID
def remote_decgift(self, giftID, count):
broker, clid = self.myGiftsByGiftID[giftID]
rref, giftID, gift_count = self.myGifts[(broker, clid)]
gift_count -= count
if gift_count == 0:
del self.myGiftsByGiftID[giftID]
del self.myGifts[(broker, clid)]
else:
self.myGifts[(broker, clid)] = (rref, giftID, gift_count)
# methods to deal with URLs
def getYourReferenceByName(self, name):
d = self.remote_broker.callRemote("getReferenceByName", name=name)
return d
def remote_getReferenceByName(self, name):
return self.tub.getReferenceForName(name)
# remote-method-invocation methods, calling side, invoked by
# RemoteReference.callRemote and CallSlicer
def newRequestID(self):
if self.disconnected:
raise DeadReferenceError("Calling Stale Broker")
return self.nextReqID()
def addRequest(self, req):
req.broker = self
self.waitingForAnswers[req.reqID] = req
def removeRequest(self, req):
del self.waitingForAnswers[req.reqID]
def getRequest(self, reqID):
# invoked by AnswerUnslicer and ErrorUnslicer
try:
return self.waitingForAnswers[reqID]
except KeyError:
raise Violation("non-existent reqID '%d'" % reqID)
def abandonAllRequests(self, why):
for req in self.waitingForAnswers.values():
req.fail(why)
self.waitingForAnswers = {}
# target-side, invoked by CallUnslicer
def getRemoteInterfaceByName(self, riname):
# this lives in the broker because it ought to be per-connection
return remoteinterface.RemoteInterfaceRegistry[riname]
def getSchemaForMethod(self, rifaces, methodname):
# this lives in the Broker so it can override the resolution order,
# not that overlapping RemoteInterfaces should be allowed to happen
# all that often
for ri in rifaces:
m = ri.get(methodname)
if m:
return m
return None
def scheduleCall(self, delivery):
self.inboundDeliveryQueue.append(delivery)
eventually(self.doNextCall)
def doNextCall(self, ignored=None):
if not self.inboundDeliveryQueue:
return
nextCall = self.inboundDeliveryQueue[0]
if nextCall.isRunnable():
# remove it and arrange to run again soon
self.inboundDeliveryQueue.pop(0)
delivery = nextCall
if self.inboundDeliveryQueue:
eventually(self.doNextCall)
# now perform the actual delivery
d = defer.maybeDeferred(self._doCall, delivery)
d.addCallback(self._callFinished, delivery)
d.addErrback(self.callFailed, delivery.reqID, delivery)
return
# arrange to wake up when the next call becomes runnable
d = nextCall.whenRunnable()
d.addCallback(self.doNextCall)
def _doCall(self, delivery):
obj = delivery.obj
assert delivery.allargs.isReady()
args = delivery.allargs.args
kwargs = delivery.allargs.kwargs
for i in args + kwargs.values():
assert not isinstance(i, defer.Deferred)
if delivery.methodSchema:
# we asked about each argument on the way in, but ask again so
# they can look for missing arguments. TODO: see if we can remove
# the redundant per-argument checks.
delivery.methodSchema.checkAllArgs(args, kwargs, True)
# interesting case: if the method completes successfully, but
# our schema prohibits us from sending the result (perhaps the
# method returned an int but the schema insists upon a string).
# TODO: move the return-value schema check into
# Referenceable.doRemoteCall, so the exception's traceback will be
# attached to the object that caused it
if delivery.methodname is None:
assert callable(obj)
return obj(*args, **kwargs)
else:
obj = ipb.IRemotelyCallable(obj)
return obj.doRemoteCall(delivery.methodname, args, kwargs)
def _callFinished(self, res, delivery):
reqID = delivery.reqID
if reqID == 0:
return
methodSchema = delivery.methodSchema
assert self.activeLocalCalls[reqID]
if methodSchema:
try:
methodSchema.checkResults(res, False) # may raise Violation
except Violation, v:
v.prependLocation("in return value of %s.%s" %
(delivery.obj, methodSchema.name))
raise
answer = call.AnswerSlicer(reqID, res)
# once the answer has started transmitting, any exceptions must be
# logged and dropped, and not turned into an Error to be sent.
try:
self.send(answer)
# TODO: .send should return a Deferred that fires when the last
# byte has been queued, and we should delete the local note then
except:
log.err()
del self.activeLocalCalls[reqID]
def callFailed(self, f, reqID, delivery=None):
# this may be called either when an inbound schema is violated, or
# when the method is run and raises an exception. If a Violation is
# raised after we receive the reqID but before we've actually invoked
# the method, we are called by CallUnslicer.reportViolation and don't
# get a delivery= argument.
if delivery:
if (self.tub and self.tub.logLocalFailures) or not self.tub:
# the 'not self.tub' case is for unit tests
delivery.logFailure(f)
if reqID != 0:
assert self.activeLocalCalls[reqID]
self.send(call.ErrorSlicer(reqID, f))
del self.activeLocalCalls[reqID]
# this loopback stuff is based upon twisted.protocols.loopback, except that
# we use it for real, not just for testing. The IConsumer stuff hasn't been
# tested at all.
class _LoopbackAddress(object):
implements(twinterfaces.IAddress)
class LoopbackTransport(object):
# we always create these in pairs, with .peer pointing at each other
implements(twinterfaces.ITransport, twinterfaces.IConsumer)
producer = None
def __init__(self):
self.connected = True
def setPeer(self, peer):
self.peer = peer
def write(self, bytes):
eventually(self.peer.dataReceived, bytes)
def writeSequence(self, iovec):
self.write(''.join(iovec))
def dataReceived(self, data):
if self.connected:
self.protocol.dataReceived(data)
def loseConnection(self, _connDone=connectionDone):
if not self.connected:
return
self.connected = False
eventually(self.peer.connectionLost, _connDone)
eventually(self.protocol.connectionLost, _connDone)
def connectionLost(self, reason):
if not self.connected:
return
self.connected = False
self.protocol.connectionLost(reason)
def getPeer(self):
return _LoopbackAddress()
def getHost(self):
return _LoopbackAddress()
# IConsumer
def registerProducer(self, producer, streaming):
assert self.producer is None
self.producer = producer
self.streamingProducer = streaming
self._pollProducer()
def unregisterProducer(self):
assert self.producer is not None
self.producer = None
def _pollProducer(self):
if self.producer is not None and not self.streamingProducer:
self.producer.resumeProducing()
import debug
class LoggingBroker(debug.LoggingBananaMixin, Broker):
pass

View File

@ -0,0 +1,781 @@
from cStringIO import StringIO
from twisted.python import failure, log, reflect
from twisted.internet import defer
from foolscap import copyable, slicer, tokens
from foolscap.eventual import eventually
from foolscap.copyable import AttributeDictConstraint
from foolscap.constraint import StringConstraint
from foolscap.slicers.list import ListConstraint
from tokens import BananaError, Violation
class FailureConstraint(AttributeDictConstraint):
opentypes = [("copyable", "twisted.python.failure.Failure")]
name = "FailureConstraint"
klass = failure.Failure
def __init__(self):
attrs = [('type', StringConstraint(200)),
('value', StringConstraint(1000)),
('traceback', StringConstraint(2000)),
('parents', ListConstraint(StringConstraint(200))),
]
AttributeDictConstraint.__init__(self, *attrs)
def checkObject(self, obj, inbound):
if not isinstance(obj, self.klass):
raise Violation("is not an instance of %s" % self.klass)
class PendingRequest(object):
# this object is a local representation of a message we have sent to
# someone else, that will be executed on their end.
active = True
methodName = None # for debugging
def __init__(self, reqID, rref=None):
self.reqID = reqID
self.rref = rref # keep it alive
self.broker = None # if set, the broker knows about us
self.deferred = defer.Deferred()
self.constraint = None # this constrains the results
def setConstraint(self, constraint):
self.constraint = constraint
def complete(self, res):
if self.broker:
self.broker.removeRequest(self)
if self.active:
self.active = False
self.deferred.callback(res)
else:
log.msg("PendingRequest.complete called on an inactive request")
def fail(self, why):
if self.active:
if self.broker:
self.broker.removeRequest(self)
self.active = False
self.failure = why
if (self.broker and
self.broker.tub and
self.broker.tub.logRemoteFailures):
log.msg("an outbound callRemote (that we sent to someone "
"else) failed on the far end")
log.msg(" reqID=%d, rref=%s, methname=%s"
% (self.reqID, self.rref, self.methodName))
stack = why.getTraceback()
# TODO: include the first few letters of the remote tubID in
# this REMOTE tag
stack = "REMOTE: " + stack.replace("\n", "\nREMOTE: ")
log.msg(" the failure was:")
log.msg(stack)
self.deferred.errback(why)
else:
log.msg("multiple failures")
log.msg("first one was:", self.failure)
log.msg("this one was:", why)
log.err("multiple failures indicate a problem")
class ArgumentSlicer(slicer.ScopedSlicer):
opentype = ('arguments',)
def __init__(self, args, kwargs):
slicer.ScopedSlicer.__init__(self, None)
self.args = args
self.kwargs = kwargs
self.which = ""
def sliceBody(self, streamable, banana):
yield len(self.args)
for i,arg in enumerate(self.args):
self.which = "arg[%d]" % i
yield arg
keys = self.kwargs.keys()
keys.sort()
for argname in keys:
self.which = "arg[%s]" % argname
yield argname
yield self.kwargs[argname]
def describe(self):
return "<%s>" % self.which
class CallSlicer(slicer.ScopedSlicer):
opentype = ('call',)
def __init__(self, reqID, clid, methodname, args, kwargs):
slicer.ScopedSlicer.__init__(self, None)
self.reqID = reqID
self.clid = clid
self.methodname = methodname
self.args = args
self.kwargs = kwargs
def sliceBody(self, streamable, banana):
yield self.reqID
yield self.clid
yield self.methodname
yield ArgumentSlicer(self.args, self.kwargs)
def describe(self):
return "<call-%s-%s-%s>" % (self.reqID, self.clid, self.methodname)
class InboundDelivery:
"""An inbound message that has not yet been delivered.
This is created when a 'call' sequence has finished being received. The
Broker will add it to a queue. The delivery at the head of the queue is
serviced when all of its arguments have been resolved.
The only way that the arguments might not all be available is if one of
the Unslicers which created them has provided a 'ready_deferred' along
with the prospective object. The only standard Unslicer which does this
is the TheirReferenceUnslicer, which handles introductions. (custom
Unslicers might also provide a ready_deferred, for example a URL
slicer/unslicer pair for which the receiving end fetches the target of
the URL as its value, or a UnixFD slicer/unslicer that had to wait for a
side-channel unix-domain socket to finish transferring control over the
FD to the recipient before being ready).
Most Unslicers refuse to accept unready objects as their children (most
implementations of receiveChild() do 'assert ready_deferred is None').
The CallUnslicer is fairly unique in not rejecting such objects.
We do require, however, that all of the arguments be at least
referenceable. This is not generally a problem: the only time an
unslicer's receiveChild() can get a non-referenceable object (represented
by a Deferred) is if that unslicer is participating in a reference cycle
that has not yet completed, and CallUnslicers only live at the top level,
above any cycles.
"""
def __init__(self, reqID, obj,
interface, methodname, methodSchema,
allargs):
self.reqID = reqID
self.obj = obj
self.interface = interface
self.methodname = methodname
self.methodSchema = methodSchema
self.allargs = allargs
        self.runnable = allargs.isReady()
def isRunnable(self):
if self.allargs.isReady():
return True
return False
def whenRunnable(self):
if self.allargs.isReady():
return defer.succeed(self)
d = self.allargs.whenReady()
d.addCallback(lambda res: self)
return d
def logFailure(self, f):
# called if tub.logLocalFailures is True
log.msg("an inbound callRemote that we executed (on behalf of "
"someone else) failed")
log.msg(" reqID=%d, rref=%s, methname=%s" %
(self.reqID, self.obj, self.methodname))
log.msg(" args=%s" % (self.allargs.args,))
log.msg(" kwargs=%s" % (self.allargs.kwargs,))
stack = f.getTraceback()
# TODO: trim stack to everything below Broker._doCall
stack = "LOCAL: " + stack.replace("\n", "\nLOCAL: ")
log.msg(" the failure was:")
log.msg(stack)
class ArgumentUnslicer(slicer.ScopedUnslicer):
methodSchema = None
def setConstraint(self, methodSchema):
self.methodSchema = methodSchema
def start(self, count):
self.numargs = None
self.args = []
self.kwargs = {}
self.argname = None
self.argConstraint = None
self.num_unreferenceable_children = 0
self.num_unready_children = 0
self.closed = False
def checkToken(self, typebyte, size):
if self.numargs is None:
# waiting for positional-arg count
if typebyte != tokens.INT:
raise BananaError("posarg count must be an INT")
return
if len(self.args) < self.numargs:
# waiting for a positional arg
if self.argConstraint:
self.argConstraint.checkToken(typebyte, size)
return
if self.argname is None:
# waiting for the name of a keyword arg
if typebyte not in (tokens.STRING, tokens.VOCAB):
raise BananaError("kwarg name must be a STRING")
# TODO: limit to longest argument name of the method?
return
# waiting for the value of a kwarg
if self.argConstraint:
self.argConstraint.checkToken(typebyte, size)
def doOpen(self, opentype):
if self.argConstraint:
self.argConstraint.checkOpentype(opentype)
unslicer = self.open(opentype)
if unslicer:
if self.argConstraint:
unslicer.setConstraint(self.argConstraint)
return unslicer
def receiveChild(self, token, ready_deferred=None):
if self.numargs is None:
# this token is the number of positional arguments
assert isinstance(token, int)
assert ready_deferred is None
self.numargs = token
if self.numargs:
ms = self.methodSchema
if ms:
accept, self.argConstraint = \
ms.getPositionalArgConstraint(0)
assert accept
return
if len(self.args) < self.numargs:
# this token is a positional argument
argvalue = token
argpos = len(self.args)
self.args.append(argvalue)
if isinstance(argvalue, defer.Deferred):
self.num_unreferenceable_children += 1
argvalue.addCallback(self.updateChild, argpos)
argvalue.addErrback(self.explode)
if ready_deferred:
self.num_unready_children += 1
ready_deferred.addCallback(self.childReady)
if len(self.args) < self.numargs:
# more to come
ms = self.methodSchema
if ms:
nextargnum = len(self.args)
accept, self.argConstraint = \
ms.getPositionalArgConstraint(nextargnum)
assert accept
return
if self.argname is None:
# this token is the name of a keyword argument
self.argname = token
# if the argname is invalid, this may raise Violation
ms = self.methodSchema
if ms:
accept, self.argConstraint = \
ms.getKeywordArgConstraint(self.argname,
self.numargs,
self.kwargs.keys())
assert accept
return
# this token is the value of a keyword argument
argvalue = token
self.kwargs[self.argname] = argvalue
if isinstance(argvalue, defer.Deferred):
self.num_unreferenceable_children += 1
argvalue.addCallback(self.updateChild, self.argname)
argvalue.addErrback(self.explode)
if ready_deferred:
self.num_unready_children += 1
ready_deferred.addCallback(self.childReady)
self.argname = None
return
def updateChild(self, obj, which):
# one of our arguments has just now become referenceable. Normal
# types can't trigger this (since the arguments to a method form a
# top-level serialization domain), but special Unslicers might. For
# example, the Gift unslicer will eventually provide us with a
# RemoteReference, but for now all we get is a Deferred as a
# placeholder.
if isinstance(which, int):
self.args[which] = obj
else:
self.kwargs[which] = obj
self.num_unreferenceable_children -= 1
self.checkComplete()
return obj
def childReady(self, obj):
self.num_unready_children -= 1
self.checkComplete()
return obj
def checkComplete(self):
# this is called each time one of our children gets updated or
# becomes ready (like when a Gift is finally resolved)
if not self.closed:
return
if self.num_unreferenceable_children:
return
if self.num_unready_children:
return
# yup, we're done. Notify anyone who is still waiting
for d in self.watchers:
eventually(d.callback, self)
del self.watchers
def receiveClose(self):
if (self.numargs is None or
len(self.args) < self.numargs or
self.argname is not None):
raise BananaError("'arguments' sequence ended too early")
self.closed = True
self.watchers = []
return self, None
def isReady(self):
assert self.closed
if self.num_unreferenceable_children:
return False
if self.num_unready_children:
return False
return True
def whenReady(self):
assert self.closed
if self.isReady():
return defer.succeed(self)
d = defer.Deferred()
self.watchers.append(d)
return d
def describe(self):
s = "<arguments"
if self.numargs is not None:
if len(self.args) < self.numargs:
s += " arg[%d]" % len(self.args)
else:
if self.argname is not None:
s += " arg[%s]" % self.argname
else:
s += " arg[?]"
if self.closed:
if self.isReady():
# waiting to be delivered
s += " ready"
else:
s += " waiting"
s += ">"
return s
class CallUnslicer(slicer.ScopedUnslicer):
def start(self, count):
# start=0:reqID, 1:objID, 2:methodname, 3: arguments
self.stage = 0
self.reqID = None
self.obj = None
self.interface = None
self.methodname = None
self.methodSchema = None # will be a MethodArgumentsConstraint
def checkToken(self, typebyte, size):
# TODO: limit strings by returning a number instead of None
if self.stage == 0:
if typebyte != tokens.INT:
raise BananaError("request ID must be an INT")
elif self.stage == 1:
if typebyte not in (tokens.INT, tokens.NEG):
raise BananaError("object ID must be an INT/NEG")
elif self.stage == 2:
if typebyte not in (tokens.STRING, tokens.VOCAB):
raise BananaError("method name must be a STRING")
# TODO: limit to longest method name of self.obj in the interface
elif self.stage == 3:
if typebyte != tokens.OPEN:
raise BananaError("arguments must be an 'arguments' sequence")
else:
raise BananaError("too many objects given to CallUnslicer")
def doOpen(self, opentype):
        # checkToken ensures that this can only happen when we're receiving
# an arguments object, so we don't have to bother checking self.stage
assert self.stage == 3
unslicer = self.open(opentype)
if self.methodSchema:
unslicer.setConstraint(self.methodSchema)
return unslicer
def reportViolation(self, f):
# if the Violation is because we received an ABORT, then we know
# that the sender knows there was a problem, so don't respond.
if f.value.args[0] == "ABORT received":
return f
# if the Violation was raised after we know the reqID, we can send
# back an Error.
if self.stage > 0:
self.broker.callFailed(f, self.reqID)
return f # give up our sequence
def receiveChild(self, token, ready_deferred=None):
assert not isinstance(token, defer.Deferred)
assert ready_deferred is None
#print "CallUnslicer.receiveChild [s%d]" % self.stage, repr(token)
if self.stage == 0: # reqID
# we don't yet know which reqID to send any failure to
self.reqID = token
self.stage = 1
if self.reqID != 0:
assert self.reqID not in self.broker.activeLocalCalls
self.broker.activeLocalCalls[self.reqID] = self
return
if self.stage == 1: # objID
# this might raise an exception if objID is invalid
self.objID = token
self.obj = self.broker.getMyReferenceByCLID(token)
#iface = self.broker.getRemoteInterfaceByName(token)
if self.objID < 0:
self.interface = None
else:
self.interface = self.obj.getInterface()
self.stage = 2
return
if self.stage == 2: # methodname
# validate the methodname, get the schema. This may raise an
# exception for unknown methods
# must find the schema, using the interfaces
# TODO: getSchema should probably be in an adapter instead of in
# a pb.Referenceable base class. Old-style (unconstrained)
# flavors.Referenceable should be adapted to something which
# always returns None
# TODO: make this faster. A likely optimization is to take a
# tuple of components.getInterfaces(obj) and use it as a cache
# key. It would be even faster to use obj.__class__, but that
# would probably violate the expectation that instances can
# define their own __implements__ (independently from their
# class). If this expectation were to go away, a quick
# obj.__class__ -> RemoteReferenceSchema cache could be built.
self.stage = 3
if self.objID < 0:
# the target is a bound method, ignore the methodname
self.methodSchema = getattr(self.obj, "methodSchema", None)
self.methodname = None # TODO: give it something useful
if self.broker.requireSchema and not self.methodSchema:
why = "This broker does not accept unconstrained " + \
"method calls"
raise Violation(why)
return
self.methodname = token
if self.interface:
# they are calling an interface+method pair
ms = self.interface.get(self.methodname)
if not ms:
why = "method '%s' not defined in %s" % \
(self.methodname, self.interface.__remote_name__)
raise Violation(why)
self.methodSchema = ms
return
if self.stage == 3: # arguments
assert isinstance(token, ArgumentUnslicer)
self.allargs = token
# queue the message. It will not be executed until all the
# arguments are ready. The .args list and .kwargs dict may change
# before then.
self.stage = 4
return
def receiveClose(self):
if self.stage != 4:
raise BananaError("'call' sequence ended too early")
# time to create the InboundDelivery object so we can queue it
delivery = InboundDelivery(self.reqID, self.obj,
self.interface, self.methodname,
self.methodSchema,
self.allargs)
return delivery, None
def describe(self):
s = "<methodcall"
if self.stage == 0:
pass
if self.stage >= 1:
s += " reqID=%d" % self.reqID
if self.stage >= 2:
s += " obj=%s" % (self.obj,)
ifacename = "[none]"
if self.interface:
ifacename = self.interface.__remote_name__
s += " iface=%s" % ifacename
if self.stage >= 3:
s += " methodname=%s" % self.methodname
s += ">"
return s
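# Illustrative sketch (inferred from the stage checks above, not used by the
# code): the token sequence that CallUnslicer and ArgumentUnslicer expect for
# a single method call. Angle-bracketed names are placeholders.
#
#   OPEN('call')
#     INT <reqID>
#     INT/NEG <objID>
#     STRING <methodname>
#     OPEN('arguments')
#       INT <number of positional args>
#       <each positional arg value>
#       STRING <kwarg name>, <kwarg value>     (repeated per keyword arg)
#     CLOSE
#   CLOSE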
class AnswerSlicer(slicer.ScopedSlicer):
opentype = ('answer',)
def __init__(self, reqID, results):
assert reqID != 0
slicer.ScopedSlicer.__init__(self, None)
self.reqID = reqID
self.results = results
def sliceBody(self, streamable, banana):
yield self.reqID
yield self.results
def describe(self):
return "<answer-%s>" % self.reqID
class AnswerUnslicer(slicer.ScopedUnslicer):
request = None
resultConstraint = None
haveResults = False
def checkToken(self, typebyte, size):
if self.request is None:
if typebyte != tokens.INT:
raise BananaError("request ID must be an INT")
elif not self.haveResults:
if self.resultConstraint:
try:
self.resultConstraint.checkToken(typebyte, size)
except Violation, v:
# improve the error message
if v.args:
# this += gives me a TypeError "object doesn't
# support item assignment", which confuses me
#v.args[0] += " in inbound method results"
why = v.args[0] + " in inbound method results"
v.args = why,
else:
v.args = ("in inbound method results",)
raise # this will errback the request
else:
raise BananaError("stop sending me stuff!")
def doOpen(self, opentype):
if self.resultConstraint:
self.resultConstraint.checkOpentype(opentype)
# TODO: improve the error message
unslicer = self.open(opentype)
if unslicer:
if self.resultConstraint:
unslicer.setConstraint(self.resultConstraint)
return unslicer
def receiveChild(self, token, ready_deferred=None):
assert not isinstance(token, defer.Deferred)
assert ready_deferred is None
if self.request == None:
reqID = token
# may raise Violation for bad reqIDs
self.request = self.broker.getRequest(reqID)
self.resultConstraint = self.request.constraint
else:
self.results = token
self.haveResults = True
def reportViolation(self, f):
# if the Violation was received after we got the reqID, we can tell
# the broker it was an error
if self.request != None:
self.request.fail(f)
return f # give up our sequence
def receiveClose(self):
self.request.complete(self.results)
return None, None
def describe(self):
if self.request:
return "Answer(req=%s)" % self.request.reqID
return "Answer(req=?)"
class ErrorSlicer(slicer.ScopedSlicer):
opentype = ('error',)
def __init__(self, reqID, f):
slicer.ScopedSlicer.__init__(self, None)
assert isinstance(f, failure.Failure)
self.reqID = reqID
self.f = f
def sliceBody(self, streamable, banana):
yield self.reqID
yield self.f
def describe(self):
return "<error-%s>" % self.reqID
class ErrorUnslicer(slicer.ScopedUnslicer):
request = None
fConstraint = FailureConstraint()
gotFailure = False
def checkToken(self, typebyte, size):
if self.request == None:
if typebyte != tokens.INT:
raise BananaError("request ID must be an INT")
elif not self.gotFailure:
self.fConstraint.checkToken(typebyte, size)
else:
raise BananaError("stop sending me stuff!")
def doOpen(self, opentype):
self.fConstraint.checkOpentype(opentype)
unslicer = self.open(opentype)
if unslicer:
unslicer.setConstraint(self.fConstraint)
return unslicer
def reportViolation(self, f):
# a failure while receiving the failure. A bit daft, really.
if self.request != None:
self.request.fail(f)
return f # give up our sequence
def receiveChild(self, token, ready_deferred=None):
assert not isinstance(token, defer.Deferred)
assert ready_deferred is None
if self.request == None:
reqID = token
# may raise BananaError for bad reqIDs
self.request = self.broker.getRequest(reqID)
else:
self.failure = token
self.gotFailure = True
def receiveClose(self):
self.request.fail(self.failure)
return None, None
def describe(self):
if self.request is None:
return "<error-?>"
return "<error-%s>" % self.request.reqID
# failures are sent as Copyables
class FailureSlicer(slicer.BaseSlicer):
slices = failure.Failure
classname = "twisted.python.failure.Failure"
def slice(self, streamable, banana):
self.streamable = streamable
yield 'copyable'
yield self.classname
state = self.getStateToCopy(self.obj, banana)
for k,v in state.iteritems():
yield k
yield v
def describe(self):
return "<%s>" % self.classname
def getStateToCopy(self, obj, broker):
#state = obj.__dict__.copy()
#state['tb'] = None
#state['frames'] = []
#state['stack'] = []
state = {}
if isinstance(obj.value, failure.Failure):
# TODO: how can this happen? I got rid of failure2Copyable, so
# if this case is possible, something needs to replace it
raise RuntimeError("not implemented yet")
#state['value'] = failure2Copyable(obj.value, banana.unsafeTracebacks)
else:
state['value'] = str(obj.value) # Exception instance
state['type'] = reflect.qual(obj.type) # Exception class
if broker.unsafeTracebacks:
io = StringIO()
obj.printTraceback(io)
state['traceback'] = io.getvalue()
# TODO: provide something with globals and locals and HTML and
# all that cool stuff
else:
state['traceback'] = 'Traceback unavailable\n'
if len(state['traceback']) > 1900:
state['traceback'] = (state['traceback'][:1900] +
"\n\n-- TRACEBACK TRUNCATED --\n")
state['parents'] = obj.parents
return state
class CopiedFailure(failure.Failure, copyable.RemoteCopyOldStyle):
# this is a RemoteCopyOldStyle because you can't raise new-style
# instances as exceptions.
"""I am a shadow of some remote Failure instance. I contain less
information than the original did.
You can still extract a (brief) printable traceback from me. My .parents
attribute is a list of strings describing the class of the exception
that I contain, just like the real Failure had, so my trap() and check()
methods work fine. My .type and .value attributes are string
representations of the original exception class and exception instance,
respectively. The most significant effect is that you cannot access
f.value.args, and should instead just use f.value .
My .frames and .stack attributes are empty, although this may change in
the future (and with the cooperation of the sender).
"""
nonCyclic = True
stateSchema = FailureConstraint()
def __init__(self):
copyable.RemoteCopyOldStyle.__init__(self)
def setCopyableState(self, state):
#self.__dict__.update(state)
self.__dict__ = state
# state includes: type, value, traceback, parents
#self.type = state['type']
#self.value = state['value']
#self.traceback = state['traceback']
#self.parents = state['parents']
self.tb = None
self.frames = []
self.stack = []
def __str__(self):
return "[CopiedFailure instance: %s]" % self.getBriefTraceback()
pickled = 1
def printTraceback(self, file=None, elideFrameworkCode=0,
detail='default'):
if file is None: file = log.logerr
file.write("Traceback from remote host -- ")
file.write(self.traceback)
copyable.registerRemoteCopy(FailureSlicer.classname, CopiedFailure)
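# A minimal usage sketch (hypothetical, not part of the original module):
# because a CopiedFailure keeps the .parents list of exception-class names,
# check() and trap() still work on the receiving side, but .value is only the
# string form of the original exception instance.
def _example_handle_remote_failure(f):
    # typically attached as an errback on a callRemote() Deferred
    if f.check(ValueError):
        print "remote ValueError:", f.value  # f.value is a string here
    else:
        f.printTraceback()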

View File

@ -0,0 +1,354 @@
# This provides a base for the various Constraint subclasses to use. Those
# Constraint subclasses live next to the slicers. It also contains
# Constraints for primitive types (int, str).
# This imports foolscap.tokens, but no other Foolscap modules.
import re
from zope.interface import implements, Interface
from foolscap.tokens import Violation, BananaError, SIZE_LIMIT, \
STRING, LIST, INT, NEG, LONGINT, LONGNEG, VOCAB, FLOAT, OPEN, \
tokenNames
everythingTaster = {
# he likes everything
STRING: SIZE_LIMIT,
LIST: None,
INT: None,
NEG: None,
LONGINT: SIZE_LIMIT,
LONGNEG: SIZE_LIMIT,
VOCAB: None,
FLOAT: None,
OPEN: None,
}
openTaster = {
OPEN: None,
}
nothingTaster = {}
class UnboundedSchema(Exception):
pass
class IConstraint(Interface):
pass
class IRemoteMethodConstraint(IConstraint):
def getPositionalArgConstraint(argnum):
"""Return the constraint for posargs[argnum]. This is called on
inbound methods when receiving positional arguments. This returns a
tuple of (accept, constraint), where accept=False means the argument
should be rejected immediately, regardless of what type it might be."""
def getKeywordArgConstraint(argname, num_posargs=0, previous_kwargs=[]):
"""Return the constraint for kwargs[argname]. The other arguments are
used to handle mixed positional and keyword arguments. Returns a
tuple of (accept, constraint)."""
def checkAllArgs(args, kwargs, inbound):
"""Submit all argument values for checking. When inbound=True, this
is called after the arguments have been deserialized, but before the
method is invoked. When inbound=False, this is called just inside
callRemote(), as soon as the target object (and hence the remote
method constraint) is located.
This should either raise Violation or return None."""
pass
def getResponseConstraint():
"""Return an IConstraint-providing object to enforce the response
constraint. This is called on outbound method calls so that when the
response starts to come back, we can start enforcing the appropriate
constraint right away."""
def checkResults(results, inbound):
"""Inspect the results of invoking a method call. inbound=False is
used on the side that hosts the Referenceable, just after the target
method has provided a value. inbound=True is used on the
RemoteReference side, just after it has finished deserializing the
response.
This should either raise Violation or return None."""
class Constraint:
"""
Each __schema__ attribute is turned into an instance of this class, and
is eventually given to the unserializer (the 'Unslicer') to enforce as
the tokens are arriving off the wire.
"""
implements(IConstraint)
taster = everythingTaster
"""the Taster is a dict that specifies which basic token types are
accepted. The keys are typebytes like INT and STRING, while the
values are size limits: the body portion of the token must not be
longer than LIMIT bytes.
"""
strictTaster = False
"""If strictTaster is True, taste violations are raised as BananaErrors
(indicating a protocol error) rather than a mere Violation.
"""
opentypes = None
"""opentypes is a list of currently acceptable OPEN token types. None
indicates that all types are accepted. An empty list indicates that no
OPEN tokens are accepted.
"""
name = None
"""Used to describe the Constraint in a Violation error message"""
def checkToken(self, typebyte, size):
"""Check the token type. Raise an exception if it is not accepted
right now, or if the body-length limit is exceeded."""
limit = self.taster.get(typebyte, "not in list")
if limit == "not in list":
if self.strictTaster:
raise BananaError("invalid token type")
else:
raise Violation("%s token rejected by %s" % \
(tokenNames[typebyte], self.name))
if limit and size > limit:
raise Violation("token too large: %d>%d" % (size, limit))
def setNumberTaster(self, maxValue):
self.taster = {INT: None,
NEG: None,
LONGINT: None, # TODO
LONGNEG: None,
FLOAT: None,
}
def checkOpentype(self, opentype):
"""Check the OPEN type (the tuple of Index Tokens). Raise an
exception if it is not accepted.
"""
if self.opentypes == None:
return
for o in self.opentypes:
if len(o) == len(opentype):
if o == opentype:
return
if len(o) > len(opentype):
# we might have a partial match: they haven't flunked yet
if opentype == o[:len(opentype)]:
return # still in the running
print "opentype %s, self.opentypes %s" % (opentype, self.opentypes)
raise Violation, "unacceptable OPEN type '%s'" % (opentype,)
def checkObject(self, obj, inbound):
"""Validate an existing object. Usually objects are validated as
their tokens come off the wire, but pre-existing objects may be
added to containers if a REFERENCE token arrives which points to
        them. The older objects were validated as they arrived (by a
different schema), but now they must be re-validated by the new
schema.
A more naive form of validation would just accept the entire object
tree into memory and then run checkObject() on the result. This
validation is too late: it is vulnerable to both DoS and
made-you-run-code attacks.
If inbound=True, this object is arriving over the wire. If
inbound=False, this is being called to validate an existing object
before it is sent over the wire. This is done as a courtesy to the
remote end, and to improve debuggability.
Most constraints can use the same checker for both inbound and
outbound objects.
"""
# this default form passes everything
return
def maxSize(self, seen=None):
"""
I help a caller determine how much memory could be consumed by the
input stream while my constraint is in effect.
My constraint will be enforced against the bytes that arrive over
the wire. Eventually I will either accept the incoming bytes and my
Unslicer will provide an object to its parent (including any
subobjects), or I will raise a Violation exception which will kick
my Unslicer into 'discard' mode.
I define maxSizeAccept as the maximum number of bytes that will be
received before the stream is accepted as valid. maxSizeReject is
the maximum that will be received before a Violation is raised. The
max of the two provides an upper bound on single objects. For
container objects, the upper bound is probably (n-1)*accept +
reject, because there can only be one outstanding
about-to-be-rejected object at any time.
I return (maxSizeAccept, maxSizeReject).
I raise an UnboundedSchema exception if there is no bound.
"""
raise UnboundedSchema
def maxDepth(self):
"""I return the greatest number Slicer objects that might exist on
the SlicerStack (or Unslicers on the UnslicerStack) while processing
an object which conforms to this constraint. This is effectively the
maximum depth of the object tree. I raise UnboundedSchema if there is
no bound.
"""
raise UnboundedSchema
COUNTERBYTES = 64 # max size of opencount
def OPENBYTES(self, dummy):
# an OPEN,type,CLOSE sequence could consume:
# 64 (header)
# 1 (OPEN)
# 64 (header)
# 1 (STRING)
# 1000 (value)
# or
# 64 (header)
# 1 (VOCAB)
# 64 (header)
# 1 (CLOSE)
# for a total of 65+1065+65 = 1195
return self.COUNTERBYTES+1 + 64+1+1000 + self.COUNTERBYTES+1
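# Illustrative sketch (an assumed subclass, not part of the original module):
# a custom Constraint typically just narrows the taster to the token types it
# accepts and overrides checkObject() to validate the finished value.
class _ExampleEvenIntConstraint(Constraint):
    name = "EvenIntConstraint"
    opentypes = []                    # no OPEN sequences, only simple tokens
    taster = {INT: None, NEG: None}   # accept positive/negative ints, any size
    def checkObject(self, obj, inbound):
        if not isinstance(obj, (int, long)):
            raise Violation("not an integer")
        if obj % 2:
            raise Violation("odd numbers are rejected by %s" % self.name)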
class OpenerConstraint(Constraint):
taster = openTaster
class Any(Constraint):
pass # accept everything
# constraints which describe individual banana tokens
class StringConstraint(Constraint):
opentypes = [] # redundant, as taster doesn't accept OPEN
name = "StringConstraint"
def __init__(self, maxLength=1000, minLength=0, regexp=None):
self.maxLength = maxLength
self.minLength = minLength
        # regexp can either be a string or an already-compiled pattern object.
        # re.compile appears to notice compiled patterns and pass them
        # through unchanged.
self.regexp = None
if regexp:
self.regexp = re.compile(regexp)
self.taster = {STRING: self.maxLength,
VOCAB: None}
def checkObject(self, obj, inbound):
if not isinstance(obj, (str, unicode)):
raise Violation("not a String")
if self.maxLength != None and len(obj) > self.maxLength:
raise Violation("string too long (%d > %d)" %
(len(obj), self.maxLength))
if len(obj) < self.minLength:
raise Violation("string too short (%d < %d)" %
(len(obj), self.minLength))
if self.regexp:
if not self.regexp.search(obj):
raise Violation("regexp failed to match")
def maxSize(self, seen=None):
if self.maxLength == None:
raise UnboundedSchema
return 64+1+self.maxLength
def maxDepth(self, seen=None):
return 1
class IntegerConstraint(Constraint):
opentypes = [] # redundant
# taster set in __init__
name = "IntegerConstraint"
def __init__(self, maxBytes=-1):
# -1 means s_int32_t: INT/NEG instead of INT/NEG/LONGINT/LONGNEG
# None means unlimited
assert maxBytes == -1 or maxBytes == None or maxBytes >= 4
self.maxBytes = maxBytes
self.taster = {INT: None, NEG: None}
if maxBytes != -1:
self.taster[LONGINT] = maxBytes
self.taster[LONGNEG] = maxBytes
def checkObject(self, obj, inbound):
if not isinstance(obj, (int, long)):
raise Violation("not a number")
if self.maxBytes == -1:
if obj >= 2**31 or obj < -2**31:
raise Violation("number too large")
elif self.maxBytes != None:
if abs(obj) >= 2**(8*self.maxBytes):
raise Violation("number too large")
def maxSize(self, seen=None):
if self.maxBytes == None:
raise UnboundedSchema
if self.maxBytes == -1:
return 64+1
return 64+1+self.maxBytes
def maxDepth(self, seen=None):
return 1
class NumberConstraint(IntegerConstraint):
name = "NumberConstraint"
def __init__(self, maxBytes=1024):
assert maxBytes != -1 # not valid here
IntegerConstraint.__init__(self, maxBytes)
self.taster[FLOAT] = None
def checkObject(self, obj, inbound):
if isinstance(obj, float):
return
IntegerConstraint.checkObject(self, obj, inbound)
def maxSize(self, seen=None):
# floats are packed into 8 bytes, so the shortest FLOAT token is
# 64+1+8
intsize = IntegerConstraint.maxSize(self, seen)
return max(64+1+8, intsize)
def maxDepth(self, seen=None):
return 1
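# A small usage sketch (hypothetical values): constraints can also be
# exercised directly through checkObject(), which returns None on success and
# raises Violation on failure.
def _example_check_primitives():
    name_c = StringConstraint(maxLength=20)
    age_c = IntegerConstraint()                   # default: s_int32_t range
    name_c.checkObject("alice", inbound=True)     # passes
    age_c.checkObject(42, inbound=True)           # passes
    try:
        age_c.checkObject(2**40, inbound=True)    # too large: raises Violation
    except Violation:
        pass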
#TODO
class Shared(Constraint):
name = "Shared"
def __init__(self, constraint, refLimit=None):
self.constraint = IConstraint(constraint)
self.refLimit = refLimit
def maxSize(self, seen=None):
if not seen: seen = []
if self in seen:
raise UnboundedSchema # recursion
seen.append(self)
return self.constraint.maxSize(seen)
def maxDepth(self, seen=None):
if not seen: seen = []
if self in seen:
raise UnboundedSchema # recursion
seen.append(self)
return self.constraint.maxDepth(seen)
#TODO: might be better implemented with a .optional flag
class Optional(Constraint):
name = "Optional"
def __init__(self, constraint, default):
self.constraint = IConstraint(constraint)
self.default = default
def maxSize(self, seen=None):
if not seen: seen = []
if self in seen:
raise UnboundedSchema # recursion
seen.append(self)
return self.constraint.maxSize(seen)
def maxDepth(self, seen=None):
if not seen: seen = []
if self in seen:
raise UnboundedSchema # recursion
seen.append(self)
return self.constraint.maxDepth(seen)

View File

@ -0,0 +1,434 @@
# -*- test-case-name: foolscap.test.test_copyable -*-
# this module is responsible for all copy-by-value objects
from zope.interface import interface, implements
from twisted.python import reflect, log
from twisted.python.components import registerAdapter
from twisted.internet import defer
import slicer, tokens
from tokens import BananaError, Violation
from foolscap.constraint import OpenerConstraint, IConstraint, \
StringConstraint, UnboundedSchema, Optional
Interface = interface.Interface
############################################################
# the first half of this file is sending/serialization
class ICopyable(Interface):
"""I represent an object which is passed-by-value across PB connections.
"""
def getTypeToCopy():
"""Return a string which names the class. This string must match the
one that gets registered at the receiving end. This is typically a
URL of some sort, in a namespace which you control."""
def getStateToCopy():
"""Return a state dictionary (with plain-string keys) which will be
serialized and sent to the remote end. This state object will be
given to the receiving object's setCopyableState method."""
class Copyable(object):
implements(ICopyable)
# you *must* set 'typeToCopy'
def getTypeToCopy(self):
try:
copytype = self.typeToCopy
except AttributeError:
raise RuntimeError("Copyable subclasses must specify 'typeToCopy'")
return copytype
def getStateToCopy(self):
return self.__dict__
class CopyableSlicer(slicer.BaseSlicer):
"""I handle ICopyable objects (things which are copied by value)."""
def slice(self, streamable, banana):
self.streamable = streamable
yield 'copyable'
copytype = self.obj.getTypeToCopy()
assert isinstance(copytype, str)
yield copytype
state = self.obj.getStateToCopy()
for k,v in state.iteritems():
yield k
yield v
def describe(self):
return "<%s>" % self.obj.getTypeToCopy()
registerAdapter(CopyableSlicer, ICopyable, tokens.ISlicer)
class Copyable2(slicer.BaseSlicer):
# I am my own Slicer. This has more methods than you'd usually want in a
# base class, but if you can't register an Adapter for a whole class
# hierarchy then you may have to use it.
def getTypeToCopy(self):
return reflect.qual(self.__class__)
def getStateToCopy(self):
return self.__dict__
def slice(self, streamable, banana):
self.streamable = streamable
yield 'instance'
yield self.getTypeToCopy()
yield self.getStateToCopy()
def describe(self):
return "<%s>" % self.getTypeToCopy()
#registerRemoteCopy(typename, factory)
#registerUnslicer(typename, factory)
def registerCopier(klass, copier):
"""This is a shortcut for arranging to serialize third-party clases.
'copier' must be a callable which accepts an instance of the class you
want to serialize, and returns a tuple of (typename, state_dictionary).
If it returns a typename of None, the original class's fully-qualified
classname is used.
"""
klassname = reflect.qual(klass)
class _CopierAdapter:
implements(ICopyable)
def __init__(self, original):
self.nameToCopy, self.state = copier(original)
if self.nameToCopy is None:
self.nameToCopy = klassname
def getTypeToCopy(self):
return self.nameToCopy
def getStateToCopy(self):
return self.state
registerAdapter(_CopierAdapter, klass, ICopyable)
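# A hypothetical usage sketch for registerCopier(): serialize a third-party
# class (the 'Point' class and the typename string below are made up for
# illustration) without subclassing Copyable. The receiving side still needs
# a matching registration for the same typename (e.g. via
# registerRemoteCopyFactory).
def _example_register_point_copier():
    class Point:                      # stands in for a class we do not control
        def __init__(self, x, y):
            self.x, self.y = x, y
    def copy_point(p):
        return ("example.com/point", {"x": p.x, "y": p.y})
    registerCopier(Point, copy_point)
    return Point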
############################################################
# beyond here is the receiving/deserialization side
class RemoteCopyUnslicer(slicer.BaseUnslicer):
attrname = None
attrConstraint = None
def __init__(self, factory, stateSchema):
self.factory = factory
self.schema = stateSchema
def start(self, count):
self.d = {}
self.count = count
self.deferred = defer.Deferred()
self.protocol.setObject(count, self.deferred)
def checkToken(self, typebyte, size):
if self.attrname == None:
if typebyte not in (tokens.STRING, tokens.VOCAB):
raise BananaError("RemoteCopyUnslicer keys must be STRINGs")
else:
if self.attrConstraint:
self.attrConstraint.checkToken(typebyte, size)
def doOpen(self, opentype):
if self.attrConstraint:
self.attrConstraint.checkOpentype(opentype)
unslicer = self.open(opentype)
if unslicer:
if self.attrConstraint:
unslicer.setConstraint(self.attrConstraint)
return unslicer
def receiveChild(self, obj, ready_deferred=None):
assert not isinstance(obj, defer.Deferred)
assert ready_deferred is None
if self.attrname == None:
attrname = obj
if self.d.has_key(attrname):
raise BananaError("duplicate attribute name '%s'" % attrname)
s = self.schema
if s:
accept, self.attrConstraint = s.getAttrConstraint(attrname)
assert accept
self.attrname = attrname
else:
if isinstance(obj, defer.Deferred):
# TODO: this is an artificial restriction, and it might
# be possible to remove it, but I need to think through
# it carefully first
raise BananaError("unreferenceable object in attribute")
self.setAttribute(self.attrname, obj)
self.attrname = None
self.attrConstraint = None
def setAttribute(self, name, value):
self.d[name] = value
def receiveClose(self):
try:
obj = self.factory(self.d)
except:
log.msg("%s.receiveClose: problem in factory %s" %
(self.__class__.__name__, self.factory))
log.err()
raise
self.protocol.setObject(self.count, obj)
self.deferred.callback(obj)
return obj, None
def describe(self):
if self.classname == None:
return "<??>"
me = "<%s>" % self.classname
if self.attrname is None:
return "%s.attrname??" % me
else:
return "%s.%s" % (me, self.attrname)
class NonCyclicRemoteCopyUnslicer(RemoteCopyUnslicer):
# The Deferred used in RemoteCopyUnslicer (used in case the RemoteCopy
# is participating in a reference cycle, say 'obj.foo = obj') makes it
# unsuitable for holding Failures (which cannot be passed through
# Deferred.callback). Use this class for Failures. It cannot handle
# reference cycles (they will cause a KeyError when the reference is
# followed).
def start(self, count):
self.d = {}
self.count = count
self.gettingAttrname = True
def receiveClose(self):
obj = self.factory(self.d)
return obj, None
class IRemoteCopy(Interface):
"""This interface defines what a RemoteCopy class must do. RemoteCopy
subclasses are used as factories to create objects that correspond to
Copyables sent over the wire.
Note that the constructor of an IRemoteCopy class will be called without
any arguments.
"""
def setCopyableState(statedict):
"""I accept an attribute dictionary name/value pairs and use it to
set my internal state.
Some of the values may be Deferreds, which are placeholders for the
as-yet-unreferenceable object which will eventually go there. If you
receive a Deferred, you are responsible for adding a callback to
update the attribute when it fires. [note:
RemoteCopyUnslicer.receiveChild currently has a restriction which
prevents this from happening, but that may go away in the future]
Some of the objects referenced by the attribute values may have
Deferreds in them (e.g. containers which reference recursive tuples).
Such containers are responsible for updating their own state when
those Deferreds fire, but until that point their state is still
subject to change. Therefore you must be careful about how much state
inspection you perform within this method."""
stateSchema = interface.Attribute("""I return an AttributeDictConstraint
object which places restrictions on incoming attribute values. These
restrictions are enforced as the tokens are received, before the state is
passed to setCopyableState.""")
# This maps typename to an Unslicer factory
CopyableRegistry = {}
def registerRemoteCopyUnslicerFactory(typename, unslicerfactory,
registry=None):
"""Tell PB that unslicerfactory can be used to handle Copyable objects
that provide a getTypeToCopy name of 'typename'. 'unslicerfactory' must
be a callable which takes no arguments and returns an object which
provides IUnslicer.
"""
assert callable(unslicerfactory)
# in addition, it must produce a tokens.IUnslicer . This is safe to do
# because Unslicers don't do anything significant when they are created.
test_unslicer = unslicerfactory()
assert tokens.IUnslicer.providedBy(test_unslicer)
assert type(typename) is str
if registry == None:
registry = CopyableRegistry
assert not registry.has_key(typename)
registry[typename] = unslicerfactory
# this keeps track of everything submitted to registerRemoteCopyFactory
debug_CopyableFactories = {}
def registerRemoteCopyFactory(typename, factory, stateSchema=None,
cyclic=True, registry=None):
"""Tell PB that 'factory' can be used to handle Copyable objects that
provide a getTypeToCopy name of 'typename'. 'factory' must be a callable
which accepts a state dictionary and returns a fully-formed instance.
'cyclic' is a boolean, which should be set to False to avoid using a
Deferred to provide the resulting RemoteCopy instance. This is needed to
deserialize Failures (or instances which inherit from one, like
CopiedFailure). In exchange for this, it cannot handle reference cycles.
"""
assert callable(factory)
debug_CopyableFactories[typename] = (factory, stateSchema, cyclic)
if cyclic:
def _RemoteCopyUnslicerFactory():
return RemoteCopyUnslicer(factory, stateSchema)
registerRemoteCopyUnslicerFactory(typename,
_RemoteCopyUnslicerFactory,
registry)
else:
def _RemoteCopyUnslicerFactoryNonCyclic():
return NonCyclicRemoteCopyUnslicer(factory, stateSchema)
registerRemoteCopyUnslicerFactory(typename,
_RemoteCopyUnslicerFactoryNonCyclic,
registry)
# this keeps track of everything submitted to registerRemoteCopy, which may
# be useful when you're wondering what's been auto-registered by the
# RemoteCopy metaclass magic
debug_RemoteCopyClasses = {}
def registerRemoteCopy(typename, remote_copy_class, registry=None):
"""Tell PB that remote_copy_class is the appropriate RemoteCopy class to
use when deserializing a Copyable sequence that is tagged with
'typename'. 'remote_copy_class' should be a RemoteCopy subclass or
implement the same interface, which means its constructor takes no
arguments and it has a setCopyableState(state) method to actually set the
instance's state after initialization. It must also have a nonCyclic
attribute.
"""
assert IRemoteCopy.implementedBy(remote_copy_class)
assert type(typename) is str
debug_RemoteCopyClasses[typename] = remote_copy_class
def _RemoteCopyFactory(state):
obj = remote_copy_class()
obj.setCopyableState(state)
return obj
registerRemoteCopyFactory(typename, _RemoteCopyFactory,
remote_copy_class.stateSchema,
not remote_copy_class.nonCyclic,
registry)
class RemoteCopyClass(type):
# auto-register RemoteCopy classes
def __init__(self, name, bases, dict):
type.__init__(self, name, bases, dict)
# don't try to register RemoteCopy itself
if name == "RemoteCopy" and _RemoteCopyBase in bases:
#print "not auto-registering %s %s" % (name, bases)
return
if "copytype" not in dict:
# TODO: provide a file/line-number for the class
raise RuntimeError("RemoteCopy subclass %s must specify 'copytype'"
% name)
copytype = dict['copytype']
if copytype:
registry = dict.get('copyableRegistry', None)
registerRemoteCopy(copytype, self, registry)
class _RemoteCopyBase:
implements(IRemoteCopy)
stateSchema = None # always a class attribute
nonCyclic = False
def __init__(self):
# the constructor will always be called without arguments
pass
def setCopyableState(self, state):
self.__dict__ = state
class RemoteCopyOldStyle(_RemoteCopyBase):
# note that these will not auto-register for you, because old-style
# classes do not do metaclass magic
copytype = None
class RemoteCopy(_RemoteCopyBase, object):
# Set 'copytype' to a unique string that is shared between the
# sender-side Copyable and the receiver-side RemoteCopy. This RemoteCopy
# subclass will be auto-registered using the 'copytype' name. Set
# copytype to None to disable auto-registration.
__metaclass__ = RemoteCopyClass
pass
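# A minimal sketch of the usual Copyable/RemoteCopy pairing (class names and
# the copytype string are made up for illustration; shown commented-out so
# nothing is registered at import time). The sender-side class sets
# typeToCopy, the receiver-side class sets the matching copytype and is
# auto-registered by the RemoteCopyClass metaclass above.
#
# class UserRecord(Copyable):
#     typeToCopy = "example.com/user-record"
#     def __init__(self, name):
#         self.name = name
#
# class RemoteUserRecord(RemoteCopy):
#     copytype = "example.com/user-record"
#     # the default setCopyableState copies the state dict into __dict__,
#     # so .name is available on the receiving side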
class AttributeDictConstraint(OpenerConstraint):
"""This is a constraint for dictionaries that are used for attributes.
All keys are short strings, and each value has a separate constraint.
It could be used to describe instance state, but could also be used
    to constrain arbitrary dictionaries with string keys.
Some special constraints are legal here: Optional.
"""
opentypes = [("attrdict",)]
name = "AttributeDictConstraint"
def __init__(self, *attrTuples, **kwargs):
self.ignoreUnknown = kwargs.get('ignoreUnknown', False)
self.acceptUnknown = kwargs.get('acceptUnknown', False)
self.keys = {}
for name, constraint in (list(attrTuples) +
kwargs.get('attributes', {}).items()):
assert name not in self.keys.keys()
self.keys[name] = IConstraint(constraint)
def maxSize(self, seen=None):
if not seen: seen = []
if self in seen:
raise UnboundedSchema # recursion
seen.append(self)
total = self.OPENBYTES("attributedict")
for name, constraint in self.keys.iteritems():
total += StringConstraint(len(name)).maxSize(seen)
total += constraint.maxSize(seen[:])
return total
def maxDepth(self, seen=None):
if not seen: seen = []
if self in seen:
raise UnboundedSchema # recursion
seen.append(self)
# all the attribute names are 1-deep, so the min depth of the dict
# items is 1. The other "1" is for the AttributeDict container itself
        return 1 + reduce(max, [c.maxDepth(seen[:])
                                for c in self.keys.itervalues()], 1)
def getAttrConstraint(self, attrname):
c = self.keys.get(attrname)
if c:
if isinstance(c, Optional):
c = c.constraint
return (True, c)
# unknown attribute
if self.ignoreUnknown:
return (False, None)
if self.acceptUnknown:
return (True, None)
raise Violation("unknown attribute '%s'" % attrname)
def checkObject(self, obj, inbound):
if type(obj) != type({}):
raise Violation, "'%s' (%s) is not a Dictionary" % (obj,
type(obj))
allkeys = self.keys.keys()
for k in obj.keys():
try:
constraint = self.keys[k]
allkeys.remove(k)
except KeyError:
if not self.ignoreUnknown:
raise Violation, "key '%s' not in schema" % k
else:
# hmm. kind of a soft violation. allow it for now.
pass
else:
constraint.checkObject(obj[k], inbound)
for k in allkeys[:]:
if isinstance(self.keys[k], Optional):
allkeys.remove(k)
if allkeys:
raise Violation("object is missing required keys: %s" % \
",".join(allkeys))

View File

@ -0,0 +1,96 @@
# -*- test-case-name: foolscap.test.test_crypto -*-
available = False # hack to deal with half-broken imports in python <2.4
from OpenSSL import SSL
# we try to use ssl support classes from Twisted, if it is new enough. If
# not, we pull them from a local copy of sslverify. The funny '_ssl' import
# stuff is used to appease pyflakes, which otherwise complains that we're
# redefining an imported name.
from twisted.internet import ssl
if hasattr(ssl, "DistinguishedName"):
# Twisted-2.5 will contain these names
_ssl = ssl
CertificateOptions = ssl.CertificateOptions
else:
# but it hasn't been released yet (as of 16-Sep-2006). Without them, we
# cannot use any encrypted Tubs. We fall back to using a private copy of
# sslverify.py, copied from the Divmod tree.
import sslverify
_ssl = sslverify
from sslverify import OpenSSLCertificateOptions as CertificateOptions
DistinguishedName = _ssl.DistinguishedName
KeyPair = _ssl.KeyPair
Certificate = _ssl.Certificate
PrivateCertificate = _ssl.PrivateCertificate
from twisted.internet import error
if hasattr(error, "CertificateError"):
# Twisted-2.4 contains this, and it is used by twisted.internet.ssl
CertificateError = error.CertificateError
else:
class CertificateError(Exception):
"""
We did not find a certificate where we expected to find one.
"""
from foolscap import base32
peerFromTransport = Certificate.peerFromTransport
class MyOptions(CertificateOptions):
def _makeContext(self):
ctx = CertificateOptions._makeContext(self)
def alwaysValidate(conn, cert, errno, depth, preverify_ok):
# This function is called to validate the certificate received by
# the other end. OpenSSL calls it multiple times, each time it
            # sees something funny, to ask if it should proceed.
# We do not care about certificate authorities or revocation
# lists, we just want to know that the certificate has a valid
# signature and follow the chain back to one which is
# self-signed. The TubID will be the digest of one of these
# certificates. We need to protect against forged signatures, but
# not the usual SSL concerns about invalid CAs or revoked
# certificates.
# these constants are from openssl-0.9.7g/crypto/x509/x509_vfy.h
# and do not appear to be exposed by pyopenssl. Ick. TODO. We
# could just always return '1' here (ignoring all errors), but I
# think that would ignore forged signatures too, which would
# obviously be a security hole.
things_are_ok = (0, # X509_V_OK
9, # X509_V_ERR_CERT_NOT_YET_VALID
10, # X509_V_ERR_CERT_HAS_EXPIRED
18, # X509_V_ERR_DEPTH_ZERO_SELF_SIGNED_CERT
19, # X509_V_ERR_SELF_SIGNED_CERT_IN_CHAIN
)
if errno in things_are_ok:
return 1
# TODO: log the details of the error, because otherwise they get
# lost in the PyOpenSSL exception that will eventually be raised
# (possibly OpenSSL.SSL.Error: certificate verify failed)
# I think that X509_V_ERR_CERT_SIGNATURE_FAILURE is the most
# obvious sign of hostile attack.
return 0
        # VERIFY_PEER means we ask the other end for their certificate.
# not adding VERIFY_FAIL_IF_NO_PEER_CERT means it's ok if they don't
# give us one (i.e. if an anonymous client connects to an
# authenticated server). I don't know what VERIFY_CLIENT_ONCE does.
ctx.set_verify(SSL.VERIFY_PEER |
#SSL.VERIFY_FAIL_IF_NO_PEER_CERT |
SSL.VERIFY_CLIENT_ONCE,
alwaysValidate)
return ctx
def digest32(colondigest):
digest = "".join([chr(int(c,16)) for c in colondigest.split(":")])
digest = base32.encode(digest)
return digest
available = True

View File

@ -0,0 +1,209 @@
# miscellaneous helper classes for debugging and testing, not needed for
# normal use
from cStringIO import StringIO
from foolscap import banana, tokens, storage
Banana = banana.Banana
StorageBanana = storage.StorageBanana
from foolscap.slicers.root import RootSlicer
class LoggingBananaMixin:
# this variant prints a log of tokens sent and received, if you set the
# .doLog attribute to a string (like 'tx' or 'rx')
doLog = None
### send logging
def sendOpen(self):
if self.doLog: print "[%s] OPEN(%d)" % (self.doLog, self.openCount)
return Banana.sendOpen(self)
def sendToken(self, obj):
if self.doLog:
if type(obj) == str:
print "[%s] \"%s\"" % (self.doLog, obj)
elif type(obj) == int:
print "[%s] %s" % (self.doLog, obj)
else:
print "[%s] ?%s?" % (self.doLog, obj)
return Banana.sendToken(self, obj)
def sendClose(self, openID):
if self.doLog: print "[%s] CLOSE(%d)" % (self.doLog, openID)
return Banana.sendClose(self, openID)
def sendAbort(self, count):
if self.doLog: print "[%s] ABORT(%d)" % (self.doLog, count)
return Banana.sendAbort(self, count)
### receive logging
def rxStackSummary(self):
return ",".join([s.__class__.__name__ for s in self.receiveStack])
def handleOpen(self, openCount, indexToken):
if self.doLog:
stack = self.rxStackSummary()
print "[%s] got OPEN(%d,%s) %s" % \
(self.doLog, openCount, indexToken, stack)
return Banana.handleOpen(self, openCount, indexToken)
def handleToken(self, token, ready_deferred=None):
if self.doLog:
if type(token) == str:
s = '"%s"' % token
elif type(token) == int:
s = '%s' % token
elif isinstance(token, tokens.BananaFailure):
s = 'UF:%s' % (token,)
else:
s = '?%s?' % (token,)
print "[%s] got %s %s" % (self.doLog, s, self.rxStackSummary())
return Banana.handleToken(self, token, ready_deferred)
def handleClose(self, closeCount):
if self.doLog:
stack = self.rxStackSummary()
print "[%s] got CLOSE(%d): %s" % (self.doLog, closeCount, stack)
return Banana.handleClose(self, closeCount)
class LoggingBanana(LoggingBananaMixin, Banana):
pass
class LoggingStorageBanana(LoggingBananaMixin, StorageBanana):
pass
class TokenTransport:
disconnectReason = None
def loseConnection(self):
pass
class TokenBananaMixin:
# this class accumulates tokens instead of turning them into bytes
def __init__(self):
self.tokens = []
self.connectionMade()
self.transport = TokenTransport()
def sendOpen(self):
openID = self.openCount
self.openCount += 1
self.sendToken(("OPEN", openID))
return openID
def sendToken(self, token):
#print token
self.tokens.append(token)
def sendClose(self, openID):
self.sendToken(("CLOSE", openID))
def sendAbort(self, count=0):
self.sendToken(("ABORT",))
def sendError(self, msg):
#print "TokenBanana.sendError(%s)" % msg
pass
def getTokens(self):
self.produce()
assert(len(self.slicerStack) == 1)
assert(isinstance(self.slicerStack[0][0], RootSlicer))
return self.tokens
# TokenReceiveBanana
def processTokens(self, tokens):
self.object = None
for t in tokens:
self.receiveToken(t)
return self.object
def receiveToken(self, token):
# insert artificial tokens into receiveData. Once upon a time this
# worked by directly calling the commented-out functions, but schema
# checking and abandonUnslicer made that unfeasible.
#if self.debug:
# print "receiveToken(%s)" % (token,)
if type(token) == type(()):
if token[0] == "OPEN":
count = token[1]
assert count < 128
b = ( chr(count) + tokens.OPEN )
self.injectData(b)
#self.handleOpen(count, opentype)
elif token[0] == "CLOSE":
count = token[1]
assert count < 128
b = chr(count) + tokens.CLOSE
self.injectData(b)
#self.handleClose(count)
elif token[0] == "ABORT":
if len(token) == 2:
count = token[1]
else:
count = 0
assert count < 128
b = chr(count) + tokens.ABORT
self.injectData(b)
#self.handleAbort(count)
elif type(token) == int:
assert 0 <= token < 128
b = chr(token) + tokens.INT
self.injectData(b)
elif type(token) == str:
assert len(token) < 128
b = chr(len(token)) + tokens.STRING + token
self.injectData(b)
else:
raise NotImplementedError, "hey, this is just a quick hack"
def injectData(self, data):
if not self.transport.disconnectReason:
self.dataReceived(data)
def receivedObject(self, obj):
self.object = obj
def reportViolation(self, why):
self.violation = why
class TokenBanana(TokenBananaMixin, Banana):
def __init__(self):
Banana.__init__(self)
TokenBananaMixin.__init__(self)
def reportReceiveError(self, f):
Banana.reportReceiveError(self, f)
self.transport.disconnectReason = tokens.BananaFailure()
class TokenStorageBanana(TokenBananaMixin, StorageBanana):
def __init__(self):
StorageBanana.__init__(self)
TokenBananaMixin.__init__(self)
def reportReceiveError(self, f):
StorageBanana.reportReceiveError(self, f)
self.transport.disconnectReason = tokens.BananaFailure()
def encodeTokens(obj, debug=0):
b = TokenStorageBanana()
b.debug = debug
d = b.send(obj)
d.addCallback(lambda res: b.tokens)
return d
def decodeTokens(tokens, debug=0):
b = TokenStorageBanana()
b.debug = debug
obj = b.processTokens(tokens)
return obj
def encode(obj):
b = LoggingStorageBanana()
b.transport = StringIO()
b.send(obj)
return b.transport.getvalue()
def decode(string):
b = LoggingStorageBanana()
b.dataReceived(string)
return b.object
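# A quick round-trip sketch using the helpers above (debugging only): encode()
# pushes an object through a LoggingStorageBanana into a string of bytes, and
# decode() parses that string back into an object.
def _example_roundtrip():
    data = encode(["hello", 123])
    return decode(data)     # expected to yield the equivalent ['hello', 123]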

View File

@ -0,0 +1,78 @@
# -*- test-case-name: foolscap.test.test_eventual -*-
from twisted.internet import reactor, defer
from twisted.python import log
class _SimpleCallQueue:
# XXX TODO: merge epsilon.cooperator in, and make this more complete.
def __init__(self):
self._events = []
self._flushObservers = []
self._timer = None
def append(self, cb, args, kwargs):
self._events.append((cb, args, kwargs))
if not self._timer:
self._timer = reactor.callLater(0, self._turn)
def _turn(self):
self._timer = None
# flush all the messages that are currently in the queue. If anything
# gets added to the queue while we're doing this, those events will
# be put off until the next turn.
events, self._events = self._events, []
for cb, args, kwargs in events:
try:
cb(*args, **kwargs)
except:
log.err()
if self._events and not self._timer:
self._timer = reactor.callLater(0, self._turn)
if not self._events:
observers, self._flushObservers = self._flushObservers, []
for o in observers:
o.callback(None)
def flush(self):
"""Return a Deferred that will fire (with None) when the call queue
is completely empty."""
if not self._events:
return defer.succeed(None)
d = defer.Deferred()
self._flushObservers.append(d)
return d
_theSimpleQueue = _SimpleCallQueue()
def eventually(cb, *args, **kwargs):
"""This is the eventual-send operation, used as a plan-coordination
primitive. The callable will be invoked (with args and kwargs) in a later
reactor turn. Doing 'eventually(a); eventually(b)' guarantees that a will
be called before b.
Any exceptions that occur in the callable will be logged with log.err().
If you really want to ignore them, be sure to provide a callable that
catches those exceptions.
This function returns None. If you care to know when the callable was
run, be sure to provide a callable that notifies somebody.
"""
_theSimpleQueue.append(cb, args, kwargs)
def fireEventually(value=None):
"""This returns a Deferred which will fire in a later reactor turn, after
the current call stack has been completed, and after all other deferreds
    previously scheduled with eventually().
"""
d = defer.Deferred()
eventually(d.callback, value)
return d
def flushEventualQueue(_ignored=None):
"""This returns a Deferred which fires when the eventual-send queue is
finally empty. This is useful to wait upon as the last step of a Trial
test method.
"""
return _theSimpleQueue.flush()
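# A short usage sketch: eventually() defers each call to a later reactor turn
# while preserving ordering, and flushEventualQueue() lets a test wait for the
# queue to drain.
def _example_eventual_usage():
    def report(name):
        print "ran", name
    eventually(report, "first")
    eventually(report, "second")     # guaranteed to run after "first"
    return flushEventualQueue()      # Deferred that fires once both have run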

View File

@ -0,0 +1,103 @@
from zope.interface import interface
Interface = interface.Interface
# TODO: move these here
from foolscap.tokens import ISlicer, IRootSlicer, IUnslicer
_ignored = [ISlicer, IRootSlicer, IUnslicer] # hush pyflakes
class DeadReferenceError(Exception):
"""The RemoteReference is dead, Jim."""
class IReferenceable(Interface):
"""This object is remotely referenceable. This means it is represented to
remote systems as an opaque identifier, and that round-trips preserve
identity.
"""
def processUniqueID():
"""Return a unique identifier (scoped to the process containing the
Referenceable). Most objects can just use C{id(self)}, but objects
which should be indistinguishable to a remote system may want
multiple objects to map to the same PUID."""
class IRemotelyCallable(Interface):
"""This object is remotely callable. This means it defines some remote_*
methods and may have a schema which describes how those methods may be
invoked.
"""
def getInterfaceNames():
"""Return a list of RemoteInterface names to which this object knows
how to respond."""
def doRemoteCall(methodname, args, kwargs):
"""Invoke the given remote method. This method may raise an
exception, return normally, or return a Deferred."""
class ITub(Interface):
"""This marks a Tub."""
class IRemoteReference(Interface):
"""This marks a RemoteReference."""
def notifyOnDisconnect(callback, *args, **kwargs):
"""Register a callback to run when we lose this connection.
The callback will be invoked with whatever extra arguments you
provide to this function. For example::
def my_callback(name, number):
print name, number+4
cookie = rref.notifyOnDisconnect(my_callback, 'bob', number=3)
This function returns an opaque cookie. If you want to cancel the
notification, pass this same cookie back to dontNotifyOnDisconnect::
rref.dontNotifyOnDisconnect(cookie)
Note that if the Tub is shutdown (via stopService), all
notifyOnDisconnect handlers are cancelled.
"""
def callRemote(name, *args, **kwargs):
"""Invoke a method on the remote object with which I am associated.
I always return a Deferred. This will fire with the results of the
method when and if the remote end finishes. It will errback if any of
the following things occur::
the arguments do not match the schema I believe is in use by the
far end (causes a Violation exception)
the connection to the far end has been lost (DeadReferenceError)
the arguments are not accepted by the schema in use by the far end
(Violation)
the method executed by the far end raises an exception (arbitrary)
the return value of the remote method is not accepted by the schema
in use by the far end (Violation)
the connection is lost before the response is returned
(ConnectionLost)
the return value is not accepted by the schema I believe is in use
by the far end (Violation)
"""
def callRemoteOnly(name, *args, **kwargs):
"""Invoke a method on the remote object with which I am associated.
This form is for one-way messages that do not require results or even
acknowledgement of completion. I do not wait for the method to finish
executing. The remote end will be instructed to not send any
response. There is no way to know whether the method was successfully
delivered or not.
I always return None.
"""

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1,50 @@
# -*- test-case-name: foolscap.test_observer -*-
# many thanks to AllMyData for contributing the initial version of this code
from twisted.internet import defer
from foolscap import eventual
class OneShotObserverList:
"""A one-shot event distributor.
Subscribers can get a Deferred that will fire with the results of the
event once it finally occurs. The caller does not need to know whether
the event has happened yet or not: they get a Deferred in either case.
The Deferreds returned to subscribers are guaranteed to not fire in the
current reactor turn; instead, eventually() is used to fire them in a
later turn. Look at Mark Miller's 'Concurrency Among Strangers' paper on
erights.org for a description of why this property is useful.
I can only be fired once."""
def __init__(self):
self._fired = False
self._result = None
self._watchers = []
self.__repr__ = self._unfired_repr
def _unfired_repr(self):
return "<OneShotObserverList [%s]>" % (self._watchers, )
def _fired_repr(self):
return "<OneShotObserverList -> %s>" % (self._result, )
def whenFired(self):
if self._fired:
return eventual.fireEventually(self._result)
d = defer.Deferred()
self._watchers.append(d)
return d
def fire(self, result):
assert not self._fired
self._fired = True
self._result = result
for w in self._watchers:
eventual.eventually(w.callback, result)
del self._watchers
self.__repr__ = self._fired_repr
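# A small usage sketch: subscribers may ask for the result before or after the
# event fires; either way they get a Deferred that fires in a later turn.
def _example_observer_usage():
    done = OneShotObserverList()
    d = done.whenFired()
    d.addCallback(lambda result: result * 2)
    done.fire(21)              # d will eventually fire with 42
    return d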

701
src/foolscap/foolscap/pb.py Normal file
View File

@ -0,0 +1,701 @@
# -*- test-case-name: foolscap.test.test_pb -*-
import os.path, weakref
from zope.interface import implements
from twisted.internet import defer, protocol
from twisted.application import service, strports
from foolscap import ipb, base32, negotiate, broker, observer
from foolscap.referenceable import SturdyRef
from foolscap.tokens import PBError, BananaError
from foolscap.reconnector import Reconnector
crypto_available = False
try:
from foolscap import crypto
crypto_available = crypto.available
except ImportError:
pass
Listeners = []
class Listener(protocol.ServerFactory):
"""I am responsible for a single listening port, which may connect to
multiple Tubs. I have a strports-based Service, which I will attach as a
child of one of my Tubs. If that Tub disconnects, I will reparent the
Service to a remaining one.
Unauthenticated Tubs use a TubID of 'None'. There may be at most one such
Tub attached to any given Listener."""
# this also serves as the ServerFactory
def __init__(self, port, options={},
negotiationClass=negotiate.Negotiation):
"""
@type port: string
@param port: a L{twisted.application.strports} -style description.
"""
name, args, kw = strports.parse(port, None)
assert name in ("TCP", "UNIX") # TODO: IPv6
self.port = port
self.options = options
self.negotiationClass = negotiationClass
self.parentTub = None
self.tubs = {}
self.redirects = {}
self.s = strports.service(port, self)
Listeners.append(self)
def getPortnum(self):
"""When this Listener was created with a strport string of '0' or
'tcp:0' (meaning 'please allocate me something'), and if the Listener
is active (it is attached to a Tub which is in the 'running' state),
this method will return the port number that was allocated. This is
useful for the following pattern::
t = Tub()
l = t.listenOn('tcp:0')
t.setLocation('localhost:%d' % l.getPortnum())
"""
assert self.s.running
name, args, kw = strports.parse(self.port, None)
assert name in ("TCP",)
return self.s._port.getHost().port
def __repr__(self):
if self.tubs:
return "<Listener at 0x%x on %s with tubs %s>" % (
abs(id(self)),
self.port,
",".join([str(k) for k in self.tubs.keys()]))
return "<Listener at 0x%x on %s with no tubs>" % (abs(id(self)),
self.port)
def addTub(self, tub):
if tub.tubID in self.tubs:
if tub.tubID is None:
raise RuntimeError("This Listener (on %s) already has an "
"unauthenticated Tub, you cannot add a "
"second one" % self.port)
raise RuntimeError("This Listener (on %s) is already connected "
"to TubID '%s'" % (self.port, tub.tubID))
self.tubs[tub.tubID] = tub
if self.parentTub is None:
self.parentTub = tub
self.s.setServiceParent(self.parentTub)
def removeTub(self, tub):
# this might return a Deferred, since the removal might cause the
# Listener to shut down. It might also return None.
del self.tubs[tub.tubID]
if self.parentTub is tub:
# we need to switch to a new one
tubs = self.tubs.values()
if tubs:
self.parentTub = tubs[0]
# TODO: I want to do this without first doing
# disownServiceParent, so the port remains listening. Can we
# do this? It looks like setServiceParent does
# disownServiceParent first, so it may glitch.
self.s.setServiceParent(self.parentTub)
else:
# no more tubs, this Listener will go away now
d = self.s.disownServiceParent()
Listeners.remove(self)
return d
return None
def getService(self):
return self.s
def addRedirect(self, tubID, location):
assert tubID is not None # unauthenticated Tubs don't get redirects
self.redirects[tubID] = location
def removeRedirect(self, tubID):
del self.redirects[tubID]
def buildProtocol(self, addr):
"""Return a Broker attached to me (as the service provider).
"""
proto = self.negotiationClass()
proto.initServer(self)
proto.factory = self
return proto
def lookupTubID(self, tubID):
return self.tubs.get(tubID), self.redirects.get(tubID)
class Tub(service.MultiService):
"""I am a presence in the PB universe, also known as a Tub.
I am a Service (in the twisted.application.service.Service sense),
so you either need to call my startService() method before using me,
or setServiceParent() me to a running service.
This is the primary entry point for all PB-using applications, both
clients and servers.
I am known to the outside world by a base URL, which may include
authentication information (a yURL). This is my 'TubID'.
I contain Referenceables, and manage RemoteReferences to Referenceables
that live in other Tubs.
@param certData: if provided, use it as a certificate rather than
generating a new one. This is a PEM-encoded
private/public keypair, as returned by Tub.getCertData()
@param certFile: if provided, the Tub will store its certificate in
this file. If the file does not exist when the Tub is
created, the Tub will generate a new certificate and
store it here. If the file does exist, the certificate
will be loaded from this file.
The simplest way to use the Tub is to choose a long-term
location for the certificate, use certFile= to tell the
Tub about it, and then let the Tub manage its own
certificate.
You may provide certData or certFile (or neither), but
not both.
@param options: a dictionary of options that can influence connection
negotiation. Currently defined keys are:
- debug_slow: if True, wait half a second between
each negotiation response
@ivar brokers: maps TubIDs to L{Broker} instances
@ivar listeners: maps strport to TCPServer service
@ivar referenceToName: maps Referenceable to a name
@ivar nameToReference: maps name to Referenceable
"""
implements(ipb.ITub)
unsafeTracebacks = True # TODO: better way to enable this
logLocalFailures = False
logRemoteFailures = False
debugBanana = False
NAMEBITS = 160 # length of swissnumber for each reference
TUBIDBITS = 16 # length of non-crypto tubID
encrypted = True
negotiationClass = negotiate.Negotiation
brokerClass = broker.Broker
keepaliveTimeout = 4*60 # ping when connection has been idle this long
disconnectTimeout = None # disconnect after this much idle time
def __init__(self, certData=None, certFile=None, options={}):
service.MultiService.__init__(self)
self.setup(options)
if certFile:
self.setupEncryptionFile(certFile)
else:
self.setupEncryption(certData)
def setupEncryptionFile(self, certFile):
if os.path.exists(certFile):
certData = open(certFile, "rb").read()
self.setupEncryption(certData)
else:
self.setupEncryption(None)
f = open(certFile, "wb")
f.write(self.getCertData())
f.close()
def setupEncryption(self, certData):
if not crypto_available:
raise RuntimeError("crypto for PB is not available, "
"try importing foolscap.crypto and see "
"what happens")
if certData:
cert = crypto.PrivateCertificate.loadPEM(certData)
else:
cert = self.createCertificate()
self.myCertificate = cert
self.tubID = crypto.digest32(cert.digest("sha1"))
def setup(self, options):
self.options = options
self.listeners = []
self.locationHints = []
# local Referenceables
self.nameToReference = weakref.WeakValueDictionary()
self.referenceToName = weakref.WeakKeyDictionary()
self.strongReferences = []
# remote stuff. Most of these use a TubRef (or NoAuthTubRef) as a
# dictionary key
self.tubConnectors = {} # maps TubRef to a TubConnector
self.waitingForBrokers = {} # maps TubRef to list of Deferreds
self.brokers = {} # maps TubRef to a Broker that connects to them
self.unauthenticatedBrokers = [] # inbound Brokers without TubRefs
self.reconnectors = []
self._allBrokersAreDisconnected = observer.OneShotObserverList()
self._activeConnectors = []
self._allConnectorsAreFinished = observer.OneShotObserverList()
def setOption(self, name, value):
if name == "logLocalFailures":
# log (with log.err) any exceptions that occur during the
# execution of a local Referenceable's method, which is invoked
# on behalf of a remote caller. These exceptions are reported to
# the remote caller through their callRemote's Deferred as usual:
# this option enables logging on the callee's side (i.e. our
# side) as well.
#
# TODO: This does not yet include Violations which were raised
# because the inbound callRemote had arguments that didn't meet
# our specifications. But it should.
self.logLocalFailures = value
elif name == "logRemoteFailures":
# log (with log.err) any exceptions that occur during the
# execution of a remote Referenceable's method, invoked on behalf
# of a local RemoteReference.callRemote(). These exceptions are
# reported to our local caller through the usual Deferred.errback
# mechanism: this enables logging on the caller's side (i.e. our
# side) as well.
self.logRemoteFailures = value
elif name == "keepaliveTimeout":
self.keepaliveTimeout = value
elif name == "disconnectTimeout":
self.disconnectTimeout = value
else:
raise KeyError("unknown option name '%s'" % name)
def createCertificate(self):
# this is copied from test_sslverify.py
dn = crypto.DistinguishedName(commonName="newpb_thingy")
keypair = crypto.KeyPair.generate()
req = keypair.certificateRequest(dn)
certData = keypair.signCertificateRequest(dn, req,
lambda dn: True,
132)
cert = keypair.newCertificate(certData)
#opts = cert.options()
# 'opts' can be given to reactor.listenSSL, or to transport.startTLS
return cert
def getCertData(self):
# the string returned by this method can be used as the certData=
# argument to create a new Tub with the same identity. TODO: actually
# test this, I don't know if dump/keypair.newCertificate is the right
# pair of methods.
return self.myCertificate.dumpPEM()
def setLocation(self, *hints):
"""Tell this service what its location is: a host:port description of
how to reach it from the outside world. You need to use this because
the Tub can't do it without help. If you do a
C{s.listenOn('tcp:1234')}, and the host is known as
C{foo.example.com}, then it would be appropriate to do::
s.setLocation('foo.example.com:1234')
You must set the location before you can register any references.
Encrypted Tubs can have multiple location hints, just provide
multiple arguments. Unauthenticated Tubs can only have one location."""
if not self.encrypted and len(hints) > 1:
raise PBError("Unauthenticated tubs may only have one "
"location hint")
self.locationHints = hints
def listenOn(self, what, options={}):
"""Start listening for connections.
@type what: string or Listener instance
@param what: a L{twisted.application.strports} -style description,
or a L{Listener} instance returned by a previous call to
listenOn.
@param options: a dictionary of options that can influence connection
negotiation before the target Tub has been determined
@return: The Listener object that was created. This can be used to
stop listening later on, to have another Tub listen on the same port,
and to figure out which port was allocated when you used a strports
specification of 'tcp:0'. """
if type(what) is str:
l = Listener(what, options, self.negotiationClass)
else:
assert not options
l = what
assert l not in self.listeners
l.addTub(self)
self.listeners.append(l)
return l
def stopListeningOn(self, l):
# this returns a Deferred when the port is shut down
self.listeners.remove(l)
d = defer.maybeDeferred(l.removeTub, self)
return d
def getListeners(self):
"""Return the set of Listener objects that allow the outside world to
connect to this Tub."""
return self.listeners[:]
def clone(self):
"""Return a new Tub (with a different ID), listening on the same
ports as this one."""
if self.encrypted:
t = Tub()
else:
t = UnauthenticatedTub()
for l in self.listeners:
t.listenOn(l)
return t
def connectorStarted(self, c):
assert self.running
self._activeConnectors.append(c)
def connectorFinished(self, c):
self._activeConnectors.remove(c)
if not self.running and not self._activeConnectors:
self._allConnectorsAreFinished.fire(self)
def _tubsAreNotRestartable(self):
raise RuntimeError("Sorry, but Tubs cannot be restarted.")
def _tubHasBeenShutDown(self):
raise RuntimeError("Sorry, but this Tub has been shut down.")
def stopService(self):
# note that once you stopService a Tub, I cannot be restarted. (at
# least this code is not designed to make that possible.. it might be
# doable in the future).
self.startService = self._tubsAreNotRestartable
self.getReference = self._tubHasBeenShutDown
self.connectTo = self._tubHasBeenShutDown
dl = []
for rc in self.reconnectors:
rc.stopConnecting()
del self.reconnectors
for l in self.listeners:
# TODO: rethink this, what I want is for stopService to cause all
# Listeners to shut down, but I'm not sure this is the right way
# to do it.
d = l.removeTub(self)
if isinstance(d, defer.Deferred):
dl.append(d)
dl.append(service.MultiService.stopService(self))
if self._activeConnectors:
dl.append(self._allConnectorsAreFinished.whenFired())
for c in self._activeConnectors:
c.shutdown()
if self.brokers or self.unauthenticatedBrokers:
dl.append(self._allBrokersAreDisconnected.whenFired())
for b in self.brokers.values():
b.shutdown()
for b in self.unauthenticatedBrokers:
b.shutdown()
return defer.DeferredList(dl)
def generateSwissnumber(self, bits):
bytes = os.urandom(bits/8)
return base32.encode(bytes)
def buildURL(self, name):
if self.encrypted:
# TODO: IPv6 dotted-quad addresses have colons, but need to have
# host:port
hints = ",".join(self.locationHints)
return "pb://" + self.tubID + "@" + hints + "/" + name
return "pbu://" + self.locationHints[0] + "/" + name
def registerReference(self, ref, name=None):
"""Make a Referenceable available to the outside world. A URL is
returned which can be used to access this object. This registration
will remain in effect (and the Tub will retain a reference to the
object to keep it meaningful) until explicitly unregistered, or the
Tub is shut down.
@type name: string (optional)
@param name: if provided, the object will be registered with this
name. If not, a random (unguessable) string will be
used.
@rtype: string
@return: the URL which points to this object. This URL can be passed
to Tub.getReference() in any Tub on any host which can reach this
one.
"""
if not self.locationHints:
raise RuntimeError("you must setLocation() before "
"you can registerReference()")
name = self._assignName(ref, name)
assert name
if ref not in self.strongReferences:
self.strongReferences.append(ref)
return self.buildURL(name)
# this is called by either registerReference or by
# getOrCreateURLForReference
def _assignName(self, ref, preferred_name=None):
"""Make a Referenceable available to the outside world, but do not
retain a strong reference to it. If we must create a new name, use
preferred_name. If that is None, use a random unguessable name.
"""
if not self.locationHints:
# without a location, there is no point in giving it a name
return None
if self.referenceToName.has_key(ref):
return self.referenceToName[ref]
name = preferred_name
if not name:
name = self.generateSwissnumber(self.NAMEBITS)
self.referenceToName[ref] = name
self.nameToReference[name] = ref
return name
def getReferenceForName(self, name):
return self.nameToReference[name]
def getReferenceForURL(self, url):
# TODO: who should this be used by?
sturdy = SturdyRef(url)
assert sturdy.tubID == self.tubID
return self.getReferenceForName(sturdy.name)
def getOrCreateURLForReference(self, ref):
"""Return the global URL for the reference, if there is one, or None
if there is not."""
name = self._assignName(ref)
if name:
return self.buildURL(name)
return None
def revokeReference(self, ref):
# TODO
pass
def unregisterURL(self, url):
sturdy = SturdyRef(url)
name = sturdy.name
ref = self.nameToReference[name]
del self.nameToReference[name]
del self.referenceToName[ref]
self.revokeReference(ref)
def unregisterReference(self, ref):
name = self.referenceToName[ref]
url = self.buildURL(name)
sturdy = SturdyRef(url)
name = sturdy.name
del self.nameToReference[name]
del self.referenceToName[ref]
if ref in self.strongReferences:
self.strongReferences.remove(ref)
self.revokeReference(ref)
def getReference(self, sturdyOrURL):
"""Acquire a RemoteReference for the given SturdyRef/URL.
@return: a Deferred that fires with the RemoteReference
"""
if isinstance(sturdyOrURL, SturdyRef):
sturdy = sturdyOrURL
else:
sturdy = SturdyRef(sturdyOrURL)
# pb->pb: ok, requires crypto
# pbu->pb: ok, requires crypto
# pbu->pbu: ok
# pb->pbu: ok, requires crypto
if sturdy.encrypted and not crypto_available:
e = BananaError("crypto for PB is not available, "
"we cannot handle encrypted PB-URLs like %s"
% sturdy.getURL())
return defer.fail(e)
name = sturdy.name
d = self.getBrokerForTubRef(sturdy.getTubRef())
d.addCallback(lambda b: b.getYourReferenceByName(name))
return d
def connectTo(self, sturdyOrURL, cb, *args, **kwargs):
"""Establish (and maintain) a connection to a given PBURL.
I establish a connection to the PBURL and run a callback to inform
the caller about the newly-available RemoteReference. If the
connection is lost, I schedule a reconnection attempt for the near
future. If that one fails, I keep trying at longer and longer
intervals (exponential backoff).
I accept a callback which will be fired each time a connection
attempt succeeds. This callback is run with the new RemoteReference
and any additional args/kwargs provided to me. The callback should
then use rref.notifyOnDisconnect() to get a message when the
connection goes away. At some point after it goes away, the
Reconnector will reconnect.
I return a Reconnector object. When you no longer want to maintain
this connection, call the stopConnecting() method on the Reconnector.
I promise to not invoke your callback after you've called
stopConnecting(), even if there was already a connection attempt in
progress. If you had an active connection before calling
stopConnecting(), you will still have access to it, until it breaks
on its own. (I will not attempt to break existing connections, I will
merely stop trying to create new ones). All my Reconnector objects
will be shut down when the Tub is stopped.
Usage::
def _got_ref(rref, arg1, arg2):
rref.callRemote('hello again')
# etc
rc = tub.connectTo(url, _got_ref, 'arg1', 'arg2')
...
rc.stopConnecting() # later
"""
rc = Reconnector(self, sturdyOrURL, cb, *args, **kwargs)
self.reconnectors.append(rc)
return rc
# _removeReconnector is called by the Reconnector
def _removeReconnector(self, rc):
self.reconnectors.remove(rc)
def getBrokerForTubRef(self, tubref):
if tubref in self.brokers:
return defer.succeed(self.brokers[tubref])
if tubref.getTubID() == self.tubID:
b = self._createLoopbackBroker(tubref)
# _createLoopbackBroker will call brokerAttached, which will add
# it to self.brokers
return defer.succeed(b)
d = defer.Deferred()
if tubref not in self.waitingForBrokers:
self.waitingForBrokers[tubref] = []
self.waitingForBrokers[tubref].append(d)
if tubref not in self.tubConnectors:
# the TubConnector will call our brokerAttached when it finishes
# negotiation, which will fire waitingForBrokers[tubref].
c = negotiate.TubConnector(self, tubref)
self.tubConnectors[tubref] = c
c.connect()
return d
def _createLoopbackBroker(self, tubref):
t1,t2 = broker.LoopbackTransport(), broker.LoopbackTransport()
t1.setPeer(t2); t2.setPeer(t1)
n = negotiate.Negotiation()
params = n.loopbackDecision()
b1,b2 = self.brokerClass(params), self.brokerClass(params)
# we treat b1 as "our" broker, and b2 as "theirs", and we pretend
# that b2 has just connected to us. We keep track of b1, and b2 keeps
# track of us.
b1.setTub(self)
b2.setTub(self)
t1.protocol = b1; t2.protocol = b2
b1.makeConnection(t1); b2.makeConnection(t2)
self.brokerAttached(tubref, b1, False)
return b1
def connectionFailed(self, tubref, why):
# we previously initiated an outbound TubConnector to this tubref, but
# it was unable to establish a connection. 'why' is the most useful
# Failure that occurred (i.e. it is a NegotiationError if we made it
# that far, otherwise it's a ConnectionFailed).
if tubref in self.tubConnectors:
del self.tubConnectors[tubref]
if tubref in self.brokers:
# oh, but fortunately an inbound connection must have succeeded.
# Nevermind.
return
# inform hopeful Broker-waiters that they aren't getting one
if tubref in self.waitingForBrokers:
waiting = self.waitingForBrokers[tubref]
del self.waitingForBrokers[tubref]
for d in waiting:
d.errback(why)
def brokerAttached(self, tubref, broker, isClient):
if not tubref:
# this is an inbound connection from an unauthenticated Tub
assert not isClient
# we just track it so we can disconnect it later
self.unauthenticatedBrokers.append(broker)
return
if tubref in self.tubConnectors:
# we initiated an outbound connection to this tubref
if not isClient:
# however, the connection we got was from an inbound
# connection. The completed (inbound) connection wins, so
# abandon the outbound TubConnector
self.tubConnectors[tubref].shutdown()
# we don't need the TubConnector any more
del self.tubConnectors[tubref]
if tubref in self.brokers:
# oops, this shouldn't happen but it isn't fatal. Raise
# BananaError so the Negotiation will drop the connection
raise BananaError("unexpected duplicate connection")
self.brokers[tubref] = broker
# now inform everyone who's been waiting on it
if tubref in self.waitingForBrokers:
waiting = self.waitingForBrokers[tubref]
del self.waitingForBrokers[tubref]
for d in waiting:
d.callback(broker)
def brokerDetached(self, broker, why):
# the Broker will have already severed all active references
for tubref in self.brokers.keys():
if self.brokers[tubref] is broker:
del self.brokers[tubref]
if broker in self.unauthenticatedBrokers:
self.unauthenticatedBrokers.remove(broker)
# if the Tub has already shut down, we may need to notify observers
# who are waiting for all of our connections to finish shutting down
if (not self.running
and not self.brokers
and not self.unauthenticatedBrokers):
self._allBrokersAreDisconnected.fire(self)
class UnauthenticatedTub(Tub):
"""
@type tubID: string
@ivar tubID: a global identifier for this Tub, possibly including
authentication information, hash of SSL certificate
"""
encrypted = False
def __init__(self, tubID=None, options={}):
service.MultiService.__init__(self)
self.setup(options)
self.myCertificate = None
assert not tubID # not yet
self.tubID = tubID
def getRemoteURL_TCP(host, port, pathname, *interfaces):
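# NOTE: Tub.getReference() accepts only a single sturdyref/URL argument, so
# passing 'interfaces' as a second positional argument below would raise a
# TypeError; this helper appears to be vestigial.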
url = "pb://%s:%d/%s" % (host, port, pathname)
if crypto_available:
s = Tub()
else:
s = UnauthenticatedTub()
d = s.getReference(url, interfaces)
return d
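# A minimal two-Tub sketch, assuming the unauthenticated (non-crypto) flavour
# so it runs without pyOpenSSL. The Echoer class, its 'echo' method, and the
# host/port choices are made up for illustration; the calls themselves
# (listenOn, setLocation, registerReference, getReference, callRemote) are the
# ones defined above.

if __name__ == "__main__":
    from twisted.internet import reactor
    from foolscap.referenceable import Referenceable

    class Echoer(Referenceable):
        def remote_echo(self, text):
            return "echo: " + text

    server = UnauthenticatedTub()
    server.startService()
    l = server.listenOn("tcp:0")                # let the kernel pick a port
    server.setLocation("localhost:%d" % l.getPortnum())
    url = server.registerReference(Echoer())    # pbu://localhost:PORT/SWISSNUM

    client = UnauthenticatedTub()
    client.startService()

    def _got_answer(answer):
        print "remote said:", answer
        reactor.stop()
    def _failed(f):
        print "connection failed:", f
        reactor.stop()

    d = client.getReference(url)
    d.addCallback(lambda rref: rref.callRemote("echo", "hello"))
    d.addCallbacks(_got_answer, _failed)
    reactor.run()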

View File

@ -0,0 +1,283 @@
# -*- test-case-name: foolscap.test.test_promise -*-
from twisted.python import util
from twisted.python.failure import Failure
from twisted.internet import defer
from foolscap.eventual import eventually
id = util.unsignedID
EVENTUAL, CHAINED, NEAR, BROKEN = range(4)
class UsageError(Exception):
"""Raised when you do something inappropriate to a Promise."""
def _ignore(results):
pass
class Promise:
"""I am a promise of a future result. I am a lot like a Deferred, except
that my promised result is usually an instance. I make it possible to
schedule method invocations on this future instance, returning Promises
for the results.
Promises are always in one of three states: Eventual, Fulfilled, and
Broken. (see http://www.erights.org/elib/concurrency/refmech.html for a
pretty picture). They start as Eventual, meaning we do not yet know
whether they will resolve or not. In this state, method invocations are
queued. Eventually the Promise will be 'resolved' into either the
Fulfilled or the Broken state. Fulfilled means that the promise contains
a live object to which methods can be dispatched synchronously. Broken
promises are incapable of invoking methods: they all result in Failure.
Method invocation is always asynchronous: it always returns a Promise.
The only thing you can do with a promise 'p1' is to perform an
eventual-send on it, like so::
sendOnly(p1).foo(args) # ignores the result
p2 = send(p1).bar(args) # creates a Promise for the result
p2 = p1.bar(args) # same as send(p1).bar(args)
Or wait for it to resolve, using one of the following::
d = when(p); d.addCallback(cb) # provides a Deferred
p._then(cb, *args, **kwargs) # like when(p).addCallback(cb,*a,**kw)
p._except(cb, *args, **kwargs) # like when(p).addErrback(cb,*a,**kw)
The _then and _except forms return the same Promise. You can set up
chains of calls that will be invoked in the future, using a dataflow
style, like this::
p = getPromiseForServer()
d = p.getDatabase('db1')
r = d.getRecord(name)
def _print(record):
print 'the record says', record
def _oops(failure):
print 'something failed:', failure
r._then(_print)
r._except(_oops)
Or all collapsed in one sequence like::
getPromiseForServer().getDatabase('db1').getRecord(name)._then(_print)
The eventual-send will eventually invoke the method foo(args) on the
promise's resolution. This will return a new Promise for the results of
that method call.
"""
# all our internal methods are private, to avoid a confusing lack of an
# error message if someone tries to make a synchronous method call on us
# with a name that happens to match an internal one.
_state = EVENTUAL
_useDataflowStyle = True # enables p.foo(args)
def __init__(self):
self._watchers = []
self._pendingMethods = [] # list of (methname, args, kwargs, p)
# _then and _except are our only public methods. All other access is
# through normal (not underscore-prefixed) attribute names, which
# indicate names of methods on the target object that should be called
# later.
def _then(self, cb, *args, **kwargs):
d = self._wait_for_resolution()
d.addCallback(cb, *args, **kwargs)
d.addErrback(lambda ignore: None)
return self
def _except(self, cb, *args, **kwargs):
d = self._wait_for_resolution()
d.addErrback(cb, *args, **kwargs)
return self
# everything beyond here is private to this module
def __repr__(self):
return "<Promise %#x>" % id(self)
def __getattr__(self, name):
if not self._useDataflowStyle:
raise AttributeError("no such attribute %s" % name)
def newmethod(*args, **kwargs):
return self._send(name, args, kwargs)
return newmethod
# _send and _sendOnly are used by send() and sendOnly(). _send is also
# used by regular attribute access.
def _send(self, methname, args, kwargs):
"""Return a Promise (for the result of the call) when the call is
eventually made. The call is guaranteed to not fire in this turn."""
# this is called by send()
p, resolver = makePromise()
if self._state in (EVENTUAL, CHAINED):
self._pendingMethods.append((methname, args, kwargs, resolver))
else:
eventually(self._deliver, methname, args, kwargs, resolver)
return p
def _sendOnly(self, methname, args, kwargs):
"""Send a message like _send, but discard the result."""
# this is called by sendOnly()
if self._state in (EVENTUAL, CHAINED):
self._pendingMethods.append((methname, args, kwargs, _ignore))
else:
eventually(self._deliver, methname, args, kwargs, _ignore)
# _wait_for_resolution is used by when(), as well as _then and _except
def _wait_for_resolution(self):
"""Return a Deferred that will fire (with whatever was passed to
_resolve) when this Promise moves to a RESOLVED state (either NEAR or
BROKEN)."""
# this is called by when()
if self._state in (EVENTUAL, CHAINED):
d = defer.Deferred()
self._watchers.append(d)
return d
if self._state == NEAR:
return defer.succeed(self._target)
# self._state == BROKEN
return defer.fail(self._target)
# _resolve is our resolver method, and is handed out by makePromise()
def _resolve(self, target_or_failure):
"""Resolve this Promise to refer to the given target. If called with
a Failure, the Promise is now BROKEN. _resolve may only be called
once."""
# E splits this method into two pieces resolve(result) and
# smash(problem). It is easier for us to keep them in one piece,
# because d.addBoth(p._resolve) is convenient.
if self._state != EVENTUAL:
raise UsageError("Promises may not be resolved multiple times")
self._resolve2(target_or_failure)
# the remaining methods are internal, for use by this class only
def _resolve2(self, target_or_failure):
# we may be called with a Promise, an immediate value, or a Failure
if isinstance(target_or_failure, Promise):
self._state = CHAINED
when(target_or_failure).addBoth(self._resolve2)
return
if isinstance(target_or_failure, Failure):
self._break(target_or_failure)
return
self._target = target_or_failure
self._deliver_queued_messages()
self._state = NEAR
def _break(self, failure):
# TODO: think about what you do to break a resolved promise. Once the
# Promise is in the NEAR state, it can't be broken, but eventually
# we're going to have a FAR state, which *can* be broken.
"""Put this Promise in the BROKEN state."""
if not isinstance(failure, Failure):
raise UsageError("Promises must be broken with a Failure")
if self._state == BROKEN:
raise UsageError("Broken Promises may not be re-broken")
self._target = failure
if self._state in (EVENTUAL, CHAINED):
self._deliver_queued_messages()
self._state = BROKEN
def _invoke_method(self, name, args, kwargs):
if isinstance(self._target, Failure):
return self._target
method = getattr(self._target, name)
res = method(*args, **kwargs)
return res
def _deliverOneMethod(self, methname, args, kwargs):
method = getattr(self._target, methname)
return method(*args, **kwargs)
def _deliver(self, methname, args, kwargs, resolver):
# the resolver will be fired with both success and Failure
t = self._target
if isinstance(t, Promise):
resolver(t._send(methname, args, kwargs))
elif isinstance(t, Failure):
resolver(t)
else:
d = defer.maybeDeferred(self._deliverOneMethod,
methname, args, kwargs)
d.addBoth(resolver)
def _deliver_queued_messages(self):
for (methname, args, kwargs, resolver) in self._pendingMethods:
eventually(self._deliver, methname, args, kwargs, resolver)
del self._pendingMethods
# Q: what are the partial-ordering semantics between queued messages
# and when() clauses that are waiting on this Promise to be resolved?
for d in self._watchers:
eventually(d.callback, self._target)
del self._watchers
def resolvedPromise(resolution):
p = Promise()
p._resolve(resolution)
return p
def makePromise():
p = Promise()
return p, p._resolve
class _MethodGetterWrapper:
def __init__(self, callback):
self.cb = [callback]
def __getattr__(self, name):
if name.startswith("_"):
raise AttributeError("method %s is probably private" % name)
cb = self.cb[0] # avoid bound-methodizing
def newmethod(*args, **kwargs):
return cb(name, args, kwargs)
return newmethod
def send(o):
"""Make an eventual-send call on object C{o}. Use this as follows:
p = send(o).foo(args)
C{o} can either be a Promise or an immediate value. The arguments can
either be promises or immediate values.
send() always returns a Promise, and the o.foo(args) method invocation
always takes place in a later reactor turn.
Many thanks to Mark Miller for suggesting this syntax to me.
"""
if isinstance(o, Promise):
return _MethodGetterWrapper(o._send)
p = resolvedPromise(o)
return _MethodGetterWrapper(p._send)
def sendOnly(o):
"""Make an eventual-send call on object C{o}, and ignore the results.
"""
if isinstance(o, Promise):
return _MethodGetterWrapper(o._sendOnly)
# this is a little bit heavyweight for a simple eventually(), but it
# makes the code simpler
p = resolvedPromise(o)
return _MethodGetterWrapper(p._sendOnly)
def when(p):
"""Turn a Promise into a Deferred that will fire with the enclosed object
when it is ready. Use this when you actually need to schedule something
to happen in a synchronous fashion. Most of the time, you can just invoke
methods on the Promise as if it were immediately available."""
assert isinstance(p, Promise)
return p._wait_for_resolution()
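# A small sketch, assuming a running Twisted reactor (externally you would use
# 'from foolscap.promise import makePromise, send, when'). The Counter class
# is made up for illustration: method calls queued on an unresolved Promise
# are delivered once the resolver is handed the real object.

if __name__ == "__main__":
    from twisted.internet import reactor

    class Counter:
        def __init__(self):
            self.n = 0
        def add(self, amount):
            self.n += amount
            return self.n

    p, resolve = makePromise()
    p2 = send(p).add(5)       # eventual-send: returns a Promise for the result

    def _print_total(total):
        print "total is", total
    p2._then(_print_total)

    resolve(Counter())        # now the queued add(5) is delivered

    d = when(p2)              # a Deferred view of the same resolution
    d.addCallback(lambda total: reactor.stop())
    reactor.run()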

View File

@ -0,0 +1,118 @@
# -*- test-case-name: foolscap.test.test_reconnector -*-
import random
from twisted.internet import reactor
from twisted.python import log
from foolscap.tokens import NegotiationError, RemoteNegotiationError
class Reconnector:
"""Establish (and maintain) a connection to a given PBURL.
I establish a connection to the PBURL and run a callback to inform the
caller about the newly-available RemoteReference. If the connection is
lost, I schedule a reconnection attempt for the near future. If that one
fails, I keep trying at longer and longer intervals (exponential
backoff).
My constructor accepts a callback which will be fired each time a
connection attempt succeeds. This callback is run with the new
RemoteReference and any additional args/kwargs provided to me. The
callback should then use rref.notifyOnDisconnect() to get a message when
the connection goes away. At some point after it goes away, the
Reconnector will reconnect.
When you no longer want to maintain this connection, call my
stopConnecting() method. I promise to not invoke your callback after
you've called stopConnecting(), even if there was already a connection
attempt in progress. If you had an active connection before calling
stopConnecting(), you will still have access to it, until it breaks on
its own. (I will not attempt to break existing connections, I will merely
stop trying to create new ones).
"""
# adapted from twisted.internet.protocol.ReconnectingClientFactory
maxDelay = 3600
initialDelay = 1.0
# Note: These highly sensitive factors have been precisely measured by
# the National Institute of Science and Technology. Take extreme care
# in altering them, or you may damage your Internet!
factor = 2.7182818284590451 # (math.e)
# Phi = 1.6180339887498948 # (Phi is acceptable for use as a
# factor if e is too large for your application.)
jitter = 0.11962656492 # molar Planck constant times c, Joule meter/mole
verbose = False
def __init__(self, tub, url, cb, *args, **kwargs):
self._tub = tub
self._url = url
self._active = False
self._observer = (cb, args, kwargs)
self._delay = self.initialDelay
self._retries = 0
self._timer = None
self.startConnecting()
def startConnecting(self):
if self.verbose:
log.msg("Reconnector starting for %s" % self._url)
self._active = True
self._connect()
def stopConnecting(self):
if self.verbose:
log.msg("Reconnector stopping for %s" % self._url)
self._active = False
if self._timer:
self._timer.cancel()
self._timer = False
self._tub._removeReconnector(self)
def _connect(self):
d = self._tub.getReference(self._url)
d.addCallbacks(self._connected, self._failed)
def _connected(self, rref):
if not self._active:
return
rref.notifyOnDisconnect(self._disconnected)
cb, args, kwargs = self._observer
cb(rref, *args, **kwargs)
def _failed(self, f):
# I'd like to trap NegotiationError and basic TCP connection
# failures here, but not hide coding errors.
if self.verbose:
log.msg("Reconnector._failed: %s" % f)
# log certain unusual errors, even without self.verbose, to help
# people figure out why their reconnectors aren't connecting, since
# the usual getReference errback chain isn't active. This doesn't
# include ConnectError (which is a parent class of
# ConnectionRefusedError)
if f.check(RemoteNegotiationError, NegotiationError):
if not self.verbose:
log.msg("Reconnector._failed: %s" % f)
if not self._active:
return
self._delay = min(self._delay * self.factor, self.maxDelay)
if self.jitter:
self._delay = random.normalvariate(self._delay,
self._delay * self.jitter)
self._retry()
def _disconnected(self):
self._delay = self.initialDelay
self._retries = 0
self._retry()
def _retry(self):
if not self._active:
return
if self.verbose:
log.msg("Reconnector scheduling retry in %ds for %s" %
(self._delay, self._url))
self._timer = reactor.callLater(self._delay, self._timer_expired)
def _timer_expired(self):
self._timer = None
self._connect()
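# A back-of-the-envelope sketch (no network involved) that just replays the
# arithmetic _failed() uses: exponential growth of the delay, capped at
# maxDelay, then gaussian jitter. Running it prints roughly how the retry
# schedule spreads out under the default class constants.

if __name__ == "__main__":
    delay = Reconnector.initialDelay
    for attempt in range(1, 11):
        delay = min(delay * Reconnector.factor, Reconnector.maxDelay)
        delay = random.normalvariate(delay, delay * Reconnector.jitter)
        print "attempt %2d: retry in about %.1f seconds" % (attempt, delay)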

View File

@ -0,0 +1,799 @@
# -*- test-case-name: foolscap.test.test_sturdyref -*-
# this module is responsible for sending and receiving OnlyReferenceable and
# Referenceable (callable) objects. All details of actually invoking methods
# live in call.py
import weakref
from zope.interface import interface
from zope.interface import implements
from twisted.python.components import registerAdapter
Interface = interface.Interface
from twisted.internet import defer
from twisted.python import failure
from foolscap import ipb, slicer, tokens, call
BananaError = tokens.BananaError
Violation = tokens.Violation
from foolscap.constraint import IConstraint, StringConstraint
from foolscap.remoteinterface import getRemoteInterface, \
getRemoteInterfaceByName, RemoteInterfaceConstraint
from foolscap.schema import constraintMap
from foolscap.copyable import Copyable, RemoteCopy
from foolscap.eventual import eventually
class OnlyReferenceable(object):
implements(ipb.IReferenceable)
def processUniqueID(self):
return id(self)
class Referenceable(OnlyReferenceable):
implements(ipb.IReferenceable, ipb.IRemotelyCallable)
_interface = None
_interfaceName = None
# TODO: this code wants to be in an adapter, not a base class. Also, it
# would be nice to cache this across the class: if every instance has the
# same interfaces, they will have the same values of _interface and
# _interfaceName, and it feels silly to store this data separately for
# each instance. Perhaps we could compare the instance's interface list
# with that of the class and only recompute this stuff if they differ.
def getInterface(self):
if not self._interface:
self._interface = getRemoteInterface(self)
if self._interface:
self._interfaceName = self._interface.__remote_name__
else:
self._interfaceName = None
return self._interface
def getInterfaceName(self):
self.getInterface()
return self._interfaceName
def doRemoteCall(self, methodname, args, kwargs):
meth = getattr(self, "remote_%s" % methodname)
res = meth(*args, **kwargs)
return res
constraintMap[Referenceable] = RemoteInterfaceConstraint(None)
class ReferenceableTracker:
"""I hold the data which tracks a local Referenceable that is in used by
a remote Broker.
@ivar obj: the actual object
@ivar refcount: the number of times this reference has been sent to the
remote end, minus the number of DECREF messages which it
has sent back. When it goes to zero, the remote end has
forgotten the RemoteReference, and is prepared to forget
the RemoteReferenceData as soon as the DECREF message is
acknowledged.
@ivar clid: the connection-local ID used to represent this object on the
wire.
"""
def __init__(self, tub, obj, puid, clid):
self.tub = tub
self.obj = obj
self.clid = clid
self.puid = puid
self.refcount = 0
def send(self):
"""Increment the refcount.
@return: True if this is the first transmission of the reference.
"""
self.refcount += 1
if self.refcount == 1:
return True
def getURL(self):
if self.tub:
return self.tub.getOrCreateURLForReference(self.obj)
return None
def decref(self, count):
"""Call this in response to a DECREF message from the other end.
@return: True if the refcount went to zero, meaning this clid should
be retired.
"""
assert self.refcount >= count, "decref(%d) but refcount was %d" % (count, self.refcount)
self.refcount -= count
if self.refcount == 0:
return True
return False
# TODO: rather than subclassing Referenceable, ReferenceableSlicer should be
# registered to use for anything which provides any RemoteInterface
class ReferenceableSlicer(slicer.BaseSlicer):
"""I handle pb.Referenceable objects (things with remotely invokable
methods, which are copied by reference).
"""
opentype = ('my-reference',)
def sliceBody(self, streamable, broker):
puid = ipb.IReferenceable(self.obj).processUniqueID()
tracker = broker.getTrackerForMyReference(puid, self.obj)
yield tracker.clid
firstTime = tracker.send()
if firstTime:
# this is the first time the Referenceable has crossed this wire.
# In addition to the clid, send the interface name (if any), and
# any URL this reference might be known by
iname = ipb.IRemotelyCallable(self.obj).getInterfaceName()
if iname:
yield iname
else:
yield ""
url = tracker.getURL()
if url:
yield url
registerAdapter(ReferenceableSlicer, Referenceable, ipb.ISlicer)
class CallableSlicer(slicer.BaseSlicer):
"""Bound methods are serialized as my-reference sequences with negative
clid values."""
opentype = ('my-reference',)
def sliceBody(self, streamable, broker):
# TODO: consider this requirement, maybe based upon a Tub flag
# assert ipb.ISlicer(self.obj.im_self)
# or maybe even isinstance(self.obj.im_self, Referenceable)
puid = id(self.obj)
tracker = broker.getTrackerForMyCall(puid, self.obj)
yield tracker.clid
firstTime = tracker.send()
if firstTime:
# this is the first time the Call has crossed this wire. In
# addition to the clid, send the schema name and any URL this
# reference might be known by
schema = self.getSchema()
if schema:
yield schema
else:
yield ""
url = tracker.getURL()
if url:
yield url
def getSchema(self):
return None # TODO: not quite ready yet
# callables which are actually bound methods of a pb.Referenceable
# can use the schema from that
s = ipb.IReferenceable(self.obj.im_self, None)
if s:
return s.getSchemaForMethodNamed(self.obj.im_func.__name__)
# both bound methods and raw callables can also use a .schema
# attribute
return getattr(self.obj, "schema", None)
# The CallableSlicer is activated through PBRootSlicer.slicerTable, because a
# StorageBanana might want to stick with the old MethodSlicer/FunctionSlicer
# for these types
#registerAdapter(CallableSlicer, types.MethodType, ipb.ISlicer)
class ReferenceUnslicer(slicer.BaseUnslicer):
"""I turn an incoming 'my-reference' sequence into a RemoteReference or a
RemoteMethodReference."""
state = 0
clid = None
interfaceName = None
url = None
inameConstraint = StringConstraint(200) # TODO: only known RI names?
urlConstraint = StringConstraint(200)
def checkToken(self, typebyte, size):
if self.state == 0:
if typebyte not in (tokens.INT, tokens.NEG):
raise BananaError("reference ID must be an INT or NEG")
elif self.state == 1:
self.inameConstraint.checkToken(typebyte, size)
elif self.state == 2:
self.urlConstraint.checkToken(typebyte, size)
else:
raise Violation("too many parameters in my-reference")
def receiveChild(self, obj, ready_deferred=None):
assert not isinstance(obj, defer.Deferred)
assert ready_deferred is None
if self.state == 0:
self.clid = obj
self.state = 1
elif self.state == 1:
# must be the interface name
self.interfaceName = obj
if obj == "":
self.interfaceName = None
self.state = 2
elif self.state == 2:
# URL
self.url = obj
self.state = 3
else:
raise BananaError("Too many my-reference parameters")
def receiveClose(self):
if self.clid is None:
raise BananaError("sequence ended too early")
tracker = self.broker.getTrackerForYourReference(self.clid,
self.interfaceName,
self.url)
return tracker.getRef(), None
def describe(self):
if self.clid is None:
return "<ref-?>"
return "<ref-%s>" % self.clid
class RemoteReferenceTracker:
"""I hold the data necessary to locate (or create) a RemoteReference.
@ivar url: the target Referenceable's global URL
@ivar broker: the Broker which holds this RemoteReference
@ivar clid: for that Broker, the your-reference CLID for the
RemoteReference
@ivar interfaceName: the name of a RemoteInterface object that the
RemoteReference claims to implement
@ivar interface: our version of a RemoteInterface object that corresponds
to interfaceName
@ivar received_count: the number of times the remote end has sent us this
object. We must send back decref() calls to match.
@ivar ref: a weakref to the RemoteReference itself
"""
def __init__(self, parent, clid, url, interfaceName):
self.broker = parent
self.clid = clid
# TODO: the remote end sends us a global URL, when really it should
# probably send us a per-Tub name, which we can then concatenate to
# their TubID if/when we pass it on to others. By accepting a full
# URL, we give them the ability to sort-of spoof others. We could
# check that url.startswith(broker.remoteTub.baseURL), but the Right
# Way is to just not have them send the base part in the first place.
# I haven't yet made this change because I'm not yet positive it
# would work.. how exactly does the base url get sent, anyway? What
# about Tubs visible through multiple names?
self.url = url
self.interfaceName = interfaceName
self.interface = getRemoteInterfaceByName(interfaceName)
self.received_count = 0
self.ref = None
def __repr__(self):
s = "<RemoteReferenceTracker(clid=%d,url=%s)>" % (self.clid, self.url)
return s
def getRef(self):
"""Return the actual RemoteReference that we hold, creating it if
necessary. This is called when we receive a my-reference sequence
from the remote end, so we must increment our received_count."""
# self.ref might be None (if we haven't created it yet), or it might
# be a dead weakref (if it has been released but our _handleRefLost
# hasn't fired yet). In either case we need to make a new
# RemoteReference.
if self.ref is None or self.ref() is None:
ref = RemoteReference(self)
self.ref = weakref.ref(ref, self._refLost)
self.received_count += 1
return self.ref()
def _refLost(self, wref):
# don't do anything right now, we could be in the middle of all sorts
# of weird code. both __del__ and weakref callbacks can fire at any
# time. Almost as bad as threads..
# instead, do stuff later.
eventually(self._handleRefLost)
def _handleRefLost(self):
if self.ref() is None:
count, self.received_count = self.received_count, 0
if count == 0:
return
self.broker.freeYourReference(self, count)
# otherwise our RemoteReference is actually still alive, resurrected
# between the call to _refLost and the eventual call to
# _handleRefLost. In this case, don't decref anything.
class RemoteReferenceOnly(object):
implements(ipb.IRemoteReference)
def __init__(self, tracker):
"""@param tracker: the RemoteReferenceTracker which points to us"""
self.tracker = tracker
def getSturdyRef(self):
return self.tracker.sturdy
def notifyOnDisconnect(self, callback, *args, **kwargs):
"""Register a callback to run when we lose this connection.
The callback will be invoked with whatever extra arguments you
provide to this function. For example::
def my_callback(name, number):
print name, number+4
cookie = rref.notifyOnDisconnect(my_callback, 'bob', number=3)
This function returns an opaque cookie. If you want to cancel the
notification, pass this same cookie back to dontNotifyOnDisconnect::
rref.dontNotifyOnDisconnect(cookie)
Note that if the Tub is shutdown (via stopService), all
notifyOnDisconnect handlers are cancelled.
"""
# return a cookie (really the (cb,args,kwargs) tuple) that they must
# use to deregister
marker = self.tracker.broker.notifyOnDisconnect(callback,
*args, **kwargs)
return marker
def dontNotifyOnDisconnect(self, marker):
self.tracker.broker.dontNotifyOnDisconnect(marker)
def __repr__(self):
r = "<%s at 0x%x" % (self.__class__.__name__, abs(id(self)))
if self.tracker.url:
r += " [%s]" % self.tracker.url
r += ">"
return r
class RemoteReference(RemoteReferenceOnly):
def callRemote(self, _name, *args, **kwargs):
# Note: for consistency, *all* failures are reported asynchronously.
return defer.maybeDeferred(self._callRemote, _name, *args, **kwargs)
def callRemoteOnly(self, _name, *args, **kwargs):
# the remote end will not send us a response. The only error cases
# are arguments that don't match the schema, or broken invariants. In
# particular, DeadReferenceError will be silently consumed.
d = defer.maybeDeferred(self._callRemote, _name, _callOnly=True,
*args, **kwargs)
return None
def _callRemote(self, _name, *args, **kwargs):
req = None
broker = self.tracker.broker
# remember that "none" is not a valid constraint, so we use it to
# mean "not set by the caller", which means we fall back to whatever
# the RemoteInterface says. Using None would mean an AnyConstraint,
# which is not the same thing.
methodConstraintOverride = kwargs.get("_methodConstraint", "none")
resultConstraint = kwargs.get("_resultConstraint", "none")
useSchema = kwargs.get("_useSchema", True)
callOnly = kwargs.get("_callOnly", False)
if "_methodConstraint" in kwargs:
del kwargs["_methodConstraint"]
if "_resultConstraint" in kwargs:
del kwargs["_resultConstraint"]
if "_useSchema" in kwargs:
del kwargs["_useSchema"]
if "_callOnly" in kwargs:
del kwargs["_callOnly"]
if callOnly:
if broker.disconnected:
# DeadReferenceError is silently consumed
return
reqID = 0
else:
# newRequestID() could fail with a DeadReferenceError
reqID = broker.newRequestID()
# in this clause, we validate the outbound arguments against our
# notion of what the other end will accept (the RemoteInterface)
req = call.PendingRequest(reqID, self)
# first, figure out which method they want to invoke
(interfaceName,
methodName,
methodSchema) = self._getMethodInfo(_name)
req.methodName = methodName # for debugging
if methodConstraintOverride != "none":
methodSchema = methodConstraintOverride
if useSchema and methodSchema:
# check args against the arg constraint. This could fail if
# any arguments are of the wrong type
try:
methodSchema.checkAllArgs(args, kwargs, False)
except Violation, v:
v.setLocation("%s.%s(%s)" % (interfaceName, methodName,
v.getLocation()))
raise
# the Interface gets to constrain the return value too, so
# make a note of it to use later
req.setConstraint(methodSchema.getResponseConstraint())
# if the caller specified a _resultConstraint, that overrides
# the schema's one
if resultConstraint != "none":
# overrides schema
req.setConstraint(IConstraint(resultConstraint))
clid = self.tracker.clid
slicer = call.CallSlicer(reqID, clid, methodName, args, kwargs)
# up to this point, we are not committed to sending anything to the
# far end. The various phases of commitment are:
# 1: once we tell our broker about the PendingRequest, we must
# promise to retire it eventually. Specifically, if we encounter an
# error before we give responsibility to the connection, we must
# retire it ourselves.
# 2: once we start sending the CallSlicer to the other end (in
# particular, once they receive the reqID), they might send us a
# response, so we must be prepared to handle that. Giving the
# PendingRequest to the broker arranges for this to happen.
# So all failures which occur before these commitment events are
# entirely local: stale broker, bad method name, bad arguments. If
# anything raises an exception before this point, the PendingRequest
# is abandoned, and our maybeDeferred wrapper returns a failing
# Deferred.
# commitment point 1. We assume that if this call raises an
# exception, the broker will be sure to not track the dead
# PendingRequest
if not callOnly:
broker.addRequest(req)
# if callOnly, the PendingRequest will never know about the
# broker, and will therefore never ask to be removed from it
# TODO: there is a decidability problem here: if the reqID made
# it through, the other end will send us an answer (possibly an
# error if the remaining slices were aborted). If not, we will
# not get an answer. To decide whether we should remove our
# broker.waitingForAnswers[] entry, we need to know how far the
# slicing process made it.
try:
# commitment point 2
d = broker.send(slicer)
# d will fire when the last argument has been serialized. It will
# errback if the arguments (or any of their children) could not
# be serialized. We need to catch this case and errback the
# caller.
# if we got here, we have been able to start serializing the
# arguments. If serialization fails, the PendingRequest needs to
# be flunked (because we aren't guaranteed that the far end will
# do it).
d.addErrback(req.fail)
except:
req.fail(failure.Failure())
# the remote end could send back an error response for many reasons:
# bad method name
# bad argument types (violated their schema)
# exception during method execution
# method result violated the results schema
# something else could occur to cause an errback:
# connection lost before response completely received
# exception during deserialization of the response
# [but only if it occurs after the reqID is received]
# method result violated our results schema
# if none of those occurred, the callback will be run
return req.deferred
def _getMethodInfo(self, name):
assert type(name) is str
interfaceName = None
methodName = name
methodSchema = None
iface = self.tracker.interface
if iface:
interfaceName = iface.__remote_name__
try:
methodSchema = iface[name]
except KeyError:
raise Violation("%s(%s) does not offer %s" % \
(interfaceName, self, name))
return interfaceName, methodName, methodSchema
class RemoteMethodReferenceTracker(RemoteReferenceTracker):
def getRef(self):
if self.ref is None:
ref = RemoteMethodReference(self)
self.ref = weakref.ref(ref, self._refLost)
self.received_count += 1
return self.ref()
class RemoteMethodReference(RemoteReference):
def callRemote(self, *args, **kwargs):
# TODO: I suspect it would safer to use something other than
# 'callRemote' here.
# TODO: this probably needs a very different implementation
# there is no schema support yet, so we can't convert positional args
# into keyword args
assert args == ()
return RemoteReference.callRemote(self, "", *args, **kwargs)
def _getMethodInfo(self, name):
interfaceName = None
methodName = ""
methodSchema = None
return interfaceName, methodName, methodSchema
class YourReferenceSlicer(slicer.BaseSlicer):
"""I handle pb.RemoteReference objects (being sent back home to the
original pb.Referenceable-holder)
"""
def slice(self, streamable, broker):
self.streamable = streamable
tracker = self.obj.tracker
if tracker.broker == broker:
# sending back to home broker
yield 'your-reference'
yield tracker.clid
else:
# sending somewhere else
assert isinstance(tracker.url, str)
giftID = broker.makeGift(self.obj)
yield 'their-reference'
yield giftID
yield tracker.url
def describe(self):
return "<your-ref-%s>" % self.obj.tracker.clid
registerAdapter(YourReferenceSlicer, RemoteReference, ipb.ISlicer)
class YourReferenceUnslicer(slicer.LeafUnslicer):
"""I accept incoming (integer) your-reference sequences and try to turn
them back into the original Referenceable. I also accept (string)
your-reference sequences and try to turn them into a published
Referenceable that they did not have access to before."""
clid = None
def checkToken(self, typebyte, size):
if typebyte != tokens.INT:
raise BananaError("your-reference ID must be an INT")
def receiveChild(self, obj, ready_deferred=None):
assert not isinstance(obj, defer.Deferred)
assert ready_deferred is None
self.clid = obj
def receiveClose(self):
if self.clid is None:
raise BananaError("sequence ended too early")
obj = self.broker.getMyReferenceByCLID(self.clid)
if not obj:
raise Violation("unknown clid '%s'" % self.clid)
return obj, None
def describe(self):
return "<your-ref-%s>" % self.obj.refID
class TheirReferenceUnslicer(slicer.LeafUnslicer):
"""I accept gifts of third-party references. This is turned into a live
reference upon receipt."""
# (their-reference, giftID, URL)
state = 0
giftID = None
url = None
urlConstraint = StringConstraint(200)
def checkToken(self, typebyte, size):
if self.state == 0:
if typebyte != tokens.INT:
raise BananaError("their-reference giftID must be an INT")
elif self.state == 1:
self.urlConstraint.checkToken(typebyte, size)
else:
raise Violation("too many parameters in their-reference")
def receiveChild(self, obj, ready_deferred=None):
assert not isinstance(obj, defer.Deferred)
assert ready_deferred is None
if self.state == 0:
self.giftID = obj
self.state = 1
elif self.state == 1:
# URL
self.url = obj
self.state = 2
else:
raise BananaError("Too many their-reference parameters")
def receiveClose(self):
if self.giftID is None or self.url is None:
raise BananaError("sequence ended too early")
d = self.broker.tub.getReference(self.url)
d.addBoth(self.ackGift)
# we return a Deferred that will fire with the RemoteReference when
# it becomes available. The RemoteReference is not even referenceable
# until then.
return d,None
def ackGift(self, rref):
rb = self.broker.remote_broker
# if we lose the connection, they'll decref the gift anyway
rb.callRemoteOnly("decgift", giftID=self.giftID, count=1)
return rref
def describe(self):
if self.giftID is None:
return "<gift-?>"
return "<gift-%s>" % self.giftID
class SturdyRef(Copyable, RemoteCopy):
"""I am a pointer to a Referenceable that lives in some (probably remote)
Tub. This pointer is long-lived, however you cannot send messages with it
directly. To use it, you must ask your Tub to turn it into a
RemoteReference with tub.getReference(sturdyref).
The SturdyRef is associated with a URL: you can create a SturdyRef out of
a URL that you obtain from some other source, and you can ask the
SturdyRef for its URL.
SturdyRefs are serialized by copying their URL, and create an identical
SturdyRef on the receiving side."""
typeToCopy = copytype = "foolscap.SturdyRef"
encrypted = False
tubID = None
location = None
locationHints = []
name = None
def __init__(self, url=None):
if url:
# pb://key@{ip:port,host:port,[ipv6]:port}[/unix]/swissnumber
# i.e. pb://tubID@{locationHints..}/name
#
# it can live at any one of a variety of network-accessible
# locations, or at a single UNIX-domain socket.
#
# there is also an unauthenticated form, which is indexed by the
# single locationHint, because it does not have a TubID
if url.startswith("pb://"):
self.encrypted = True
url = url[len("pb://"):]
slash = url.rfind("/")
self.name = url[slash+1:]
at = url.find("@")
if at != -1:
self.tubID = url[:at]
self.locationHints = url[at+1:slash].split(",")
elif url.startswith("pbu://"):
self.encrypted = False
url = url[len("pbu://"):]
slash = url.rfind("/")
self.name = url[slash+1:]
self.tubID = None
self.location = url[:slash]
else:
raise ValueError("unknown PB-URL prefix in '%s'" % url)
def getTubRef(self):
if self.encrypted:
return TubRef(self.tubID, self.locationHints)
return NoAuthTubRef(self.location)
def getURL(self):
if self.encrypted:
return ("pb://" + self.tubID + "@" +
",".join(self.locationHints) +
"/" + self.name)
return "pbu://" + self.location + "/" + self.name
def __str__(self):
return self.getURL()
def _distinguishers(self):
"""Two SturdyRefs are equivalent if they point to the same object.
SturdyRefs to encrypted Tubs only pay attention to the TubID and the
reference name. SturdyRefs to unauthenticated Tubs must use the
location hint instead of the (missing) TubID. This method makes it
easier to compare a pair of SturdyRefs."""
if self.encrypted:
return (True, self.tubID, self.name)
return (False, self.location, self.name)
def __hash__(self):
return hash(self._distinguishers())
def __cmp__(self, them):
return (cmp(type(self), type(them)) or
cmp(self.__class__, them.__class__) or
cmp(self._distinguishers(), them._distinguishers()))
def asLiveRef(self):
"""Return an object that can be sent over the wire and unserialized
as a live RemoteReference on the far end. Use this when you have a
SturdyRef and want to give someone a reference to its target, but
when you haven't bothered to acquire your own live reference to it."""
return _AsLiveRef(self)
class _AsLiveRef:
implements(ipb.ISlicer)
def __init__(self, sturdy):
self.target = sturdy
def slice(self, streamable, banana):
yield 'their-reference'
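# NOTE: 'giftID' is not defined in this scope, so slicing an _AsLiveRef as
# written would raise NameError; this class appears to be unfinished.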
yield giftID
yield self.target.getURL()
yield [] # interfacenames
class TubRef:
"""This is a little helper class which provides a comparable identifier
for Tubs. TubRefs can be used as keys in dictionaries that track
connections to remote Tubs."""
encrypted = True
def __init__(self, tubID, locationHints=None):
self.tubID = tubID
self.locationHints = locationHints
def getLocations(self):
return self.locationHints
def getTubID(self):
return self.tubID
def __str__(self):
return "pb://" + self.tubID
def _distinguishers(self):
"""This serves the same purpose as SturdyRef._distinguishers."""
return (self.tubID,)
def __hash__(self):
return hash(self._distinguishers())
def __cmp__(self, them):
return (cmp(type(self), type(them)) or
cmp(self.__class__, them.__class__) or
cmp(self._distinguishers(), them._distinguishers()))
class NoAuthTubRef(TubRef):
# this is only used on outbound connections
encrypted = False
def __init__(self, location):
self.location = location
def getLocations(self):
return [self.location]
def getTubID(self):
return "<unauth>"
def __str__(self):
return "pbu://" + self.location
def _distinguishers(self):
"""This serves the same purpose as SturdyRef._distinguishers."""
return (self.location,)
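# Illustrative sketch (not part of the original source): because __hash__ and
# __cmp__ are derived from _distinguishers(), two TubRefs with the same tubID
# compare equal even when their location hints differ, so a TubRef can serve
# as a dictionary key for a connection table. The tubID and hints are made up.
def _example_tubref_as_key():
    connections = {}
    connections[TubRef("abc123", ["example.com:12345"])] = "connection"
    same_tub = TubRef("abc123", ["10.0.0.1:12345"])  # same tubID, other hints
    return connections[same_tub]   # -> "connection"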

View File

@ -0,0 +1,408 @@
import types, inspect
from zope.interface import interface, providedBy, implements
from foolscap.constraint import Constraint, OpenerConstraint, nothingTaster, \
IConstraint, UnboundedSchema, IRemoteMethodConstraint, Optional, Any
from foolscap.tokens import Violation, InvalidRemoteInterface
from foolscap.schema import addToConstraintTypeMap
from foolscap import ipb
class RemoteInterfaceClass(interface.InterfaceClass):
"""This metaclass lets RemoteInterfaces be a lot like Interfaces. The
methods are parsed differently (PB needs more information from them than
z.i extracts, and the methods can be specified with a RemoteMethodSchema
directly).
RemoteInterfaces can accept the following additional attribute::
__remote_name__: can be set to a string to specify the globally-unique
name for this interface. This should be a URL in a
namespace you administer. If not set, defaults to the
fully qualified classname.
RIFoo.names() returns the list of remote method names.
RIFoo['bar'] is still used to get information about method 'bar', however
it returns a RemoteMethodSchema instead of a z.i Method instance.
"""
def __init__(self, iname, bases=(), attrs=None, __module__=None):
if attrs is None:
interface.InterfaceClass.__init__(self, iname, bases, attrs,
__module__)
return
# parse (and remove) the attributes that make this a RemoteInterface
try:
rname, remote_attrs = self._parseRemoteInterface(iname, attrs)
except:
raise
# now let the normal InterfaceClass do its thing
interface.InterfaceClass.__init__(self, iname, bases, attrs,
__module__)
# now add all the remote methods that InterfaceClass would have
# complained about. This is really gross, and it really makes me
        # question why we're bothering to inherit from z.i.Interface at all. I
# will probably stop doing that soon, and just have our own
# meta-class, but I want to make sure you can still do
# 'implements(RIFoo)' from within a class definition.
a = getattr(self, "_InterfaceClass__attrs") # the ickiest part
a.update(remote_attrs)
self.__remote_name__ = rname
# finally, auto-register the interface
try:
registerRemoteInterface(self, rname)
except:
raise
def _parseRemoteInterface(self, iname, attrs):
remote_attrs = {}
remote_name = attrs.get("__remote_name__", iname)
# and see if there is a __remote_name__ . We delete it because
# InterfaceClass doesn't like arbitrary attributes
if attrs.has_key("__remote_name__"):
del attrs["__remote_name__"]
# determine all remotely-callable methods
names = [name for name in attrs.keys()
if ((type(attrs[name]) == types.FunctionType and
not name.startswith("_")) or
IConstraint.providedBy(attrs[name]))]
# turn them into constraints. Tag each of them with their name and
# the RemoteInterface they came from.
for name in names:
m = attrs[name]
if not IConstraint.providedBy(m):
m = RemoteMethodSchema(method=m)
m.name = name
m.interface = self
remote_attrs[name] = m
# delete the methods, so zope's InterfaceClass doesn't see them.
# Particularly necessary for things defined with IConstraints.
del attrs[name]
return remote_name, remote_attrs
RemoteInterface = RemoteInterfaceClass("RemoteInterface",
__module__="pb.flavors")
def getRemoteInterface(obj):
"""Get the (one) RemoteInterface supported by the object, or None."""
interfaces = list(providedBy(obj))
# TODO: versioned Interfaces!
ilist = []
for i in interfaces:
if isinstance(i, RemoteInterfaceClass):
if i not in ilist:
ilist.append(i)
assert len(ilist) <= 1, "don't use multiple RemoteInterfaces! %s" % (obj,)
if ilist:
return ilist[0]
return None
class DuplicateRemoteInterfaceError(Exception):
pass
RemoteInterfaceRegistry = {}
def registerRemoteInterface(iface, name=None):
if not name:
name = iface.__remote_name__
assert isinstance(iface, RemoteInterfaceClass)
if RemoteInterfaceRegistry.has_key(name):
old = RemoteInterfaceRegistry[name]
msg = "remote interface %s was registered with the same name (%s) as %s, please use __remote_name__ to provide a unique name" % (old, name, iface)
raise DuplicateRemoteInterfaceError(msg)
RemoteInterfaceRegistry[name] = iface
def getRemoteInterfaceByName(iname):
return RemoteInterfaceRegistry.get(iname)
class RemoteMethodSchema:
"""
This is a constraint for a single remotely-invokable method. It gets to
require, deny, or impose further constraints upon a set of named
arguments.
This constraint is created by using keyword arguments with the same
names as the target method's arguments. Two special names are used:
__ignoreUnknown__: if True, unexpected argument names are silently
dropped. (note that this makes the schema unbounded)
__acceptUnknown__: if True, unexpected argument names are always
accepted without a constraint (which also makes this schema unbounded)
    The remotely-accessible object's .getMethodSchema() method may return one
of these objects.
"""
implements(IRemoteMethodConstraint)
taster = {} # this should not be used as a top-level constraint
opentypes = [] # overkill
ignoreUnknown = False
acceptUnknown = False
name = None # method name, set when the RemoteInterface is parsed
interface = None # points to the RemoteInterface which defines the method
# under development
def __init__(self, method=None, _response=None, __options=[], **kwargs):
if method:
self.initFromMethod(method)
return
self.argumentNames = []
self.argConstraints = {}
self.required = []
self.responseConstraint = None
# __response in the argslist gets treated specially, I think it is
# mangled into _RemoteMethodSchema__response or something. When I
# change it to use _response instead, it works.
if _response:
self.responseConstraint = IConstraint(_response)
self.options = {} # return, wait, reliable, etc
if kwargs.has_key("__ignoreUnknown__"):
self.ignoreUnknown = kwargs["__ignoreUnknown__"]
del kwargs["__ignoreUnknown__"]
if kwargs.has_key("__acceptUnknown__"):
self.acceptUnknown = kwargs["__acceptUnknown__"]
del kwargs["__acceptUnknown__"]
for argname, constraint in kwargs.items():
self.argumentNames.append(argname)
constraint = IConstraint(constraint)
self.argConstraints[argname] = constraint
if not isinstance(constraint, Optional):
self.required.append(argname)
def initFromMethod(self, method):
# call this with the Interface's prototype method: the one that has
# argument constraints expressed as default arguments, and which
        # does nothing but return the appropriate return type
names, _, _, typeList = inspect.getargspec(method)
if names and names[0] == 'self':
why = "RemoteInterface methods should not have 'self' in their argument list"
raise InvalidRemoteInterface(why)
        if typeList is None:
            # inspect.getargspec() returns None, not an empty sequence, when
            # the method has no default argument values at all
            typeList = []
if len(names) != len(typeList):
# TODO: relax this, use schema=Any for the args that don't have
# default values. This would make:
# def foo(a, b=int): return None
# equivalent to:
# def foo(a=Any, b=int): return None
why = "RemoteInterface methods must have default values for all their arguments"
raise InvalidRemoteInterface(why)
self.argumentNames = names
self.argConstraints = {}
self.required = []
for i in range(len(names)):
argname = names[i]
constraint = typeList[i]
if not isinstance(constraint, Optional):
self.required.append(argname)
self.argConstraints[argname] = IConstraint(constraint)
# call the method, its 'return' value is the return constraint
self.responseConstraint = IConstraint(method())
self.options = {} # return, wait, reliable, etc
def getPositionalArgConstraint(self, argnum):
if argnum >= len(self.argumentNames):
raise Violation("too many positional arguments: %d >= %d" %
(argnum, len(self.argumentNames)))
argname = self.argumentNames[argnum]
c = self.argConstraints.get(argname)
assert c
if isinstance(c, Optional):
c = c.constraint
return (True, c)
def getKeywordArgConstraint(self, argname,
num_posargs=0, previous_kwargs=[]):
previous_args = self.argumentNames[:num_posargs]
for pkw in previous_kwargs:
assert pkw not in previous_args
previous_args.append(pkw)
if argname in previous_args:
raise Violation("got multiple values for keyword argument '%s'"
% (argname,))
c = self.argConstraints.get(argname)
if c:
if isinstance(c, Optional):
c = c.constraint
return (True, c)
# what do we do with unknown arguments?
if self.ignoreUnknown:
return (False, None)
if self.acceptUnknown:
return (True, None)
raise Violation("unknown argument '%s'" % argname)
def getResponseConstraint(self):
return self.responseConstraint
def checkAllArgs(self, args, kwargs, inbound):
# first we map the positional arguments
allargs = {}
if len(args) > len(self.argumentNames):
raise Violation("method takes %d positional arguments (%d given)"
% (len(self.argumentNames), len(args)))
for i,argvalue in enumerate(args):
allargs[self.argumentNames[i]] = argvalue
for argname,argvalue in kwargs.items():
if argname in allargs:
raise Violation("got multiple values for keyword argument '%s'"
% (argname,))
allargs[argname] = argvalue
for argname, argvalue in allargs.items():
accept, constraint = self.getKeywordArgConstraint(argname)
            if not accept:
                # this argument will be ignored by the far end. TODO: emit a
                # warning
                continue
            if constraint is None:
                # accepted without a constraint (__acceptUnknown__), so there
                # is nothing to check
                continue
            try:
                constraint.checkObject(argvalue, inbound)
except Violation, v:
v.setLocation("%s=" % argname)
raise
for argname in self.required:
if argname not in allargs:
raise Violation("missing required argument '%s'" % argname)
def checkResults(self, results, inbound):
if self.responseConstraint:
# this might raise a Violation. The caller will annotate its
# location appropriately: they have more information than we do.
self.responseConstraint.checkObject(results, inbound)
def maxSize(self, seen=None):
if self.acceptUnknown:
raise UnboundedSchema # there is no limit on that thing
if self.ignoreUnknown:
# for now, we ignore unknown arguments by accepting the object
# and then throwing it away. This makes us vulnerable to the
# memory consumed by that object. TODO: in the CallUnslicer,
# arrange to discard the ignored object instead of receiving it.
# When this is done, ignoreUnknown will not cause the schema to
# be unbounded and this clause should be removed.
raise UnboundedSchema
# TODO: implement the rest of maxSize, just like a dictionary
raise NotImplementedError
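# Illustrative sketch (not part of the original source): building a
# RemoteMethodSchema directly from keyword arguments, as described in the
# docstring above. The argument names 'name' and 'count' are arbitrary.
def _example_method_schema():
    s = RemoteMethodSchema(_response=int, name=str, count=int)
    s.checkAllArgs((), {"name": "alice", "count": 3}, True)  # passes
    s.checkResults(7, False)                                 # passes
    # a call with {"name": 5} raises Violation from the str constraint, and
    # one without 'count' raises "missing required argument 'count'"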
class UnconstrainedMethod:
"""I am a method constraint that accepts any arguments and any return
value.
To use this, assign it to a method name in a RemoteInterface::
class RIFoo(RemoteInterface):
def constrained_method(foo=int, bar=str): # this one is constrained
return str
not_method = UnconstrainedMethod() # this one is not
"""
implements(IRemoteMethodConstraint)
def getPositionalArgConstraint(self, argnum):
return (True, Any())
def getKeywordArgConstraint(self, argname, num_posargs=0,
previous_kwargs=[]):
return (True, Any())
def checkAllArgs(self, args, kwargs, inbound):
pass # accept everything
def getResponseConstraint(self):
return Any()
def checkResults(self, results, inbound):
pass # accept everything
class LocalInterfaceConstraint(Constraint):
"""This constraint accepts any (local) instance which implements the
given local Interface.
"""
# TODO: maybe accept RemoteCopy instances
# TODO: accept inbound your-references, if the local object they map to
# implements the interface
    # TODO: do we need a string-to-Interface map just like we have a
# classname-to-class/factory map?
taster = nothingTaster
opentypes = []
name = "LocalInterfaceConstraint"
def __init__(self, interface):
self.interface = interface
def checkObject(self, obj, inbound):
# TODO: maybe try to get an adapter instead?
if not self.interface.providedBy(obj):
raise Violation("'%s' does not provide interface %s"
% (obj, self.interface))
class RemoteInterfaceConstraint(OpenerConstraint):
"""This constraint accepts any RemoteReference that claims to be
associated with a remote Referenceable that implements the given
RemoteInterface. If 'interface' is None, just assert that it is a
RemoteReference at all.
"""
opentypes = [("my-reference",)]
# TODO: accept their-references too
name = "RemoteInterfaceConstraint"
def __init__(self, interface):
self.interface = interface
def checkObject(self, obj, inbound):
if inbound:
# this ought to be a RemoteReference that claims to be associated
# with a remote Referenceable that implements the desired
# interface.
if not ipb.IRemoteReference.providedBy(obj):
raise Violation("'%s' does not provide RemoteInterface %s, "
"and doesn't even look like a RemoteReference"
% (obj, self.interface))
if not self.interface:
return
iface = obj.tracker.interface
# TODO: this test probably doesn't handle subclasses of
# RemoteInterface, which might be useful (if it even works)
if not iface or iface != self.interface:
raise Violation("'%s' does not provide RemoteInterface %s"
% (obj, self.interface))
else:
# this ought to be a Referenceable which implements the desired
# interface
if not ipb.IReferenceable.providedBy(obj):
# TODO: maybe distinguish between OnlyReferenceable and
# Referenceable? which is more useful here?
raise Violation("'%s' is not a Referenceable" % (obj,))
if self.interface and not self.interface.providedBy(obj):
raise Violation("'%s' does not provide RemoteInterface %s"
% (obj, self.interface))
def _makeConstraint(t):
# This will be called for both local interfaces (IFoo) and remote
# interfaces (RIFoo), so we have to distinguish between them. The late
# import is to deal with a circular reference between this module and
# remoteinterface.py
if isinstance(t, RemoteInterfaceClass):
return RemoteInterfaceConstraint(t)
return LocalInterfaceConstraint(t)
addToConstraintTypeMap(interface.InterfaceClass, _makeConstraint)

View File

@ -0,0 +1,181 @@
# This module contains all user-visible Constraint subclasses, for
# convenience by user code which is defining RemoteInterfaces. The primitive
# ones are defined in constraint.py, while the constraints associated with
# specific open sequences (list, unicode, etc) are defined in the related
# slicer/list.py module, etc. A few are defined here.
# It also defines the constraintMap and constraintTypeMap, used when
# constructing constraints out of the convenience shorthand. This is used
# when processing the methods defined in a RemoteInterface (such that a
# default argument like x=int gets turned into an IntegerConstraint). New
# slicers that want to add to these mappings can use addToConstraintTypeMap
# or manipulate constraintMap directly.
# this imports slicers and constraint.py, but is not allowed to import any
# other Foolscap modules, to avoid import cycles.
"""
primitive constraints:
- types.StringType: string with maxLength=1k
- String(maxLength=1000): string with arbitrary maxLength
- types.BooleanType: boolean
- types.IntType: integer that fits in s_int32_t
- types.LongType: integer with abs(num) < 2**8192 (fits in 1024 bytes)
- Int(maxBytes=1024): integer with arbitrary maxValue=2**(8*maxBytes)
- types.FloatType: number
- Number(maxBytes=1024): float or integer with maxBytes
- interface: instance which implements (or adapts to) the Interface
- class: instance of the class or a subclass
- # unicode? types? none?
container constraints:
- TupleOf(constraint1, constraint2..): fixed size, per-element constraint
- ListOf(constraint, maxLength=30): all elements obey constraint
- DictOf(keyconstraint, valueconstraint): keys and values obey constraints
- AttributeDict(*attrTuples, ignoreUnknown=False):
- attrTuples are (name, constraint)
- ignoreUnknown=True means that received attribute names which aren't
listed in attrTuples should be ignored instead of raising an
UnknownAttrName exception
composite constraints:
- tuple: alternatives: must obey one of the different constraints
modifiers:
- Shared(constraint, refLimit=None): object may be referenced multiple times
within the serialization domain (question: which domain?). All
constraints default to refLimit=1, and a MultiplyReferenced exception
is raised as soon as the reference count goes above the limit.
refLimit=None means no limit is enforced.
- Optional(name, constraint, default=None): key is not required. If not
provided and default is None, key/attribute will not be created
Only valid inside DictOf and AttributeDict.
"""
from foolscap.tokens import Violation, UnknownSchemaType
# make constraints available in a single location
from foolscap.constraint import Constraint, Any, StringConstraint, \
IntegerConstraint, NumberConstraint, \
UnboundedSchema, IConstraint, Optional, Shared
from foolscap.slicers.bool import BooleanConstraint
from foolscap.slicers.dict import DictConstraint
from foolscap.slicers.list import ListConstraint
from foolscap.slicers.set import SetConstraint
from foolscap.slicers.tuple import TupleConstraint
from foolscap.slicers.none import Nothing
# we don't import RemoteMethodSchema from remoteinterface.py, because
# remoteinterface.py needs to import us (for addToConstraintTypeMap)
ignored = [Constraint, Any, StringConstraint, IntegerConstraint,
NumberConstraint, BooleanConstraint, DictConstraint,
ListConstraint, SetConstraint, TupleConstraint, Nothing,
Optional, Shared,
] # hush pyflakes
# convenience shortcuts
TupleOf = TupleConstraint
ListOf = ListConstraint
DictOf = DictConstraint
SetOf = SetConstraint
class PolyConstraint(Constraint):
name = "PolyConstraint"
def __init__(self, *alternatives):
self.alternatives = [IConstraint(a) for a in alternatives]
self.alternatives = tuple(self.alternatives)
# TODO: taster/opentypes should be a union of the alternatives'
def checkObject(self, obj, inbound):
ok = False
for c in self.alternatives:
try:
c.checkObject(obj, inbound)
ok = True
except Violation:
pass
if not ok:
raise Violation("does not satisfy any of %s" \
% (self.alternatives,))
def maxSize(self, seen=None):
if not seen: seen = []
if self in seen:
# TODO: if the PolyConstraint contains itself directly, the effect
# is a nop. If a descendent contains the ancestor PolyConstraint,
# then I think it's unbounded.. must draw this out
raise UnboundedSchema # recursion
seen.append(self)
return reduce(max, [c.maxSize(seen[:])
for c in self.alternatives])
def maxDepth(self, seen=None):
if not seen: seen = []
if self in seen:
raise UnboundedSchema # recursion
seen.append(self)
return reduce(max, [c.maxDepth(seen[:]) for c in self.alternatives])
ChoiceOf = PolyConstraint
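# Illustrative sketch (not part of the original source): ChoiceOf accepts an
# object that satisfies any one of its alternative constraints.
def _example_choiceof():
    c = ChoiceOf(str, int)
    c.checkObject("hello", False)   # satisfies the str alternative
    c.checkObject(5, False)         # satisfies the int alternative
    # an object that matches no alternative raises Violation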
constraintMap = {
str: StringConstraint(),
bool: BooleanConstraint(),
int: IntegerConstraint(),
long: IntegerConstraint(maxBytes=1024),
float: NumberConstraint(),
None: Nothing(),
}
# This module provides a function named addToConstraintTypeMap() which helps
# to resolve some import cycles.
constraintTypeMap = []
def addToConstraintTypeMap(typ, constraintMaker):
constraintTypeMap.insert(0, (typ, constraintMaker))
def _tupleConstraintMaker(t):
return TupleConstraint(*t)
addToConstraintTypeMap(tuple, _tupleConstraintMaker)
# this function transforms the simple syntax (as used in RemoteInterface
# method definitions) into Constraint instances. This function is registered
# as a zope.interface adapter hook, so that once we've been loaded, other
# code can just do IConstraint(stuff) and expect it to work.
def adapt_obj_to_iconstraint(iface, t):
if iface is not IConstraint:
return None
assert not IConstraint.providedBy(t) # not sure about this
c = constraintMap.get(t, None)
if c:
return c
for (typ, constraintMaker) in constraintTypeMap:
if isinstance(t, typ):
c = constraintMaker(t)
if c:
return c
# RIFoo means accept either a Referenceable that implements RIFoo, or a
# RemoteReference that points to just such a Referenceable. This is
    # hooked in by remoteinterface.py, when it calls addToConstraintTypeMap.
    # We are the only way to make constraints, so reaching this point means
    # nothing could handle 't'.
raise UnknownSchemaType("can't make constraint from '%s' (%s)" %
(t, type(t)))
from zope.interface.interface import adapter_hooks
adapter_hooks.append(adapt_obj_to_iconstraint)
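# Illustrative sketch (not part of the original source): with the adapter
# hook installed, the shorthand forms can be turned into Constraint instances
# simply by adapting them to IConstraint, which is what RemoteInterface
# method parsing does.
def _example_shorthand_adaptation():
    c = IConstraint(int)             # an IntegerConstraint
    c.checkObject(4, False)          # passes
    t = IConstraint((int, str))      # a TupleConstraint(int, str)
    t.checkObject((1, "x"), False)   # passes
    lc = ListOf(int, maxLength=3)
    lc.checkObject([1, 2, 3], False) # passes; a longer list raises Violation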
# how to accept "([(ref0" ?
# X = "TupleOf(ListOf(TupleOf(" * infinity
# ok, so you can't write a constraint that accepts it. I'm ok with that.

View File

@ -0,0 +1,306 @@
# -*- test-case-name: foolscap.test.test_banana -*-
from twisted.python.components import registerAdapter
from zope.interface import implements
from twisted.internet.defer import Deferred
import tokens
from tokens import Violation, BananaError
class SlicerClass(type):
# auto-register Slicers
def __init__(self, name, bases, dict):
type.__init__(self, name, bases, dict)
typ = dict.get('slices')
#reg = dict.get('slicerRegistry')
if typ:
registerAdapter(self, typ, tokens.ISlicer)
class BaseSlicer:
__metaclass__ = SlicerClass
implements(tokens.ISlicer)
slices = None
parent = None
sendOpen = True
opentype = ()
trackReferences = False
def __init__(self, obj):
# this simplifies Slicers which are adapters
self.obj = obj
def registerReference(self, refid, obj):
# optimize: most Slicers will delegate this up to the Root
return self.parent.registerReference(refid, obj)
def slicerForObject(self, obj):
# optimize: most Slicers will delegate this up to the Root
return self.parent.slicerForObject(obj)
def slice(self, streamable, banana):
# this is what makes us ISlicer
self.streamable = streamable
assert self.opentype
for o in self.opentype:
yield o
for t in self.sliceBody(streamable, banana):
yield t
def sliceBody(self, streamable, banana):
raise NotImplementedError
def childAborted(self, f):
return f
def describe(self):
return "??"
class ScopedSlicer(BaseSlicer):
"""This Slicer provides a containing scope for referenceable things like
lists. The same list will not be serialized twice within this scope, but
it will not survive outside it."""
def __init__(self, obj):
BaseSlicer.__init__(self, obj)
self.references = {} # maps id(obj) -> (obj,refid)
def registerReference(self, refid, obj):
# keep references here, not in the actual PBRootSlicer
# This use of id(obj) requires a bit of explanation. We are making
# the assumption that the object graph remains unmodified until
# serialization is complete. In particular, we assume that all the
# objects in it remain alive, and no new objects are added to it,
# until serialization is complete. id(obj) is only unique for live
# objects: once the object is garbage-collected, a new object may be
# created with the same id(obj) value.
#
# The concern is that a custom Slicer will call something that
# mutates the object graph before it has finished being serialized.
# This might be one which calls some user-level function during
# Slicing, or one which uses a Deferred to put off serialization for
# a while, creating an opportunity for some other code to get
# control.
# The specific concern is that if, in the middle of serialization, an
# object that was already serialized is gc'ed, and a new object is
# created and attached to a portion of the object graph that hasn't
# been serialized yet, and if the new object gets the same id(obj) as
# the dead object, then we could be tricked into sending the
# reference number of the old (dead) object. On the receiving end,
# this would result in a mangled object graph.
# User code isn't supposed to allow the object graph to change during
# serialization, so this mangling "should not happen" under normal
# circumstances. However, as a reasonably cheap way to mitigate the
# worst sort of mangling when user code *does* mess up,
# self.references maps from id(obj) to a tuple of (obj,refid) instead
        # of just the refid. This ensures that the object will stay alive
# until the ScopedSlicer dies, guaranteeing that we won't get
# duplicate id(obj) values. If user code mutates the object graph
# during serialization we might still get inconsistent results, but
# they'll be the ordinary kind of inconsistent results (snapshots of
# different branches of the object graph at different points in time)
# rather than the blatantly wrong mangling that would occur with
# re-used id(obj) values.
self.references[id(obj)] = (obj,refid)
def slicerForObject(self, obj):
# check for an object which was sent previously or has at least
# started sending
obj_refid = self.references.get(id(obj), None)
if obj_refid is not None:
# we've started to send this object already, so just include a
# reference to it
return ReferenceSlicer(obj_refid[1])
# otherwise go upstream so we can serialize the object completely
return self.parent.slicerForObject(obj)
UnslicerRegistry = {}
BananaUnslicerRegistry = {}
def registerUnslicer(opentype, factory, registry=None):
if registry is None:
registry = UnslicerRegistry
assert not registry.has_key(opentype)
registry[opentype] = factory
class UnslicerClass(type):
# auto-register Unslicers
def __init__(self, name, bases, dict):
type.__init__(self, name, bases, dict)
opentype = dict.get('opentype')
reg = dict.get('unslicerRegistry')
if opentype:
registerUnslicer(opentype, self, reg)
class BaseUnslicer:
__metaclass__ = UnslicerClass
opentype = None
implements(tokens.IUnslicer)
def __init__(self):
pass
def describe(self):
return "??"
def setConstraint(self, constraint):
pass
def start(self, count):
pass
def checkToken(self, typebyte, size):
return # no restrictions
def openerCheckToken(self, typebyte, size, opentype):
return self.parent.openerCheckToken(typebyte, size, opentype)
def open(self, opentype):
"""Return an IUnslicer object based upon the 'opentype' tuple.
Subclasses that wish to change the way opentypes are mapped to
Unslicers can do so by changing this behavior.
This method does not apply constraints, it only serves to map
opentype into Unslicer. Most subclasses will implement this by
delegating the request to their parent (and thus, eventually, to the
RootUnslicer), and will set the new child's .opener attribute so
that they can do the same. Subclasses that wish to change the way
opentypes are mapped to Unslicers can do so by changing this
behavior."""
return self.parent.open(opentype)
def doOpen(self, opentype):
"""Return an IUnslicer object based upon the 'opentype' tuple. This
object will receive all tokens destined for the subnode.
If you want to enforce a constraint, you must override this method
and do two things: make sure your constraint accepts the opentype,
and set a per-item constraint on the new child unslicer.
This method gets the IUnslicer from our .open() method. That might
        return None instead of a child unslicer if they want a
multi-token opentype tuple, so be sure to check for Noneness before
adding a per-item constraint.
"""
return self.open(opentype)
def receiveChild(self, obj, ready_deferred=None):
pass
def reportViolation(self, why):
return why
def receiveClose(self):
raise NotImplementedError
def finish(self):
pass
def setObject(self, counter, obj):
"""To pass references to previously-sent objects, the [OPEN,
'reference', number, CLOSE] sequence is used. The numbers are
generated implicitly by the sending Banana, counting from 0 for the
object described by the very first OPEN sent over the wire,
incrementing for each subsequent one. The objects themselves are
        stored in any/all Unslicers who care to. Generally this is the
RootUnslicer, but child slices could do it too if they wished.
"""
# TODO: examine how abandoned child objects could mess up this
# counter
pass
def getObject(self, counter):
"""'None' means 'ask our parent instead'.
"""
return None
def explode(self, failure):
"""If something goes wrong in a Deferred callback, it may be too
        late to reject the token and do normal error handling. I haven't
figured out how to do sensible error-handling in this situation.
This method exists to make sure that the exception shows up
*somewhere*. If this is called, it is also likely that a placeholder
(probably a Deferred) will be left in the unserialized object about
to be handed to the RootUnslicer.
"""
print "KABOOM"
print failure
self.protocol.exploded = failure
class ScopedUnslicer(BaseUnslicer):
"""This Unslicer provides a containing scope for referenceable things
like lists. It corresponds to the ScopedSlicer base class."""
def __init__(self):
BaseUnslicer.__init__(self)
self.references = {}
def setObject(self, counter, obj):
if self.protocol.debugReceive:
print "setObject(%s): %s{%s}" % (counter, obj, id(obj))
self.references[counter] = obj
def getObject(self, counter):
obj = self.references.get(counter)
if self.protocol.debugReceive:
print "getObject(%s) -> %s{%s}" % (counter, obj, id(obj))
return obj
class LeafUnslicer(BaseUnslicer):
# inherit from this to reject any child nodes
# .checkToken in LeafUnslicer subclasses should reject OPEN tokens
def doOpen(self, opentype):
raise Violation("'%s' does not accept sub-objects" % self)
# References are special enough to put here instead of slicers/
class ReferenceSlicer(BaseSlicer):
# this is created explicitly, not as an adapter
opentype = ('reference',)
trackReferences = False
def __init__(self, refid):
assert type(refid) is int
self.refid = refid
def sliceBody(self, streamable, banana):
yield self.refid
class ReferenceUnslicer(LeafUnslicer):
opentype = ('reference',)
constraint = None
finished = False
def setConstraint(self, constraint):
self.constraint = constraint
def checkToken(self, typebyte,size):
if typebyte != tokens.INT:
raise BananaError("ReferenceUnslicer only accepts INTs")
def receiveChild(self, obj, ready_deferred=None):
assert not isinstance(obj, Deferred)
assert ready_deferred is None
if self.finished:
raise BananaError("ReferenceUnslicer only accepts one int")
self.obj = self.protocol.getObject(obj)
self.finished = True
# assert that this conforms to the constraint
if self.constraint:
self.constraint.checkObject(self.obj, True)
# TODO: it might be a Deferred, but we should know enough about the
# incoming value to check the constraint. This requires a subclass
# of Deferred which can give us the metadata.
def receiveClose(self):
return self.obj, None

View File

@ -0,0 +1,36 @@
######################## Slicers+Unslicers
# note that Slicing is always easier than Unslicing, because Unslicing
# is the side where you are dealing with the danger
from foolscap.slicers.none import NoneSlicer, NoneUnslicer
from foolscap.slicers.bool import BooleanSlicer, BooleanUnslicer
from foolscap.slicers.unicode import UnicodeSlicer, UnicodeUnslicer
from foolscap.slicers.list import ListSlicer, ListUnslicer
from foolscap.slicers.tuple import TupleSlicer, TupleUnslicer
from foolscap.slicers.set import SetSlicer, SetUnslicer
from foolscap.slicers.set import ImmutableSetSlicer, ImmutableSetUnslicer
#from foolscap.slicers.set import BuiltinSetSlicer
from foolscap.slicers.dict import DictSlicer, DictUnslicer, OrderedDictSlicer
from foolscap.slicers.vocab import ReplaceVocabSlicer, ReplaceVocabUnslicer
from foolscap.slicers.vocab import ReplaceVocabularyTable, AddToVocabularyTable
from foolscap.slicers.vocab import AddVocabSlicer, AddVocabUnslicer
from foolscap.slicers.root import RootSlicer, RootUnslicer
# appease pyflakes
unused = [
NoneSlicer, NoneUnslicer,
BooleanSlicer, BooleanUnslicer,
UnicodeSlicer, UnicodeUnslicer,
ListSlicer, ListUnslicer,
TupleSlicer, TupleUnslicer,
SetSlicer, SetUnslicer,
ImmutableSetSlicer, ImmutableSetUnslicer,
#from foolscap.slicers.set import BuiltinSetSlicer
DictSlicer, DictUnslicer, OrderedDictSlicer,
ReplaceVocabSlicer, ReplaceVocabUnslicer,
ReplaceVocabularyTable, AddToVocabularyTable,
AddVocabSlicer, AddVocabUnslicer,
RootSlicer, RootUnslicer,
]

View File

@ -0,0 +1,80 @@
# -*- test-case-name: foolscap.test.test_banana -*-
from twisted.python.components import registerAdapter
from twisted.internet.defer import Deferred
from foolscap import tokens
from foolscap.tokens import Violation, BananaError
from foolscap.slicer import BaseSlicer, LeafUnslicer
from foolscap.constraint import OpenerConstraint, IntegerConstraint, Any
class BooleanSlicer(BaseSlicer):
opentype = ('boolean',)
trackReferences = False
def sliceBody(self, streamable, banana):
if self.obj:
yield 1
else:
yield 0
registerAdapter(BooleanSlicer, bool, tokens.ISlicer)
class BooleanUnslicer(LeafUnslicer):
opentype = ('boolean',)
value = None
constraint = None
def setConstraint(self, constraint):
if isinstance(constraint, Any):
return
assert isinstance(constraint, BooleanConstraint)
self.constraint = constraint
def checkToken(self, typebyte, size):
if typebyte != tokens.INT:
raise BananaError("BooleanUnslicer only accepts an INT token")
if self.value != None:
raise BananaError("BooleanUnslicer only accepts one token")
def receiveChild(self, obj, ready_deferred=None):
assert not isinstance(obj, Deferred)
assert ready_deferred is None
assert type(obj) == int
if self.constraint:
if self.constraint.value != None:
if bool(obj) != self.constraint.value:
raise Violation("This boolean can only be %s" % \
self.constraint.value)
self.value = bool(obj)
def receiveClose(self):
return self.value, None
def describe(self):
return "<bool>"
class BooleanConstraint(OpenerConstraint):
strictTaster = True
opentypes = [("boolean",)]
_myint = IntegerConstraint()
name = "BooleanConstraint"
def __init__(self, value=None):
# self.value is a joke. This allows you to use a schema of
# BooleanConstraint(True) which only accepts 'True'. I cannot
# imagine a possible use for this, but it made me laugh.
self.value = value
def checkObject(self, obj, inbound):
if type(obj) != bool:
raise Violation("not a bool")
if self.value != None:
if obj != self.value:
raise Violation("not %s" % self.value)
def maxSize(self, seen=None):
if not seen: seen = []
return self.OPENBYTES("boolean") + self._myint.maxSize(seen)
def maxDepth(self, seen=None):
if not seen: seen = []
return 1+self._myint.maxDepth(seen)
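# Illustrative sketch (not part of the original source): the 'value' argument
# described above narrows the constraint to a single boolean.
def _example_boolean_constraint():
    BooleanConstraint().checkObject(False, False)     # any bool is accepted
    BooleanConstraint(True).checkObject(True, False)  # only True is accepted
    # BooleanConstraint(True).checkObject(False, False) raises Violation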

View File

@ -0,0 +1,162 @@
# -*- test-case-name: foolscap.test.test_banana -*-
from twisted.python import log
from twisted.internet.defer import Deferred
from foolscap.tokens import Violation, BananaError
from foolscap.slicer import BaseSlicer, BaseUnslicer
from foolscap.constraint import OpenerConstraint, Any, UnboundedSchema, IConstraint
class DictSlicer(BaseSlicer):
opentype = ('dict',)
trackReferences = True
slices = None
def sliceBody(self, streamable, banana):
for key,value in self.obj.items():
yield key
yield value
class DictUnslicer(BaseUnslicer):
opentype = ('dict',)
gettingKey = True
keyConstraint = None
valueConstraint = None
maxKeys = None
def setConstraint(self, constraint):
if isinstance(constraint, Any):
return
assert isinstance(constraint, DictConstraint)
self.keyConstraint = constraint.keyConstraint
self.valueConstraint = constraint.valueConstraint
self.maxKeys = constraint.maxKeys
def start(self, count):
self.d = {}
self.protocol.setObject(count, self.d)
self.key = None
def checkToken(self, typebyte, size):
if self.maxKeys != None:
if len(self.d) >= self.maxKeys:
raise Violation("the dict is full")
if self.gettingKey:
if self.keyConstraint:
self.keyConstraint.checkToken(typebyte, size)
else:
if self.valueConstraint:
self.valueConstraint.checkToken(typebyte, size)
def doOpen(self, opentype):
if self.maxKeys != None:
if len(self.d) >= self.maxKeys:
raise Violation("the dict is full")
if self.gettingKey:
if self.keyConstraint:
self.keyConstraint.checkOpentype(opentype)
else:
if self.valueConstraint:
self.valueConstraint.checkOpentype(opentype)
unslicer = self.open(opentype)
if unslicer:
if self.gettingKey:
if self.keyConstraint:
unslicer.setConstraint(self.keyConstraint)
else:
if self.valueConstraint:
unslicer.setConstraint(self.valueConstraint)
return unslicer
def update(self, value, key):
# this is run as a Deferred callback, hence the backwards arguments
self.d[key] = value
def receiveChild(self, obj, ready_deferred=None):
assert not isinstance(obj, Deferred)
assert ready_deferred is None
if self.gettingKey:
self.receiveKey(obj)
else:
self.receiveValue(obj)
self.gettingKey = not self.gettingKey
def receiveKey(self, key):
# I don't think it is legal (in python) to use an incomplete object
# as a dictionary key, because you must have all the contents to
# hash it. Someone could fake up a token stream to hit this case,
# however: OPEN(dict), OPEN(tuple), OPEN(reference), 0, CLOSE, CLOSE,
# "value", CLOSE
if isinstance(key, Deferred):
raise BananaError("incomplete object as dictionary key")
try:
if self.d.has_key(key):
raise BananaError("duplicate key '%s'" % key)
except TypeError:
raise BananaError("unhashable key '%s'" % key)
self.key = key
def receiveValue(self, value):
if isinstance(value, Deferred):
value.addCallback(self.update, self.key)
value.addErrback(log.err)
self.d[self.key] = value # placeholder
def receiveClose(self):
return self.d, None
def describe(self):
if self.gettingKey:
return "{}"
else:
return "{}[%s]" % self.key
class OrderedDictSlicer(DictSlicer):
slices = dict
def sliceBody(self, streamable, banana):
keys = self.obj.keys()
keys.sort()
for key in keys:
value = self.obj[key]
yield key
yield value
class DictConstraint(OpenerConstraint):
opentypes = [("dict",)]
name = "DictConstraint"
def __init__(self, keyConstraint, valueConstraint, maxKeys=30):
self.keyConstraint = IConstraint(keyConstraint)
self.valueConstraint = IConstraint(valueConstraint)
self.maxKeys = maxKeys
def checkObject(self, obj, inbound):
if not isinstance(obj, dict):
raise Violation, "'%s' (%s) is not a Dictionary" % (obj,
type(obj))
if self.maxKeys != None and len(obj) > self.maxKeys:
raise Violation, "Dict keys=%d > maxKeys=%d" % (len(obj),
self.maxKeys)
for key, value in obj.iteritems():
self.keyConstraint.checkObject(key, inbound)
self.valueConstraint.checkObject(value, inbound)
def maxSize(self, seen=None):
if not seen: seen = []
if self in seen:
raise UnboundedSchema # recursion
seen.append(self)
if self.maxKeys == None:
raise UnboundedSchema
keySize = self.keyConstraint.maxSize(seen[:])
valueSize = self.valueConstraint.maxSize(seen[:])
return self.OPENBYTES("dict") + self.maxKeys * (keySize + valueSize)
def maxDepth(self, seen=None):
if not seen: seen = []
if self in seen:
raise UnboundedSchema # recursion
seen.append(self)
keyDepth = self.keyConstraint.maxDepth(seen[:])
valueDepth = self.valueConstraint.maxDepth(seen[:])
return 1 + max(keyDepth, valueDepth)
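# Illustrative sketch (not part of the original source): DictConstraint
# checks every key and value against its two sub-constraints, up to maxKeys
# entries.
def _example_dict_constraint():
    c = DictConstraint(str, int, maxKeys=2)
    c.checkObject({"a": 1, "b": 2}, False)   # passes
    # a third key raises Violation ("Dict keys=3 > maxKeys=2"), and a
    # non-integer value raises Violation from the value constraint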

View File

@ -0,0 +1,146 @@
# -*- test-case-name: foolscap.test.test_banana -*-
from twisted.python import log
from twisted.internet.defer import Deferred
from foolscap.tokens import Violation
from foolscap.slicer import BaseSlicer, BaseUnslicer
from foolscap.constraint import OpenerConstraint, Any, UnboundedSchema, IConstraint
class ListSlicer(BaseSlicer):
opentype = ("list",)
trackReferences = True
slices = list
def sliceBody(self, streamable, banana):
for i in self.obj:
yield i
class ListUnslicer(BaseUnslicer):
opentype = ("list",)
maxLength = None
itemConstraint = None
debug = False
def setConstraint(self, constraint):
if isinstance(constraint, Any):
return
assert isinstance(constraint, ListConstraint)
self.maxLength = constraint.maxLength
self.itemConstraint = constraint.constraint
def start(self, count):
#self.opener = foo # could replace it if we wanted to
self.list = []
self.count = count
if self.debug:
print "%s[%d].start with %s" % (self, self.count, self.list)
self.protocol.setObject(count, self.list)
def checkToken(self, typebyte, size):
if self.maxLength != None and len(self.list) >= self.maxLength:
# list is full, no more tokens accepted
# this is hit if the max+1 item is a primitive type
raise Violation("the list is full")
if self.itemConstraint:
self.itemConstraint.checkToken(typebyte, size)
def doOpen(self, opentype):
# decide whether the given object type is acceptable here. Raise a
# Violation exception if not, otherwise give it to our opener (which
# will normally be the RootUnslicer). Apply a constraint to the new
# unslicer.
if self.maxLength != None and len(self.list) >= self.maxLength:
# this is hit if the max+1 item is a non-primitive type
raise Violation("the list is full")
if self.itemConstraint:
self.itemConstraint.checkOpentype(opentype)
unslicer = self.open(opentype)
if unslicer:
if self.itemConstraint:
unslicer.setConstraint(self.itemConstraint)
return unslicer
def update(self, obj, index):
# obj has already passed typechecking
if self.debug:
print "%s[%d].update: [%d]=%s" % (self, self.count, index, obj)
assert isinstance(index, int)
self.list[index] = obj
return obj
def receiveChild(self, obj, ready_deferred=None):
assert ready_deferred is None
if self.debug:
print "%s[%d].receiveChild(%s)" % (self, self.count, obj)
# obj could be a primitive type, a Deferred, or a complex type like
# those returned from an InstanceUnslicer. However, the individual
# object has already been through the schema validation process. The
# only remaining question is whether the larger schema will accept
# it.
if self.maxLength != None and len(self.list) >= self.maxLength:
# this is redundant
# (if it were a non-primitive one, it would be caught in doOpen)
# (if it were a primitive one, it would be caught in checkToken)
raise Violation("the list is full")
if isinstance(obj, Deferred):
if self.debug:
print " adding my update[%d] to %s" % (len(self.list), obj)
obj.addCallback(self.update, len(self.list))
obj.addErrback(self.printErr)
self.list.append("placeholder")
else:
self.list.append(obj)
def printErr(self, why):
print "ERR!"
print why.getBriefTraceback()
log.err(why)
def receiveClose(self):
return self.list, None
def describe(self):
return "[%d]" % len(self.list)
class ListConstraint(OpenerConstraint):
"""The object must be a list of objects, with a given maximum length. To
    accept lists of any length, use maxLength=None (but you will get an
UnboundedSchema warning). All member objects must obey the given
constraint."""
opentypes = [("list",)]
name = "ListConstraint"
def __init__(self, constraint, maxLength=30, minLength=0):
self.constraint = IConstraint(constraint)
self.maxLength = maxLength
self.minLength = minLength
def checkObject(self, obj, inbound):
if not isinstance(obj, list):
raise Violation("not a list")
if self.maxLength is not None and len(obj) > self.maxLength:
raise Violation("list too long")
if len(obj) < self.minLength:
raise Violation("list too short")
for o in obj:
self.constraint.checkObject(o, inbound)
def maxSize(self, seen=None):
if not seen: seen = []
if self in seen:
raise UnboundedSchema # recursion
seen.append(self)
if self.maxLength == None:
raise UnboundedSchema
return (self.OPENBYTES("list") +
self.maxLength * self.constraint.maxSize(seen))
def maxDepth(self, seen=None):
if not seen: seen = []
if self in seen:
raise UnboundedSchema # recursion
seen.append(self)
return 1 + self.constraint.maxDepth(seen)
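# Illustrative sketch (not part of the original source): ListConstraint
# bounds the list length and applies the item constraint to every member.
def _example_list_constraint():
    c = ListConstraint(int, maxLength=3)
    c.checkObject([1, 2, 3], False)   # passes
    # [1, 2, 3, 4] raises Violation("list too long"), and [1, "x"] raises
    # Violation from the per-item integer constraint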

View File

@ -0,0 +1,41 @@
# -*- test-case-name: foolscap.test.test_banana -*-
from foolscap.tokens import Violation, BananaError
from foolscap.slicer import BaseSlicer, LeafUnslicer
from foolscap.constraint import OpenerConstraint
class NoneSlicer(BaseSlicer):
opentype = ('none',)
trackReferences = False
slices = type(None)
def sliceBody(self, streamable, banana):
# hmm, we need an empty generator. I think a sequence is the only way
# to accomplish this, other than 'if 0: yield' or something silly
return []
class NoneUnslicer(LeafUnslicer):
opentype = ('none',)
def checkToken(self, typebyte, size):
raise BananaError("NoneUnslicer does not accept any tokens")
def receiveClose(self):
return None, None
class Nothing(OpenerConstraint):
"""Accept only 'None'."""
strictTaster = True
opentypes = [("none",)]
name = "Nothing"
def checkObject(self, obj, inbound):
if obj is not None:
raise Violation("'%s' is not None" % (obj,))
def maxSize(self, seen=None):
if not seen: seen = []
return self.OPENBYTES("none")
def maxDepth(self, seen=None):
if not seen: seen = []
return 1

View File

@ -0,0 +1,211 @@
# -*- test-case-name: foolscap.test.test_banana -*-
import types
from zope.interface import implements
from twisted.internet.defer import Deferred
from foolscap import tokens
from foolscap.tokens import Violation, BananaError
from foolscap.slicer import BaseUnslicer
from foolscap.slicer import UnslicerRegistry, BananaUnslicerRegistry
from foolscap.slicers.vocab import ReplaceVocabularyTable, AddToVocabularyTable
class RootSlicer:
implements(tokens.ISlicer, tokens.IRootSlicer)
streamableInGeneral = True
producingDeferred = None
objectSentDeferred = None
slicerTable = {}
debug = False
def __init__(self, protocol):
self.protocol = protocol
self.sendQueue = []
def allowStreaming(self, streamable):
self.streamableInGeneral = streamable
def registerReference(self, refid, obj):
pass
def slicerForObject(self, obj):
# could use a table here if you think it'd be faster than an
# adapter lookup
if self.debug: print "slicerForObject(%s)" % type(obj)
# do the adapter lookup first, so that registered adapters override
# UnsafeSlicerTable's InstanceSlicer
slicer = tokens.ISlicer(obj, None)
if slicer:
if self.debug: print "got ISlicer", slicer
return slicer
slicerFactory = self.slicerTable.get(type(obj))
if slicerFactory:
if self.debug: print " got slicerFactory", slicerFactory
return slicerFactory(obj)
if issubclass(type(obj), types.InstanceType):
name = str(obj.__class__)
else:
name = str(type(obj))
if self.debug: print "cannot serialize %s (%s)" % (obj, name)
raise Violation("cannot serialize %s (%s)" % (obj, name))
def slice(self):
return self
def __iter__(self):
return self # we are our own iterator
def next(self):
if self.objectSentDeferred:
self.objectSentDeferred.callback(None)
self.objectSentDeferred = None
if self.sendQueue:
(obj, self.objectSentDeferred) = self.sendQueue.pop()
self.streamable = self.streamableInGeneral
return obj
if self.protocol.debugSend:
print "LAST BAG"
self.producingDeferred = Deferred()
self.streamable = True
return self.producingDeferred
def childAborted(self, f):
assert self.objectSentDeferred
self.objectSentDeferred.errback(f)
self.objectSentDeferred = None
return None
def send(self, obj):
# obj can also be a Slicer, say, a CallSlicer. We return a Deferred
# which fires when the object has been fully serialized.
idle = (len(self.protocol.slicerStack) == 1) and not self.sendQueue
objectSentDeferred = Deferred()
self.sendQueue.append((obj, objectSentDeferred))
if idle:
# wake up
if self.protocol.debugSend:
print " waking up to send"
if self.producingDeferred:
d = self.producingDeferred
self.producingDeferred = None
# TODO: consider reactor.callLater(0, d.callback, None)
# I'm not sure it's actually necessary, though
d.callback(None)
return objectSentDeferred
def describe(self):
return "<RootSlicer>"
def connectionLost(self, why):
# abandon everything we wanted to send
if self.objectSentDeferred:
self.objectSentDeferred.errback(why)
self.objectSentDeferred = None
for obj, d in self.sendQueue:
d.errback(why)
self.sendQueue = []
class RootUnslicer(BaseUnslicer):
# topRegistries is used for top-level objects
topRegistries = [UnslicerRegistry, BananaUnslicerRegistry]
# openRegistries is used for everything at lower levels
openRegistries = [UnslicerRegistry]
constraint = None
openCount = None
def __init__(self):
self.objects = {}
keys = []
for r in self.topRegistries + self.openRegistries:
for k in r.keys():
keys.append(len(k[0]))
self.maxIndexLength = reduce(max, keys)
def start(self, count):
pass
def setConstraint(self, constraint):
        # this constrains top-level objects. E.g., if this is an
# IntegerConstraint, then only integers will be accepted.
self.constraint = constraint
def checkToken(self, typebyte, size):
if self.constraint:
self.constraint.checkToken(typebyte, size)
def openerCheckToken(self, typebyte, size, opentype):
if typebyte == tokens.STRING:
if size > self.maxIndexLength:
why = "STRING token is too long, %d>%d" % \
(size, self.maxIndexLength)
raise Violation(why)
elif typebyte == tokens.VOCAB:
return
else:
# TODO: hack for testing
raise Violation("index token 0x%02x not STRING or VOCAB" % \
ord(typebyte))
raise BananaError("index token 0x%02x not STRING or VOCAB" % \
ord(typebyte))
def open(self, opentype):
# called (by delegation) by the top Unslicer on the stack, regardless
# of what kind of unslicer it is. This is only used for "internal"
# objects: non-top-level nodes
assert len(self.protocol.receiveStack) > 1
for reg in self.openRegistries:
opener = reg.get(opentype)
if opener is not None:
child = opener()
return child
else:
raise Violation("unknown OPEN type %s" % (opentype,))
def doOpen(self, opentype):
# this is only called for top-level objects
assert len(self.protocol.receiveStack) == 1
if self.constraint:
self.constraint.checkOpentype(opentype)
for reg in self.topRegistries:
opener = reg.get(opentype)
if opener is not None:
child = opener()
break
else:
raise Violation("unknown top-level OPEN type %s" % (opentype,))
if self.constraint:
child.setConstraint(self.constraint)
return child
def receiveChild(self, obj, ready_deferred=None):
assert not isinstance(obj, Deferred)
assert ready_deferred is None
if self.protocol.debugReceive:
print "RootUnslicer.receiveChild(%s)" % (obj,)
self.objects = {}
if obj in (ReplaceVocabularyTable, AddToVocabularyTable):
# the unslicer has already changed the vocab table
return
if self.protocol.exploded:
print "protocol exploded, can't deliver object"
print self.protocol.exploded
self.protocol.receivedObject(self.protocol.exploded)
return
self.protocol.receivedObject(obj) # give finished object to Banana
def receiveClose(self):
raise BananaError("top-level should never receive CLOSE tokens")
def reportViolation(self, why):
return self.protocol.reportViolation(why)
def describe(self):
return "<RootUnslicer>"
def setObject(self, counter, obj):
pass
def getObject(self, counter):
return None

View File

@ -0,0 +1,114 @@
# -*- test-case-name: foolscap.test.test_banana -*-
import sets
from foolscap.slicers.list import ListSlicer, ListUnslicer
from foolscap.tokens import Violation
from foolscap.constraint import OpenerConstraint, UnboundedSchema, Any, \
IConstraint
class SetSlicer(ListSlicer):
opentype = ("set",)
trackReferences = True
slices = sets.Set
def sliceBody(self, streamable, banana):
for i in self.obj:
yield i
class ImmutableSetSlicer(SetSlicer):
opentype = ("immutable-set",)
trackReferences = False
slices = sets.ImmutableSet
have_builtin_set = False
try:
set
# python2.4 has a builtin 'set' type, which is mutable
have_builtin_set = True
class BuiltinSetSlicer(SetSlicer):
slices = set
class BuiltinFrozenSetSlicer(ImmutableSetSlicer):
slices = frozenset
except NameError:
# oh well, I guess we don't have 'set'
pass
class SetUnslicer(ListUnslicer):
opentype = ("set",)
def receiveClose(self):
return sets.Set(self.list), None
def setConstraint(self, constraint):
if isinstance(constraint, Any):
return
assert isinstance(constraint, SetConstraint)
self.maxLength = constraint.maxLength
self.itemConstraint = constraint.constraint
class ImmutableSetUnslicer(SetUnslicer):
opentype = ("immutable-set",)
def receiveClose(self):
return sets.ImmutableSet(self.list), None
class SetConstraint(OpenerConstraint):
"""The object must be a Set of some sort, with a given maximum size. To
accept sets of any size, use maxLength=None. All member objects must obey
the given constraint. By default this will accept both mutable and
immutable sets, if you want to require a particular type, set mutable= to
either True or False.
"""
# TODO: if mutable!=None, we won't throw out the wrong set type soon
# enough. We need to override checkOpenType to accomplish this.
opentypes = [("set",), ("immutable-set",)]
name = "SetConstraint"
if have_builtin_set:
mutable_set_types = (set, sets.Set)
immutable_set_types = (frozenset, sets.ImmutableSet)
else:
mutable_set_types = (sets.Set,)
immutable_set_types = (sets.ImmutableSet,)
all_set_types = mutable_set_types + immutable_set_types
def __init__(self, constraint, maxLength=30, mutable=None):
self.constraint = IConstraint(constraint)
self.maxLength = maxLength
self.mutable = mutable
def checkObject(self, obj, inbound):
if not isinstance(obj, self.all_set_types):
raise Violation("not a set")
if (self.mutable == True and
not isinstance(obj, self.mutable_set_types)):
raise Violation("obj is a set, but not a mutable one")
if (self.mutable == False and
not isinstance(obj, self.immutable_set_types)):
raise Violation("obj is a set, but not an immutable one")
if self.maxLength is not None and len(obj) > self.maxLength:
raise Violation("set is too large")
if self.constraint:
for o in obj:
self.constraint.checkObject(o, inbound)
def maxSize(self, seen=None):
if not seen: seen = []
if self in seen:
raise UnboundedSchema # recursion
seen.append(self)
if self.maxLength == None:
raise UnboundedSchema
if not self.constraint:
raise UnboundedSchema
return (self.OPENBYTES("immutable-set") +
self.maxLength * self.constraint.maxSize(seen))
def maxDepth(self, seen=None):
if not seen: seen = []
if self in seen:
raise UnboundedSchema # recursion
if not self.constraint:
raise UnboundedSchema
seen.append(self)
return 1 + self.constraint.maxDepth(seen)
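# Illustrative sketch (not part of the original source): the mutable=
# argument described above restricts which flavor of set is accepted.
def _example_set_constraint():
    c = SetConstraint(int, maxLength=3, mutable=False)
    c.checkObject(sets.ImmutableSet([1, 2]), False)   # passes
    # a mutable sets.Set([1, 2]) raises Violation because mutable=False,
    # and a fourth element raises Violation("set is too large")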

View File

@ -0,0 +1,133 @@
# -*- test-case-name: foolscap.test.test_banana -*-
from twisted.internet.defer import Deferred
from foolscap.tokens import Violation
from foolscap.slicer import BaseUnslicer
from foolscap.slicers.list import ListSlicer
from foolscap.constraint import OpenerConstraint, Any, UnboundedSchema, IConstraint
class TupleSlicer(ListSlicer):
opentype = ("tuple",)
slices = tuple
class TupleUnslicer(BaseUnslicer):
opentype = ("tuple",)
debug = False
constraints = None
def setConstraint(self, constraint):
if isinstance(constraint, Any):
return
assert isinstance(constraint, TupleConstraint)
self.constraints = constraint.constraints
def start(self, count):
self.list = []
# indices of .list which are unfilled because of children that could
# not yet be referenced
self.num_unreferenceable_children = 0
self.count = count
if self.debug:
print "%s[%d].start with %s" % (self, self.count, self.list)
self.finished = False
self.deferred = Deferred()
self.protocol.setObject(count, self.deferred)
def checkToken(self, typebyte, size):
if self.constraints == None:
return
if len(self.list) >= len(self.constraints):
raise Violation("the tuple is full")
self.constraints[len(self.list)].checkToken(typebyte, size)
def doOpen(self, opentype):
where = len(self.list)
if self.constraints != None:
if where >= len(self.constraints):
raise Violation("the tuple is full")
self.constraints[where].checkOpentype(opentype)
unslicer = self.open(opentype)
if unslicer:
if self.constraints != None:
unslicer.setConstraint(self.constraints[where])
return unslicer
def update(self, obj, index):
if self.debug:
print "%s[%d].update: [%d]=%s" % (self, self.count, index, obj)
self.list[index] = obj
self.num_unreferenceable_children -= 1
if self.finished:
self.checkComplete()
return obj
def receiveChild(self, obj, ready_deferred=None):
assert ready_deferred is None
if isinstance(obj, Deferred):
obj.addCallback(self.update, len(self.list))
obj.addErrback(self.explode)
self.num_unreferenceable_children += 1
self.list.append("placeholder")
else:
self.list.append(obj)
def checkComplete(self):
if self.debug:
print "%s[%d].checkComplete: %d pending" % \
(self, self.count, self.num_unreferenceable_children)
if self.num_unreferenceable_children:
# not finished yet, we'll fire our Deferred when we are
if self.debug:
print " not finished yet"
return self.deferred, None
# list is now complete. We can finish.
t = tuple(self.list)
if self.debug:
print " finished! tuple:%s{%s}" % (t, id(t))
self.protocol.setObject(self.count, t)
self.deferred.callback(t)
return t, None
def receiveClose(self):
if self.debug:
print "%s[%d].receiveClose" % (self, self.count)
self.finished = 1
return self.checkComplete()
def describe(self):
return "[%d]" % len(self.list)
class TupleConstraint(OpenerConstraint):
opentypes = [("tuple",)]
name = "TupleConstraint"
def __init__(self, *elemConstraints):
self.constraints = [IConstraint(e) for e in elemConstraints]
def checkObject(self, obj, inbound):
if not isinstance(obj, tuple):
raise Violation("not a tuple")
if len(obj) != len(self.constraints):
raise Violation("wrong size tuple")
for i in range(len(self.constraints)):
self.constraints[i].checkObject(obj[i], inbound)
def maxSize(self, seen=None):
if not seen: seen = []
if self in seen:
raise UnboundedSchema # recursion
seen.append(self)
total = self.OPENBYTES("tuple")
for c in self.constraints:
total += c.maxSize(seen[:])
return total
def maxDepth(self, seen=None):
if not seen: seen = []
if self in seen:
raise UnboundedSchema # recursion
seen.append(self)
return 1 + reduce(max, [c.maxDepth(seen[:])
for c in self.constraints])

View File

@ -0,0 +1,42 @@
# -*- test-case-name: foolscap.test.test_banana -*-
from twisted.internet.defer import Deferred
from foolscap.constraint import Any, StringConstraint
from foolscap.tokens import BananaError, STRING
from foolscap.slicer import BaseSlicer, LeafUnslicer
class UnicodeSlicer(BaseSlicer):
opentype = ("unicode",)
slices = unicode
def sliceBody(self, streamable, banana):
yield self.obj.encode("UTF-8")
class UnicodeUnslicer(LeafUnslicer):
# accept a UTF-8 encoded string
opentype = ("unicode",)
string = None
constraint = None
def setConstraint(self, constraint):
if isinstance(constraint, Any):
return
assert isinstance(constraint, StringConstraint)
self.constraint = constraint
def checkToken(self, typebyte, size):
if typebyte != STRING:
raise BananaError("UnicodeUnslicer only accepts strings")
if self.constraint:
self.constraint.checkToken(typebyte, size)
def receiveChild(self, obj, ready_deferred=None):
assert not isinstance(obj, Deferred)
assert ready_deferred is None
if self.string != None:
raise BananaError("already received a string")
self.string = unicode(obj, "UTF-8")
def receiveClose(self):
return self.string, None
def describe(self):
return "<unicode>"

View File

@ -0,0 +1,184 @@
# -*- test-case-name: foolscap.test.test_banana -*-
from twisted.internet.defer import Deferred
from foolscap.constraint import Any, StringConstraint
from foolscap.tokens import Violation, BananaError, INT, STRING
from foolscap.slicer import BaseSlicer, BaseUnslicer, LeafUnslicer
from foolscap.slicer import BananaUnslicerRegistry
class ReplaceVocabularyTable:
pass
class AddToVocabularyTable:
pass
class ReplaceVocabSlicer(BaseSlicer):
# this works somewhat like a dictionary
opentype = ('set-vocab',)
trackReferences = False
def slice(self, streamable, banana):
# we need to implement slice() (instead of merely sliceBody) so we
# can get control at the beginning and end of serialization. It also
# gives us access to the Banana protocol object, so we can manipulate
# its outgoingVocabulary table.
self.streamable = streamable
self.start(banana)
for o in self.opentype:
yield o
# the vocabDict maps strings to index numbers. The far end needs the
# opposite mapping, from index numbers to strings. We perform the
# flip here at the sending end.
stringToIndex = self.obj
indexToString = dict([(stringToIndex[s],s) for s in stringToIndex])
assert len(stringToIndex) == len(indexToString) # catch duplicates
indices = indexToString.keys()
indices.sort()
for index in indices:
string = indexToString[index]
yield index
yield string
self.finish(banana)
def start(self, banana):
# this marks the transition point between the old vocabulary dict and
# the new one, so now is the time we should empty the dict.
banana.outgoingVocabTableWasReplaced({})
def finish(self, banana):
# now we replace the vocab dict
banana.outgoingVocabTableWasReplaced(self.obj)
class ReplaceVocabUnslicer(LeafUnslicer):
"""Much like DictUnslicer, but keys must be numbers, and values must be
strings. This is used to set the entire vocab table at once. To add
individual tokens, use AddVocabUnslicer by sending an (add-vocab num
string) sequence."""
opentype = ('set-vocab',)
unslicerRegistry = BananaUnslicerRegistry
maxKeys = None
valueConstraint = StringConstraint(100)
def setConstraint(self, constraint):
if isinstance(constraint, Any):
return
assert isinstance(constraint, StringConstraint)
self.valueConstraint = constraint
def start(self, count):
self.d = {}
self.key = None
def checkToken(self, typebyte, size):
if self.maxKeys is not None and len(self.d) >= self.maxKeys:
raise Violation("the table is full")
if self.key is None:
if typebyte != INT:
raise BananaError("VocabUnslicer only accepts INT keys")
else:
if typebyte != STRING:
raise BananaError("VocabUnslicer only accepts STRING values")
if self.valueConstraint:
self.valueConstraint.checkToken(typebyte, size)
def receiveChild(self, token, ready_deferred=None):
assert not isinstance(token, Deferred)
assert ready_deferred is None
if self.key is None:
if self.d.has_key(token):
raise BananaError("duplicate key '%s'" % token)
self.key = token
else:
self.d[self.key] = token
self.key = None
def receiveClose(self):
if self.key is not None:
raise BananaError("sequence ended early: got key but not value")
# now is the time we replace our protocol's vocab table
self.protocol.replaceIncomingVocabulary(self.d)
return ReplaceVocabularyTable, None
def describe(self):
if self.key is not None:
return "<vocabdict>[%s]" % self.key
else:
return "<vocabdict>"
class AddVocabSlicer(BaseSlicer):
opentype = ('add-vocab',)
trackReferences = False
def __init__(self, value):
assert isinstance(value, str)
self.value = value
def slice(self, streamable, banana):
# we need to implement slice() (instead of merely sliceBody) so we
# can get control at the beginning and end of serialization. It also
# gives us access to the Banana protocol object, so we can manipulate
# its outgoingVocabulary table.
self.streamable = streamable
self.start(banana)
for o in self.opentype:
yield o
yield self.index
yield self.value
self.finish(banana)
def start(self, banana):
# this marks the transition point between the old vocabulary dict and
# the new one, so now is the time we should decide upon the key. It
# is important that we *do not* add it to the dict yet, otherwise
# we'll send (add-vocab NN [VOCAB#NN]), which is kind of pointless.
index = banana.allocateEntryInOutgoingVocabTable(self.value)
self.index = index
def finish(self, banana):
banana.outgoingVocabTableWasAmended(self.index, self.value)
class AddVocabUnslicer(BaseUnslicer):
# (add-vocab num string): self.vocab[num] = string
opentype = ('add-vocab',)
unslicerRegistry = BananaUnslicerRegistry
index = None
value = None
valueConstraint = StringConstraint(100)
def setConstraint(self, constraint):
if isinstance(constraint, Any):
return
assert isinstance(constraint, StringConstraint)
self.valueConstraint = constraint
def checkToken(self, typebyte, size):
if self.index is None:
if typebyte != INT:
raise BananaError("Vocab key must be an INT")
elif self.value is None:
if typebyte != STRING:
raise BananaError("Vocab value must be a STRING")
if self.valueConstraint:
self.valueConstraint.checkToken(typebyte, size)
else:
raise Violation("add-vocab only accepts two values")
def receiveChild(self, obj, ready_deferred=None):
assert not isinstance(obj, Deferred)
assert ready_deferred is None
if self.index is None:
self.index = obj
else:
self.value = obj
def receiveClose(self):
if self.index is None or self.value is None:
raise BananaError("sequence ended too early")
self.protocol.addIncomingVocabulary(self.index, self.value)
return AddToVocabularyTable, None
def describe(self):
if self.index is not None:
return "<add-vocab>[%d]" % self.index
return "<add-vocab>"

View File

@ -0,0 +1,674 @@
# Copyright 2005 Divmod, Inc. See LICENSE file for details
import itertools, md5
from OpenSSL import SSL, crypto
from twisted.python import reflect
from twisted.internet.defer import Deferred
# Private - shared between all ServerContextFactories, counts up to
# provide a unique session id for each context
_sessionCounter = itertools.count().next
class _SSLApplicationData(object):
def __init__(self):
self.problems = []
class VerifyError(Exception):
"""Could not verify something that was supposed to be signed.
"""
class PeerVerifyError(VerifyError):
"""The peer rejected our verify error.
"""
class OpenSSLVerifyError(VerifyError):
_errorCodes = {0: ('X509_V_OK', 'ok', 'the operation was successful.'),
2: ('X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT',
'unable to get issuer certificate',
"the issuer certificate could not be found', 'this occurs if the issuer certificate of an untrusted certificate cannot be found."),
3: ('X509_V_ERR_UNABLE_TO_GET_CRL',
'unable to get certificate CRL',
'the CRL of a certificate could not be found. Unused.'),
4: ('X509_V_ERR_UNABLE_TO_DECRYPT_CERT_SIGNATURE',
"unable to decrypt certificate's signature",
'the certificate signature could not be decrypted. This means that the actual signature value could not be determined rather than it not matching the expected value; this is only meaningful for RSA keys.'),
5: ('X509_V_ERR_UNABLE_TO_DECRYPT_CRL_SIGNATURE',
"unable to decrypt CRL's signature",
"the CRL signature could not be decrypted', 'this means that the actual signature value could not be determined rather than it not matching the expected value. Unused."),
6: ('X509_V_ERR_UNABLE_TO_DECODE_ISSUER_PUBLIC_KEY',
'unable to decode issuer public key',
'the public key in the certificate SubjectPublicKeyInfo could not be read.'),
7: ('X509_V_ERR_CERT_SIGNATURE_FAILURE',
'certificate signature failure',
'the signature of the certificate is invalid.'),
8: ('X509_V_ERR_CRL_SIGNATURE_FAILURE',
'CRL signature failure',
'the signature of the certificate is invalid. Unused.'),
9: ('X509_V_ERR_CERT_NOT_YET_VALID',
'certificate is not yet valid',
"the certificate is not yet valid', 'the notBefore date is after the cur- rent time."),
10: ('X509_V_ERR_CERT_HAS_EXPIRED',
'certificate has expired',
"the certificate has expired', 'that is the notAfter date is before the current time."),
11: ('X509_V_ERR_CRL_NOT_YET_VALID',
'CRL is not yet valid',
'the CRL is not yet valid. Unused.'),
12: ('X509_V_ERR_CRL_HAS_EXPIRED',
'CRL has expired',
'the CRL has expired. Unused.'),
13: ('X509_V_ERR_ERROR_IN_CERT_NOT_BEFORE_FIELD',
"format error in certificate's",
'notBefore field the certificate notBefore field contains an invalid time.'),
14: ('X509_V_ERR_ERROR_IN_CERT_NOT_AFTER_FIELD',
"format error in certificate's",
'notAfter field the certificate notAfter field contains an invalid time.'),
15: ('X509_V_ERR_ERROR_IN_CRL_LAST_UPDATE_FIELD',
"format error in CRL's lastUpdate field",
'the CRL lastUpdate field contains an invalid time. Unused.'),
16: ('X509_V_ERR_ERROR_IN_CRL_NEXT_UPDATE_FIELD',
"format error in CRL's nextUpdate field",
'the CRL nextUpdate field contains an invalid time. Unused.'),
17: ('X509_V_ERR_OUT_OF_MEM',
'out of memory',
'an error occurred trying to allocate memory. This should never happen.'),
18: ('X509_V_ERR_DEPTH_ZERO_SELF_SIGNED_CERT',
'self signed certificate',
'the passed certificate is self signed and the same certificate cannot be found in the list of trusted certificates.'),
19: ('X509_V_ERR_SELF_SIGNED_CERT_IN_CHAIN',
'self signed certificate in certificate chain',
'the certificate chain could be built up using the untrusted certificates but the root could not be found locally.'),
20: ('X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY',
'unable to get local issuer certificate',
'the issuer certificate of a locally looked up certificate could not be found. This normally means the list of trusted certificates is not complete.'),
21: ('X509_V_ERR_UNABLE_TO_VERIFY_LEAF_SIGNATURE',
'unable to verify the first certificate',
'no signatures could be verified because the chain contains only one certificate and it is not self signed.'),
22: ('X509_V_ERR_CERT_CHAIN_TOO_LONG',
'certificate chain too long',
'the certificate chain length is greater than the supplied maximum depth. Unused.'),
23: ('X509_V_ERR_CERT_REVOKED',
'certificate revoked',
'the certificate has been revoked. Unused.'),
24: ('X509_V_ERR_INVALID_CA',
'invalid CA certificate',
'a CA certificate is invalid. Either it is not a CA or its extensions are not consistent with the supplied purpose.'),
25: ('X509_V_ERR_PATH_LENGTH_EXCEEDED',
'path length constraint exceeded',
'the basicConstraints pathlength parameter has been exceeded.'),
26: ('X509_V_ERR_INVALID_PURPOSE',
'unsupported certificate purpose',
'the supplied certificate cannot be used for the specified purpose.'),
27: ('X509_V_ERR_CERT_UNTRUSTED',
'certificate not trusted',
'the root CA is not marked as trusted for the specified purpose.'),
28: ('X509_V_ERR_CERT_REJECTED',
'certificate rejected',
'the root CA is marked to reject the specified purpose.'),
29: ('X509_V_ERR_SUBJECT_ISSUER_MISMATCH',
'subject issuer mismatch',
'the current candidate issuer certificate was rejected because its subject name did not match the issuer name of the current certificate. Only displayed when the -issuer_checks option is set.'),
30: ('X509_V_ERR_AKID_SKID_MISMATCH',
'authority and subject key identifier mismatch',
'the current candidate issuer certificate was rejected because its subject key identifier was present and did not match the authority key identifier of the current certificate. Only displayed when the -issuer_checks option is set.'),
31: ('X509_V_ERR_AKID_ISSUER_SERIAL_MISMATCH',
'authority and issuer serial number mismatch',
'the current candidate issuer certificate was rejected because its issuer name and serial number was present and did not match the authority key identifier of the current certificate. Only displayed when the -issuer_checks option is set.'),
32: ('X509_V_ERR_KEYUSAGE_NO_CERTSIGN',
'key usage does not include certificate signing',
'the current candidate issuer certificate was rejected because its keyUsage extension does not permit certificate signing.'),
50: ('X509_V_ERR_APPLICATION_VERIFICATION',
'application verification failure',
'an application specific error. Unused.')}
def __init__(self, cert, errno, depth):
VerifyError.__init__(self, cert, errno, depth)
self.cert = cert
self.errno = errno
self.depth = depth
def __repr__(self):
x = self._errorCodes.get(self.errno)
if x is not None:
name, short, long = x
return 'Peer Certificate Verification Failed: %s (error code: %d)' % (
long, self.errno
)
__str__ = __repr__
_x509namecrap = [
['CN', 'commonName'],
['O', 'organizationName'],
['OU', 'organizationalUnitName'],
['L', 'localityName'],
['ST', 'stateOrProvinceName'],
['C', 'countryName'],
[ 'emailAddress']]
_x509names = {}
for abbrevs in _x509namecrap:
for abbrev in abbrevs:
_x509names[abbrev] = abbrevs[0]
class DistinguishedName(dict):
__slots__ = ()
def __init__(self, **kw):
for k, v in kw.iteritems():
setattr(self, k, v)
def _copyFrom(self, x509name):
d = {}
for name in _x509names:
value = getattr(x509name, name, None)
if value is not None:
setattr(self, name, value)
def _copyInto(self, x509name):
for k, v in self.iteritems():
setattr(x509name, k, v)
def __repr__(self):
return '<DN %s>' % (dict.__repr__(self)[1:-1])
def __getattr__(self, attr):
return self[_x509names[attr]]
def __setattr__(self, attr, value):
assert type(attr) is str
if not attr in _x509names:
raise AttributeError("%s is not a valid OpenSSL X509 name field" % (attr,))
realAttr = _x509names[attr]
value = value.encode('ascii')
assert type(value) is str
self[realAttr] = value
def inspect(self):
l = []
from formless.annotate import nameToLabel
lablen = 0
for kp in _x509namecrap:
k = kp[-1]
label = nameToLabel(k)
lablen = max(len(label), lablen)
l.append((label, getattr(self, k)))
lablen += 2
for n, (label, attr) in enumerate(l):
l[n] = (label.rjust(lablen)+': '+ attr)
return '\n'.join(l)
DN = DistinguishedName
class CertBase:
def __init__(self, original):
self.original = original
def _copyName(self, suffix):
dn = DistinguishedName()
dn._copyFrom(getattr(self.original, 'get_'+suffix)())
return dn
def getSubject(self):
return self._copyName('subject')
def problemsFromTransport(tpt):
"""Return a list of L{OpenSSLVerifyError}s given a Twisted transport object.
"""
return tpt.getHandle().get_context().get_app_data().problems
class Certificate(CertBase):
def __repr__(self):
return '<%s Subject=%s Issuer=%s>' % (self.__class__.__name__,
self.getSubject().commonName,
self.getIssuer().commonName)
def __eq__(self, other):
if isinstance(other, Certificate):
return self.dump() == other.dump()
return False
def __ne__(self, other):
return not self.__eq__(other)
def load(Class, requestData, format=crypto.FILETYPE_ASN1, args=()):
return Class(crypto.load_certificate(format, requestData), *args)
load = classmethod(load)
_load = load
def dumpPEM(self):
"""Dump both public and private parts of a private certificate to PEM-format
data
"""
return self.dump(crypto.FILETYPE_PEM)
def loadPEM(Class, data):
"""Load both private and public parts of a private certificate from a chunk of
PEM-format data.
"""
return Class.load(data, crypto.FILETYPE_PEM)
loadPEM = classmethod(loadPEM)
def peerFromTransport(Class, transport):
return Class(transport.getHandle().get_peer_certificate())
peerFromTransport = classmethod(peerFromTransport)
def hostFromTransport(Class, transport):
return Class(transport.getHandle().get_host_certificate())
hostFromTransport = classmethod(hostFromTransport)
def getPublicKey(self):
return PublicKey(self.original.get_pubkey())
def dump(self, format=crypto.FILETYPE_ASN1):
return crypto.dump_certificate(format, self.original)
def serialNumber(self):
return self.original.get_serial_number()
def digest(self, method='md5'):
return self.original.digest(method)
def _inspect(self):
return '\n'.join(['Certificate For Subject:',
self.getSubject().inspect(),
'\nIssuer:',
self.getIssuer().inspect(),
'\nSerial Number: %d' % self.serialNumber(),
'Digest: %s' % self.digest()])
def inspect(self):
return '\n'.join([self._inspect(), self.getPublicKey().inspect()])
def getIssuer(self):
return self._copyName('issuer')
def options(self, *authorities):
raise NotImplementedError('Possible, but doubtful we need this yet')
class CertificateRequest(CertBase):
def load(Class, requestData, requestFormat=crypto.FILETYPE_ASN1):
req = crypto.load_certificate_request(requestFormat, requestData)
dn = DistinguishedName()
dn._copyFrom(req.get_subject())
if not req.verify(req.get_pubkey()):
raise VerifyError("Can't verify that request for %r is self-signed." % (dn,))
return Class(req)
load = classmethod(load)
def dump(self, format=crypto.FILETYPE_ASN1):
return crypto.dump_certificate_request(format, self.original)
class PrivateCertificate(Certificate):
def __repr__(self):
return Certificate.__repr__(self) + ' with ' + repr(self.privateKey)
def _setPrivateKey(self, privateKey):
if not privateKey.matches(self.getPublicKey()):
raise VerifyError(
"Sanity check failed: Your certificate was not properly signed.")
self.privateKey = privateKey
return self
def newCertificate(self, newCertData, format=crypto.FILETYPE_ASN1):
return self.load(newCertData, self.privateKey, format)
def load(Class, data, privateKey, format=crypto.FILETYPE_ASN1):
return Class._load(data, format)._setPrivateKey(privateKey)
load = classmethod(load)
def inspect(self):
return '\n'.join([Certificate._inspect(self),
self.privateKey.inspect()])
def dumpPEM(self):
"""Dump both public and private parts of a private certificate to PEM-format
data
"""
return self.dump(crypto.FILETYPE_PEM) + self.privateKey.dump(crypto.FILETYPE_PEM)
def loadPEM(Class, data):
"""Load both private and public parts of a private certificate from a chunk of
PEM-format data.
"""
return Class.load(data, KeyPair.load(data, crypto.FILETYPE_PEM),
crypto.FILETYPE_PEM)
loadPEM = classmethod(loadPEM)
def fromCertificateAndKeyPair(Class, certificateInstance, privateKey):
privcert = Class(certificateInstance.original)
return privcert._setPrivateKey(privateKey)
fromCertificateAndKeyPair = classmethod(fromCertificateAndKeyPair)
def options(self, *authorities):
options = dict(privateKey=self.privateKey.original,
certificate=self.original)
if authorities:
options.update(dict(verify=True,
requireCertificate=True,
caCerts=[auth.original for auth in authorities]))
return OpenSSLCertificateOptions(**options)
def certificateRequest(self, format=crypto.FILETYPE_ASN1,
digestAlgorithm='md5'):
return self.privateKey.certificateRequest(
self.getSubject(),
format,
digestAlgorithm)
def signCertificateRequest(self,
requestData,
verifyDNCallback,
serialNumber,
requestFormat=crypto.FILETYPE_ASN1,
certificateFormat=crypto.FILETYPE_ASN1):
issuer = self.getSubject()
return self.privateKey.signCertificateRequest(
issuer,
requestData,
verifyDNCallback,
serialNumber,
requestFormat,
certificateFormat)
def signRequestObject(self, certificateRequest, serialNumber,
secondsToExpiry=60 * 60 * 24 * 365, # One year
digestAlgorithm='md5'):
return self.privateKey.signRequestObject(self.getSubject(),
certificateRequest,
serialNumber,
secondsToExpiry,
digestAlgorithm)
class PublicKey:
def __init__(self, osslpkey):
self.original = osslpkey
req1 = crypto.X509Req()
req1.set_pubkey(osslpkey)
self._emptyReq = crypto.dump_certificate_request(crypto.FILETYPE_ASN1, req1)
def matches(self, otherKey):
return self._emptyReq == otherKey._emptyReq
# O OG OMG OMFG PYOPENSSL SUCKS SO BAD
# def verifyCertificate(self, certificate):
# """returns None, or raises a VerifyError exception if the certificate could not
# be verified.
# """
# if not certificate.original.verify(self.original):
# raise VerifyError("We didn't sign that certificate.")
def __repr__(self):
return '<%s %s>' % (self.__class__.__name__, self.keyHash())
def keyHash(self):
"""MD5 hex digest of signature on an empty certificate request with this key.
"""
return md5.md5(self._emptyReq).hexdigest()
def inspect(self):
return 'Public Key with Hash: %s' % (self.keyHash(),)
class KeyPair(PublicKey):
def load(Class, data, format=crypto.FILETYPE_ASN1):
return Class(crypto.load_privatekey(format, data))
load = classmethod(load)
def dump(self, format=crypto.FILETYPE_ASN1):
return crypto.dump_privatekey(format, self.original)
def __getstate__(self):
return self.dump()
def __setstate__(self, state):
self.__init__(crypto.load_privatekey(crypto.FILETYPE_ASN1, state))
def inspect(self):
t = self.original.type()
if t == crypto.TYPE_RSA:
ts = 'RSA'
elif t == crypto.TYPE_DSA:
ts = 'DSA'
else:
ts = '(Unknown Type!)'
L = (self.original.bits(), ts, self.keyHash())
return '%s-bit %s Key Pair with Hash: %s' % L
def generate(Class, kind=crypto.TYPE_RSA, size=1024):
pkey = crypto.PKey()
pkey.generate_key(kind, size)
return Class(pkey)
def newCertificate(self, newCertData, format=crypto.FILETYPE_ASN1):
return PrivateCertificate.load(newCertData, self, format)
generate = classmethod(generate)
def requestObject(self, distinguishedName, digestAlgorithm='md5'):
req = crypto.X509Req()
req.set_pubkey(self.original)
distinguishedName._copyInto(req.get_subject())
req.sign(self.original, digestAlgorithm)
return CertificateRequest(req)
def certificateRequest(self, distinguishedName,
format=crypto.FILETYPE_ASN1,
digestAlgorithm='md5'):
"""Create a certificate request signed with this key.
@return: a string, formatted according to the 'format' argument.
"""
return self.requestObject(distinguishedName, digestAlgorithm).dump(format)
def signCertificateRequest(self,
issuerDistinguishedName,
requestData,
verifyDNCallback,
serialNumber,
requestFormat=crypto.FILETYPE_ASN1,
certificateFormat=crypto.FILETYPE_ASN1,
secondsToExpiry=60 * 60 * 24 * 365, # One year
digestAlgorithm='md5'):
"""Given a blob of certificate request data and a certificate authority's
DistinguishedName, return a blob of signed certificate data.
If verifyDNCallback returns a Deferred, I will return a Deferred which
fires with the data when that Deferred has completed.
"""
hlreq = CertificateRequest.load(requestData, requestFormat)
dn = hlreq.getSubject()
vval = verifyDNCallback(dn)
def verified(value):
if not value:
raise VerifyError("DN callback %r rejected request DN %r" % (verifyDNCallback, dn))
return self.signRequestObject(issuerDistinguishedName, hlreq,
serialNumber, secondsToExpiry, digestAlgorithm).dump(certificateFormat)
if isinstance(vval, Deferred):
return vval.addCallback(verified)
else:
return verified(vval)
def signRequestObject(self,
issuerDistinguishedName,
requestObject,
serialNumber,
secondsToExpiry=60 * 60 * 24 * 365, # One year
digestAlgorithm='md5'):
"""
Sign a CertificateRequest instance, returning a Certificate instance.
"""
req = requestObject.original
dn = requestObject.getSubject()
cert = crypto.X509()
issuerDistinguishedName._copyInto(cert.get_issuer())
cert.set_subject(req.get_subject())
cert.set_pubkey(req.get_pubkey())
cert.gmtime_adj_notBefore(0)
cert.gmtime_adj_notAfter(secondsToExpiry)
cert.set_serial_number(serialNumber)
cert.sign(self.original, digestAlgorithm)
return Certificate(cert)
def selfSignedCert(self, serialNumber, **kw):
dn = DN(**kw)
return PrivateCertificate.fromCertificateAndKeyPair(
self.signRequestObject(dn, self.requestObject(dn), serialNumber),
self)
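# Example (editor's sketch, not part of the original module): generate a
# fresh RSA key pair and wrap it in a self-signed PrivateCertificate. The
# serial number and the 'commonName' value are arbitrary demonstration
# choices, not anything this module requires.
def _example_self_signed_certificate():
    keypair = KeyPair.generate()
    private_cert = keypair.selfSignedCert(serialNumber=1,
                                          commonName="example.com")
    return private_cert.dumpPEM()   # certificate PEM followed by key PEM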
class OpenSSLCertificateOptions(object):
"""A factory for SSL context objects, for server SSL connections.
"""
_context = None
# Older versions of PyOpenSSL didn't provide OP_ALL. Fudge it here, just in case.
_OP_ALL = getattr(SSL, 'OP_ALL', 0x0000FFFF)
method = SSL.TLSv1_METHOD
def __init__(self,
privateKey=None,
certificate=None,
method=None,
verify=False,
caCerts=None,
verifyDepth=9,
requireCertificate=True,
verifyOnce=True,
enableSingleUseKeys=True,
enableSessions=True,
fixBrokenPeers=False):
"""
Create an OpenSSL context SSL connection context factory.
@param privateKey: A PKey object holding the private key.
@param certificate: An X509 object holding the certificate.
@param method: The SSL protocol to use, one of SSLv23_METHOD,
SSLv2_METHOD, SSLv3_METHOD, TLSv1_METHOD. Defaults to TLSv1_METHOD.
@param verify: If True, verify certificates received from the peer and
fail the handshake if verification fails. Otherwise, allow anonymous
sessions and sessions with certificates which fail validation. By
default this is False.
@param caCerts: List of certificate authority certificates to
send to the client when requesting a certificate. Only used if verify
is True, in which case it must be provided. Since verify is False by
default, this is None by default.
@param verifyDepth: Depth in certificate chain down to which to verify.
If unspecified, use the underlying default (9).
@param requireCertificate: If True, do not allow anonymous sessions.
@param verifyOnce: If True, do not re-verify the certificate
on session resumption.
@param enableSingleUseKeys: If True, generate a new key whenever
ephemeral DH parameters are used to prevent small subgroup attacks.
@param enableSessions: If True, set a session ID on each context. This
allows a shortened handshake to be used when a known client reconnects.
@param fixBrokenPeers: If True, enable various non-spec protocol fixes
for broken SSL implementations. This should be entirely safe,
according to the OpenSSL documentation, but YMMV. This option is now
off by default, because it causes problems with connections between
peers using OpenSSL 0.9.8a.
"""
assert (privateKey is None) == (certificate is None), "Specify neither or both of privateKey and certificate"
self.privateKey = privateKey
self.certificate = certificate
if method is not None:
self.method = method
self.verify = verify
assert ((verify and caCerts) or
(not verify)), "Specify client CA certificate information if and only if enabling certificate verification"
self.caCerts = caCerts
self.verifyDepth = verifyDepth
self.requireCertificate = requireCertificate
self.verifyOnce = verifyOnce
self.enableSingleUseKeys = enableSingleUseKeys
self.enableSessions = enableSessions
self.fixBrokenPeers = fixBrokenPeers
def __getstate__(self):
# plain 'object' provides no __getstate__; copy the instance dict instead
d = self.__dict__.copy()
try:
del d['_context']  # the cached context is stored as _context
except KeyError:
pass
return d
def getContext(self):
"""Return a SSL.Context object.
"""
if self._context is None:
self._context = self._makeContext()
return self._context
def _makeContext(self):
ctx = SSL.Context(self.method)
ctx.set_app_data(_SSLApplicationData())
if self.certificate is not None and self.privateKey is not None:
ctx.use_certificate(self.certificate)
ctx.use_privatekey(self.privateKey)
# Sanity check
ctx.check_privatekey()
verifyFlags = SSL.VERIFY_NONE
if self.verify:
verifyFlags = SSL.VERIFY_PEER
if self.requireCertificate:
verifyFlags |= SSL.VERIFY_FAIL_IF_NO_PEER_CERT
if self.verifyOnce:
verifyFlags |= SSL.VERIFY_CLIENT_ONCE
if self.caCerts:
store = ctx.get_cert_store()
for cert in self.caCerts:
store.add_cert(cert)
def _trackVerificationProblems(conn,cert,errno,depth,preverify_ok):
# preverify_ok is the answer OpenSSL's default verifier would have
# given, had we allowed it to run.
if not preverify_ok:
ctx.get_app_data().problems.append(OpenSSLVerifyError(cert, errno, depth))
return preverify_ok
ctx.set_verify(verifyFlags, _trackVerificationProblems)
if self.verifyDepth is not None:
ctx.set_verify_depth(self.verifyDepth)
if self.enableSingleUseKeys:
ctx.set_options(SSL.OP_SINGLE_DH_USE)
if self.fixBrokenPeers:
ctx.set_options(self._OP_ALL)
if self.enableSessions:
sessionName = md5.md5("%s-%d" % (reflect.qual(self.__class__), _sessionCounter())).hexdigest()
ctx.set_session_id(sessionName)
return ctx
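# Example (editor's sketch, not part of the original module): turning a
# PrivateCertificate into a ready-to-use SSL context. With no authority
# certificates passed to options(), verification stays disabled; passing one
# or more Certificate objects would enable peer verification as described in
# the OpenSSLCertificateOptions docstring above.
def _example_context_from_certificate(private_cert):
    ctx_factory = private_cert.options()
    return ctx_factory.getContext()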

View File

@ -0,0 +1,419 @@
"""
storage.py: support for using Banana as if it were pickle
This includes functions for serializing to and from strings, instead of a
network socket. It also has support for serializing 'unsafe' objects,
specifically classes, modules, functions, and instances of arbitrary classes.
These are 'unsafe' because to recreate the object on the deserializing end,
we must be willing to execute code of the sender's choosing (i.e. the
constructor of whatever package.module.class names they send us). It is
unwise to do this unless you are willing to allow your internal state to be
compromised by the author of the serialized data you're unpacking.
This functionality is isolated here because it is never used for data coming
over network connections.
"""
from cStringIO import StringIO
import types
from new import instance, instancemethod
from pickle import whichmodule # used by FunctionSlicer
from foolscap import slicer, banana, tokens
from foolscap.tokens import BananaError
from twisted.internet.defer import Deferred
from twisted.python import reflect
from foolscap.slicers.dict import OrderedDictSlicer
from foolscap.slicers.root import RootSlicer, RootUnslicer
################## Slicers for "unsafe" things
# Extended types, not generally safe. The UnsafeRootSlicer checks for these
# with a separate table.
def getInstanceState(inst):
"""Utility function to default to 'normal' state rules in serialization.
"""
if hasattr(inst, "__getstate__"):
state = inst.__getstate__()
else:
state = inst.__dict__
return state
class InstanceSlicer(OrderedDictSlicer):
opentype = ('instance',)
trackReferences = True
def sliceBody(self, streamable, banana):
yield reflect.qual(self.obj.__class__) # really a second index token
self.obj = getInstanceState(self.obj)
for t in OrderedDictSlicer.sliceBody(self, streamable, banana):
yield t
class ModuleSlicer(slicer.BaseSlicer):
opentype = ('module',)
trackReferences = True
def sliceBody(self, streamable, banana):
yield self.obj.__name__
class ClassSlicer(slicer.BaseSlicer):
opentype = ('class',)
trackReferences = True
def sliceBody(self, streamable, banana):
yield reflect.qual(self.obj)
class MethodSlicer(slicer.BaseSlicer):
opentype = ('method',)
trackReferences = True
def sliceBody(self, streamable, banana):
yield self.obj.im_func.__name__
yield self.obj.im_self
yield self.obj.im_class
class FunctionSlicer(slicer.BaseSlicer):
opentype = ('function',)
trackReferences = True
def sliceBody(self, streamable, banana):
name = self.obj.__name__
fullname = str(whichmodule(self.obj, self.obj.__name__)) + '.' + name
yield fullname
UnsafeSlicerTable = {}
UnsafeSlicerTable.update({
types.InstanceType: InstanceSlicer,
types.ModuleType: ModuleSlicer,
types.ClassType: ClassSlicer,
types.MethodType: MethodSlicer,
types.FunctionType: FunctionSlicer,
#types.TypeType: NewstyleClassSlicer,
# ???: NewstyleInstanceSlicer, # pickle uses obj.__reduce__ to help
# http://docs.python.org/lib/node68.html
})
class UnsafeRootSlicer(RootSlicer):
slicerTable = UnsafeSlicerTable
class StorageRootSlicer(UnsafeRootSlicer):
# some pieces taken from ScopedSlicer
def __init__(self, protocol):
UnsafeRootSlicer.__init__(self, protocol)
self.references = {}
def registerReference(self, refid, obj):
self.references[id(obj)] = (obj,refid)
def slicerForObject(self, obj):
# check for an object which was sent previously or has at least
# started sending
obj_refid = self.references.get(id(obj), None)
if obj_refid is not None:
return slicer.ReferenceSlicer(obj_refid[1])
# otherwise go upstream
return UnsafeRootSlicer.slicerForObject(self, obj)
################## Unslicers for "unsafe" things
def setInstanceState(inst, state):
"""Utility function to default to 'normal' state rules in unserialization.
"""
if hasattr(inst, "__setstate__"):
inst.__setstate__(state)
else:
inst.__dict__ = state
return inst
class Dummy:
def __repr__(self):
return "<Dummy %s>" % self.__dict__
def __cmp__(self, other):
if not type(other) == type(self):
return -1
return cmp(self.__dict__, other.__dict__)
UnsafeUnslicerRegistry = {}
class InstanceUnslicer(slicer.BaseUnslicer):
# this is an unsafe unslicer: an attacker could induce you to create
# instances of arbitrary classes with arbitrary attributes: VERY
# DANGEROUS!
opentype = ('instance',)
unslicerRegistry = UnsafeUnslicerRegistry
# danger: instances are mutable containers. If an attribute value is not
# yet available, __dict__ will hold a Deferred until it is. Other
# objects might be created and use our object before this is fixed.
# TODO: address this. Note that InstanceUnslicers aren't used in PB
# (where we have pb.Referenceable and pb.Copyable which have schema
# constraints and could have different restrictions like not being
# allowed to participate in reference loops).
def start(self, count):
self.d = {}
self.count = count
self.classname = None
self.attrname = None
self.deferred = Deferred()
self.protocol.setObject(count, self.deferred)
def checkToken(self, typebyte, size):
if self.classname is None:
if typebyte not in (tokens.STRING, tokens.VOCAB):
raise BananaError("InstanceUnslicer classname must be string")
elif self.attrname is None:
if typebyte not in (tokens.STRING, tokens.VOCAB):
raise BananaError("InstanceUnslicer keys must be STRINGs")
def receiveChild(self, obj, ready_deferred=None):
assert ready_deferred is None
if self.classname is None:
self.classname = obj
self.attrname = None
elif self.attrname is None:
self.attrname = obj
else:
if isinstance(obj, Deferred):
# TODO: this is an artificial restriction, and it might
# be possible to remove it, but I need to think through
# it carefully first
raise BananaError("unreferenceable object in attribute")
if self.d.has_key(self.attrname):
raise BananaError("duplicate attribute name '%s'" %
self.attrname)
self.setAttribute(self.attrname, obj)
self.attrname = None
def setAttribute(self, name, value):
self.d[name] = value
def receiveClose(self):
# you could attempt to do some value-checking here, but there would
# probably still be holes
#obj = Dummy()
klass = reflect.namedObject(self.classname)
assert type(klass) == types.ClassType # TODO: new-style classes
obj = instance(klass, {})
setInstanceState(obj, self.d)
self.protocol.setObject(self.count, obj)
self.deferred.callback(obj)
return obj, None
def describe(self):
if self.classname is None:
return "<??>"
me = "<%s>" % self.classname
if self.attrname is None:
return "%s.attrname??" % me
else:
return "%s.%s" % (me, self.attrname)
class ModuleUnslicer(slicer.LeafUnslicer):
opentype = ('module',)
unslicerRegistry = UnsafeUnslicerRegistry
finished = False
def checkToken(self, typebyte, size):
if typebyte not in (tokens.STRING, tokens.VOCAB):
raise BananaError("ModuleUnslicer only accepts strings")
def receiveChild(self, obj, ready_deferred=None):
assert not isinstance(obj, Deferred)
assert ready_deferred is None
if self.finished:
raise BananaError("ModuleUnslicer only accepts one string")
self.finished = True
# TODO: taste here!
mod = __import__(obj, {}, {}, "x")
self.mod = mod
def receiveClose(self):
if not self.finished:
raise BananaError("ModuleUnslicer requires a string")
return self.mod, None
class ClassUnslicer(slicer.LeafUnslicer):
opentype = ('class',)
unslicerRegistry = UnsafeUnslicerRegistry
finished = False
def checkToken(self, typebyte, size):
if typebyte not in (tokens.STRING, tokens.VOCAB):
raise BananaError("ClassUnslicer only accepts strings")
def receiveChild(self, obj, ready_deferred=None):
assert not isinstance(obj, Deferred)
assert ready_deferred is None
if self.finished:
raise BananaError("ClassUnslicer only accepts one string")
self.finished = True
# TODO: taste here!
self.klass = reflect.namedObject(obj)
def receiveClose(self):
if not self.finished:
raise BananaError("ClassUnslicer requires a string")
return self.klass, None
class MethodUnslicer(slicer.BaseUnslicer):
opentype = ('method',)
unslicerRegistry = UnsafeUnslicerRegistry
state = 0
im_func = None
im_self = None
im_class = None
# self.state:
# 0: expecting a string with the method name
# 1: expecting an instance (or None for unbound methods)
# 2: expecting a class
def checkToken(self, typebyte, size):
if self.state == 0:
if typebyte not in (tokens.STRING, tokens.VOCAB):
raise BananaError("MethodUnslicer methodname must be a string")
elif self.state == 1:
if typebyte != tokens.OPEN:
raise BananaError("MethodUnslicer instance must be OPEN")
elif self.state == 2:
if typebyte != tokens.OPEN:
raise BananaError("MethodUnslicer class must be an OPEN")
def doOpen(self, opentype):
# check the opentype
if self.state == 1:
if opentype[0] not in ("instance", "none"):
raise BananaError("MethodUnslicer instance must be " +
"instance or None")
elif self.state == 2:
if opentype[0] != "class":
raise BananaError("MethodUnslicer class must be a class")
unslicer = self.open(opentype)
# TODO: apply constraint
return unslicer
def receiveChild(self, obj, ready_deferred=None):
assert not isinstance(obj, Deferred)
assert ready_deferred is None
if self.state == 0:
self.im_func = obj
self.state = 1
elif self.state == 1:
assert type(obj) in (types.InstanceType, types.NoneType)
self.im_self = obj
self.state = 2
elif self.state == 2:
assert type(obj) == types.ClassType # TODO: new-style classes?
self.im_class = obj
self.state = 3
else:
raise BananaError("MethodUnslicer only accepts three objects")
def receiveClose(self):
if self.state != 3:
raise BananaError("MethodUnslicer requires three objects")
if self.im_self is None:
meth = getattr(self.im_class, self.im_func)
# getattr gives us an unbound method
return meth, None
# TODO: late-available instances
#if isinstance(self.im_self, NotKnown):
# im = _InstanceMethod(self.im_name, self.im_self, self.im_class)
# return im
meth = self.im_class.__dict__[self.im_func]
# whereas __dict__ gives us a function
im = instancemethod(meth, self.im_self, self.im_class)
return im, None
class FunctionUnslicer(slicer.LeafUnslicer):
opentype = ('function',)
unslicerRegistry = UnsafeUnslicerRegistry
finished = False
def checkToken(self, typebyte, size):
if typebyte not in (tokens.STRING, tokens.VOCAB):
raise BananaError("FunctionUnslicer only accepts strings")
def receiveChild(self, obj, ready_deferred=None):
assert not isinstance(obj, Deferred)
assert ready_deferred is None
if self.finished:
raise BananaError("FunctionUnslicer only accepts one string")
self.finished = True
# TODO: taste here!
self.func = reflect.namedObject(obj)
def receiveClose(self):
if not self.finished:
raise BananaError("FunctionUnslicer requires a string")
return self.func, None
class UnsafeRootUnslicer(RootUnslicer):
topRegistries = [slicer.UnslicerRegistry,
slicer.BananaUnslicerRegistry,
UnsafeUnslicerRegistry]
openRegistries = [slicer.UnslicerRegistry,
UnsafeUnslicerRegistry]
class StorageRootUnslicer(UnsafeRootUnslicer, slicer.ScopedUnslicer):
# This version tracks references for the entire lifetime of the
# protocol. It is most appropriate for single-use purposes, such as a
# replacement for Pickle.
def __init__(self):
slicer.ScopedUnslicer.__init__(self)
UnsafeRootUnslicer.__init__(self)
def setObject(self, counter, obj):
return slicer.ScopedUnslicer.setObject(self, counter, obj)
def getObject(self, counter):
return slicer.ScopedUnslicer.getObject(self, counter)
################## The unsafe form of Banana that uses these (Un)Slicers
class StorageBanana(banana.Banana):
# this is "unsafe", in that it will do import() and create instances of
# arbitrary classes. It is also scoped at the root, so each
# StorageBanana should be used only once.
slicerClass = StorageRootSlicer
unslicerClass = StorageRootUnslicer
# it also stashes top-level objects in .obj, so you can retrieve them
# later
def receivedObject(self, obj):
self.object = obj
def serialize(obj):
"""Serialize an object graph into a sequence of bytes. Returns a Deferred
that fires with the sequence of bytes."""
b = StorageBanana()
b.transport = StringIO()
d = b.send(obj)
d.addCallback(lambda res: b.transport.getvalue())
return d
def unserialize(str):
"""Unserialize a sequence of bytes back into an object graph."""
b = StorageBanana()
b.dataReceived(str)
return b.object
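# Example (editor's sketch, not part of the original module): round-tripping
# a small object graph through serialize()/unserialize(). serialize() returns
# a Deferred that fires with the byte string, so the unserialize step runs in
# a callback. The _Demo class is a throwaway old-style class for illustration.
if __name__ == "__main__":
    class _Demo:
        pass
    demo = _Demo()
    demo.payload = {"numbers": [1, 2, 3], "label": "example"}
    d = serialize(demo)
    def _show(data):
        copy = unserialize(data)
        print "round-tripped payload:", copy.payload
    d.addCallback(_show)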

View File

@ -0,0 +1,2 @@
# -*- test-case-name: foolscap.test -*-
"""foolscap tests"""

View File

@ -0,0 +1,329 @@
# -*- test-case-name: foolscap.test.test_pb -*-
import re
from zope.interface import implements, implementsOnly, implementedBy, Interface
from twisted.python import log
from twisted.internet import defer, reactor
from foolscap import broker
from foolscap import Referenceable, RemoteInterface
from foolscap.eventual import eventually, fireEventually, flushEventualQueue
from foolscap.remoteinterface import getRemoteInterface, RemoteMethodSchema, \
UnconstrainedMethod
from foolscap.schema import Any, SetOf, DictOf, ListOf, TupleOf, \
NumberConstraint, StringConstraint, IntegerConstraint
from twisted.python import failure
from twisted.internet.main import CONNECTION_DONE
def getRemoteInterfaceName(obj):
i = getRemoteInterface(obj)
return i.__remote_name__
class Loopback:
# The transport's promise is that write() can be treated as a
# synchronous, isolated function call: specifically, the Protocol's
# dataReceived() and connectionLost() methods shall not be called during
# a call to write().
connected = True
def write(self, data):
eventually(self._write, data)
def _write(self, data):
if not self.connected:
return
try:
# isolate exceptions: if one occurred on a regular TCP transport,
# the connection would be dropped, so duplicate that behavior here.
self.peer.dataReceived(data)
except:
f = failure.Failure()
log.err(f)
print "Loopback.write exception:", f
self.loseConnection(f)
def loseConnection(self, why=failure.Failure(CONNECTION_DONE)):
if self.connected:
self.connected = False
# this one is slightly weird because 'why' is a Failure
eventually(self._loseConnection, why)
def _loseConnection(self, why):
self.protocol.connectionLost(why)
self.peer.connectionLost(why)
def flush(self):
self.connected = False
return fireEventually()
Digits = re.compile("\d*")
MegaSchema1 = DictOf(str,
ListOf(TupleOf(SetOf(int, maxLength=10, mutable=True),
str, bool, int, long, float, None,
Any(), NumberConstraint(),
IntegerConstraint(),
StringConstraint(maxLength=100,
minLength=90,
regexp="\w+"),
StringConstraint(regexp=Digits),
),
maxLength=20),
maxKeys=5)
# containers should convert their arguments into schemas
MegaSchema2 = TupleOf(SetOf(int),
ListOf(int),
DictOf(int, str),
)
class RIHelper(RemoteInterface):
def set(obj=Any()): return bool
def set2(obj1=Any(), obj2=Any()): return bool
def append(obj=Any()): return Any()
def get(): return Any()
def echo(obj=Any()): return Any()
def defer(obj=Any()): return Any()
def hang(): return Any()
# test one of everything
def megaschema(obj1=MegaSchema1, obj2=MegaSchema2): return None
class HelperTarget(Referenceable):
implements(RIHelper)
d = None
def __init__(self, name="unnamed"):
self.name = name
self.calls = []   # remote_append() accumulates objects here
def __repr__(self):
return "<HelperTarget %s>" % self.name
def waitfor(self):
self.d = defer.Deferred()
return self.d
def remote_set(self, obj):
self.obj = obj
if self.d:
self.d.callback(obj)
return True
def remote_set2(self, obj1, obj2):
self.obj1 = obj1
self.obj2 = obj2
return True
def remote_append(self, obj):
self.calls.append(obj)
def remote_get(self):
return self.obj
def remote_echo(self, obj):
self.obj = obj
return obj
def remote_defer(self, obj):
return fireEventually(obj)
def remote_hang(self):
self.d = defer.Deferred()
return self.d
def remote_megaschema(self, obj1, obj2):
self.obj1 = obj1
self.obj2 = obj2
return None
class TargetMixin:
def setUp(self):
self.loopbacks = []
def setupBrokers(self):
self.targetBroker = broker.LoggingBroker()
self.callingBroker = broker.LoggingBroker()
t1 = Loopback()
t1.peer = self.callingBroker
t1.protocol = self.targetBroker
self.targetBroker.transport = t1
self.loopbacks.append(t1)
t2 = Loopback()
t2.peer = self.targetBroker
t2.protocol = self.callingBroker
self.callingBroker.transport = t2
self.loopbacks.append(t2)
self.targetBroker.connectionMade()
self.callingBroker.connectionMade()
def tearDown(self):
# returns a Deferred which fires when the Loopbacks are drained
dl = [l.flush() for l in self.loopbacks]
d = defer.DeferredList(dl)
d.addCallback(flushEventualQueue)
return d
def setupTarget(self, target, txInterfaces=False):
# txInterfaces controls what interfaces the sender uses
# False: sender doesn't know about any interfaces
# True: sender gets the actual interface list from the target
# (list): sender uses an artificial interface list
puid = target.processUniqueID()
tracker = self.targetBroker.getTrackerForMyReference(puid, target)
tracker.send()
clid = tracker.clid
if txInterfaces:
iname = getRemoteInterfaceName(target)
else:
iname = None
rtracker = self.callingBroker.getTrackerForYourReference(clid, iname)
rr = rtracker.getRef()
return rr, target
def stall(self, res, timeout):
d = defer.Deferred()
reactor.callLater(timeout, d.callback, res)
return d
def poll(self, check_f, pollinterval=0.01):
# Return a Deferred, then call check_f periodically until it returns
# True, at which point the Deferred will fire. If check_f raises an
# exception, the Deferred will errback.
d = defer.maybeDeferred(self._poll, None, check_f, pollinterval)
return d
def _poll(self, res, check_f, pollinterval):
if check_f():
return True
d = defer.Deferred()
d.addCallback(self._poll, check_f, pollinterval)
reactor.callLater(pollinterval, d.callback, None)
return d
class RIMyTarget(RemoteInterface):
# method constraints can be declared directly:
add1 = RemoteMethodSchema(_response=int, a=int, b=int)
free = UnconstrainedMethod()
# or through their function definitions:
def add(a=int, b=int): return int
#add = schema.callable(add) # the metaclass makes this unnecessary
# but it could be used for adding options or something
def join(a=str, b=str, c=int): return str
def getName(): return str
disputed = RemoteMethodSchema(_response=int, a=int)
def fail(): return str # actually raises an exception
class RIMyTarget2(RemoteInterface):
__remote_name__ = "RIMyTargetInterface2"
sub = RemoteMethodSchema(_response=int, a=int, b=int)
# For some tests, we want the two sides of the connection to disagree about
# the contents of the RemoteInterface they are using. This is remarkably
# difficult to accomplish within a single process. We do it by creating
# something that behaves just barely enough like a RemoteInterface to work.
class FakeTarget(dict):
pass
RIMyTarget3 = FakeTarget()
RIMyTarget3.__remote_name__ = RIMyTarget.__remote_name__
RIMyTarget3['disputed'] = RemoteMethodSchema(_response=int, a=str)
RIMyTarget3['disputed'].name = "disputed"
RIMyTarget3['disputed'].interface = RIMyTarget3
RIMyTarget3['disputed2'] = RemoteMethodSchema(_response=str, a=int)
RIMyTarget3['disputed2'].name = "disputed"
RIMyTarget3['disputed2'].interface = RIMyTarget3
RIMyTarget3['sub'] = RemoteMethodSchema(_response=int, a=int, b=int)
RIMyTarget3['sub'].name = "sub"
RIMyTarget3['sub'].interface = RIMyTarget3
class Target(Referenceable):
implements(RIMyTarget)
def __init__(self, name=None):
self.calls = []
self.name = name
def getMethodSchema(self, methodname):
return None
def remote_add(self, a, b):
self.calls.append((a,b))
return a+b
remote_add1 = remote_add
def remote_free(self, *args, **kwargs):
self.calls.append((args, kwargs))
return "bird"
def remote_getName(self):
return self.name
def remote_disputed(self, a):
return 24
def remote_fail(self):
raise ValueError("you asked me to fail")
class TargetWithoutInterfaces(Target):
# undeclare the RIMyTarget interface
implementsOnly(implementedBy(Referenceable))
class BrokenTarget(Referenceable):
implements(RIMyTarget)
def remote_add(self, a, b):
return "error"
class IFoo(Interface):
# non-remote Interface
pass
class Foo(Referenceable):
implements(IFoo)
class RIDummy(RemoteInterface):
pass
class RITypes(RemoteInterface):
def returns_none(work=bool): return None
def takes_remoteinterface(a=RIDummy): return str
def returns_remoteinterface(work=int): return RIDummy
def takes_interface(a=IFoo): return str
def returns_interface(work=bool): return IFoo
class DummyTarget(Referenceable):
implements(RIDummy)
class TypesTarget(Referenceable):
implements(RITypes)
def remote_returns_none(self, work):
if work:
return None
return "not None"
def remote_takes_remoteinterface(self, a):
# TODO: really, I want to just be able to say:
# if RIDummy.providedBy(a):
iface = a.tracker.interface
if iface and iface == RIDummy:
return "good"
raise RuntimeError("my argument (%s) should provide RIDummy, "
"but doesn't" % a)
def remote_returns_remoteinterface(self, work):
if work == 1:
return DummyTarget()
if work == -1:
return TypesTarget()
return 15
def remote_takes_interface(self, a):
if IFoo.providedBy(a):
return "good"
raise RuntimeError("my argument (%s) should provide IFoo, but doesn't" % a)
def remote_returns_interface(self, work):
if work:
return Foo()
return "not implementor of IFoo"

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1,387 @@
import gc
import re
import sets
if False:
import sys
from twisted.python import log
log.startLogging(sys.stderr)
from twisted.python import failure
from twisted.internet import reactor, defer
from twisted.trial import unittest
from twisted.internet.main import CONNECTION_LOST
from foolscap.tokens import Violation
from foolscap.eventual import flushEventualQueue
from foolscap.test.common import HelperTarget, TargetMixin
from foolscap.test.common import RIMyTarget, Target, TargetWithoutInterfaces, \
BrokenTarget
class Unsendable:
pass
class TestCall(TargetMixin, unittest.TestCase):
def setUp(self):
TargetMixin.setUp(self)
self.setupBrokers()
def testCall1(self):
# this is done without interfaces
rr, target = self.setupTarget(TargetWithoutInterfaces())
d = rr.callRemote("add", a=1, b=2)
d.addCallback(lambda res: self.failUnlessEqual(res, 3))
d.addCallback(lambda res: self.failUnlessEqual(target.calls, [(1,2)]))
d.addCallback(self._testCall1_1, rr)
return d
testCall1.timeout = 3
def _testCall1_1(self, res, rr):
# the caller still holds the RemoteReference
self.failUnless(self.callingBroker.yourReferenceByCLID.has_key(1))
# release the RemoteReference. This does two things: 1) the
# callingBroker will forget about it. 2) they will send a decref to
# the targetBroker so *they* can forget about it.
del rr # this fires a DecRef
gc.collect() # make sure
# we need to give it a moment to deliver the DecRef message and act
# on it
d = defer.Deferred()
reactor.callLater(0.1, d.callback, None)
d.addCallback(self._testCall1_2)
return d
def _testCall1_2(self, res):
self.failIf(self.callingBroker.yourReferenceByCLID.has_key(1))
self.failIf(self.targetBroker.myReferenceByCLID.has_key(1))
def testCall1a(self):
# no interfaces, but use positional args
rr, target = self.setupTarget(TargetWithoutInterfaces())
d = rr.callRemote("add", 1, 2)
d.addCallback(lambda res: self.failUnlessEqual(res, 3))
d.addCallback(lambda res: self.failUnlessEqual(target.calls, [(1,2)]))
return d
testCall1a.timeout = 2
def testCall1b(self):
# no interfaces, use both positional and keyword args
rr, target = self.setupTarget(TargetWithoutInterfaces())
d = rr.callRemote("add", 1, b=2)
d.addCallback(lambda res: self.failUnlessEqual(res, 3))
d.addCallback(lambda res: self.failUnlessEqual(target.calls, [(1,2)]))
return d
testCall1b.timeout = 2
def testFail1(self):
# this is done without interfaces
rr, target = self.setupTarget(TargetWithoutInterfaces())
d = rr.callRemote("fail")
self.failIf(target.calls)
d.addBoth(self._testFail1_1)
return d
testFail1.timeout = 2
def _testFail1_1(self, f):
# f should be a CopiedFailure
self.failUnless(isinstance(f, failure.Failure),
"Hey, we didn't fail: %s" % f)
self.failUnless(f.check(ValueError),
"wrong exception type: %s" % f)
self.failUnlessSubstring("you asked me to fail", f.value)
def testFail2(self):
# this is done without interfaces
rr, target = self.setupTarget(TargetWithoutInterfaces())
d = rr.callRemote("add", a=1, b=2, c=3)
# add() does not take a 'c' argument, so we get a TypeError here
self.failIf(target.calls)
d.addBoth(self._testFail2_1)
return d
testFail2.timeout = 2
def _testFail2_1(self, f):
self.failUnless(isinstance(f, failure.Failure),
"Hey, we didn't fail: %s" % f)
self.failUnless(f.check(TypeError),
"wrong exception type: %s" % f.type)
self.failUnlessSubstring("remote_add() got an unexpected keyword "
"argument 'c'", f.value)
def testFail3(self):
# this is done without interfaces
rr, target = self.setupTarget(TargetWithoutInterfaces())
d = rr.callRemote("bogus", a=1, b=2)
# the target does not have .bogus method, so we get an AttributeError
self.failIf(target.calls)
d.addBoth(self._testFail3_1)
return d
testFail3.timeout = 2
def _testFail3_1(self, f):
self.failUnless(isinstance(f, failure.Failure),
"Hey, we didn't fail: %s" % f)
self.failUnless(f.check(AttributeError),
"wrong exception type: %s" % f.type)
self.failUnlessSubstring("TargetWithoutInterfaces", str(f))
self.failUnlessSubstring(" has no attribute 'remote_bogus'", str(f))
def testCall2(self):
# server end uses an interface this time, but not the client end
rr, target = self.setupTarget(Target(), True)
d = rr.callRemote("add", a=3, b=4, _useSchema=False)
# the schema is enforced upon receipt
d.addCallback(lambda res: self.failUnlessEqual(res, 7))
return d
testCall2.timeout = 2
def testCall3(self):
# use interface on both sides
rr, target = self.setupTarget(Target(), True)
d = rr.callRemote('add', 3, 4) # enforces schemas
d.addCallback(lambda res: self.failUnlessEqual(res, 7))
return d
testCall3.timeout = 2
def testCall4(self):
# call through a manually-defined RemoteMethodSchema
rr, target = self.setupTarget(Target(), True)
d = rr.callRemote("add", 3, 4, _methodConstraint=RIMyTarget['add1'])
d.addCallback(lambda res: self.failUnlessEqual(res, 7))
return d
testCall4.timeout = 2
def testMegaSchema(self):
# try to exercise all our constraints at once
rr, target = self.setupTarget(HelperTarget())
t = (sets.Set([1, 2, 3]),
"str", True, 12, 12L, 19.3, None,
"any", 14.3,
15,
"a"*95,
"1234567890",
)
obj1 = {"key": [t]}
obj2 = (sets.Set([1,2,3]), [1,2,3], {1:"two"})
d = rr.callRemote("megaschema", obj1, obj2)
d.addCallback(lambda res: self.failUnlessEqual(res, None))
return d
def testUnconstrainedMethod(self):
rr, target = self.setupTarget(Target(), True)
d = rr.callRemote('free', 3, 4, x="boo")
def _check(res):
self.failUnlessEqual(res, "bird")
self.failUnlessEqual(target.calls, [((3,4), {"x": "boo"})])
d.addCallback(_check)
return d
def testFailWrongMethodLocal(self):
# the caller knows that this method does not really exist
rr, target = self.setupTarget(Target(), True)
d = rr.callRemote("bogus") # RIMyTarget doesn't implement .bogus()
d.addCallbacks(lambda res: self.fail("should have failed"),
self._testFailWrongMethodLocal_1)
return d
testFailWrongMethodLocal.timeout = 2
def _testFailWrongMethodLocal_1(self, f):
self.failUnless(f.check(Violation))
self.failUnless(re.search(r'RIMyTarget\(.*\) does not offer bogus',
str(f)))
def testFailWrongMethodRemote(self):
# if the target doesn't specify any remote interfaces, then the
# calling side shouldn't try to do any checking. The problem is
# caught on the target side.
rr, target = self.setupTarget(Target(), False)
d = rr.callRemote("bogus") # RIMyTarget doesn't implement .bogus()
d.addCallbacks(lambda res: self.fail("should have failed"),
self._testFailWrongMethodRemote_1)
return d
testFailWrongMethodRemote.timeout = 2
def _testFailWrongMethodRemote_1(self, f):
self.failUnless(f.check(Violation))
self.failUnlessSubstring("method 'bogus' not defined in RIMyTarget",
str(f))
def testFailWrongMethodRemote2(self):
# call a method which doesn't actually exist. The sender thinks
# they're ok but the recipient catches the violation
rr, target = self.setupTarget(Target(), True)
d = rr.callRemote("bogus", _useSchema=False)
# RIMyTarget2 has a 'sub' method, but RIMyTarget (the real interface)
# does not
d.addCallbacks(lambda res: self.fail("should have failed"),
self._testFailWrongMethodRemote2_1)
d.addCallback(lambda res: self.failIf(target.calls))
return d
testFailWrongMethodRemote2.timeout = 2
def _testFailWrongMethodRemote2_1(self, f):
self.failUnless(f.check(Violation))
self.failUnless(re.search(r'RIMyTarget\(.*\) does not offer bogus',
str(f)))
def testFailWrongArgsLocal1(self):
# we violate the interface (extra arg), and the sender should catch it
rr, target = self.setupTarget(Target(), True)
d = rr.callRemote("add", a=1, b=2, c=3)
d.addCallbacks(lambda res: self.fail("should have failed"),
self._testFailWrongArgsLocal1_1)
d.addCallback(lambda res: self.failIf(target.calls))
return d
testFailWrongArgsLocal1.timeout = 2
def _testFailWrongArgsLocal1_1(self, f):
self.failUnless(f.check(Violation))
self.failUnlessSubstring("unknown argument 'c'", str(f.value))
def testFailWrongArgsLocal2(self):
# we violate the interface (bad arg), and the sender should catch it
rr, target = self.setupTarget(Target(), True)
d = rr.callRemote("add", a=1, b="two")
d.addCallbacks(lambda res: self.fail("should have failed"),
self._testFailWrongArgsLocal2_1)
d.addCallback(lambda res: self.failIf(target.calls))
return d
testFailWrongArgsLocal2.timeout = 2
def _testFailWrongArgsLocal2_1(self, f):
self.failUnless(f.check(Violation))
self.failUnlessSubstring("not a number", str(f.value))
def testFailWrongArgsRemote1(self):
# the sender thinks they're ok but the recipient catches the
# violation
rr, target = self.setupTarget(Target(), True)
d = rr.callRemote("add", a=1, b="foo", _useSchema=False)
d.addCallbacks(lambda res: self.fail("should have failed"),
self._testFailWrongArgsRemote1_1)
d.addCallback(lambda res: self.failIf(target.calls))
return d
testFailWrongArgsRemote1.timeout = 2
def _testFailWrongArgsRemote1_1(self, f):
self.failUnless(f.check(Violation))
self.failUnlessSubstring("STRING token rejected by IntegerConstraint",
f.value)
self.failUnlessSubstring("<RootUnslicer>.<methodcall", f.value)
self.failUnlessSubstring(" methodname=add", f.value)
self.failUnlessSubstring("<arguments arg[b]>", f.value)
def testFailWrongReturnRemote(self):
rr, target = self.setupTarget(BrokenTarget(), True)
d = rr.callRemote("add", 3, 4) # violates return constraint
d.addCallbacks(lambda res: self.fail("should have failed"),
self._testFailWrongReturnRemote_1)
return d
testFailWrongReturnRemote.timeout = 2
def _testFailWrongReturnRemote_1(self, f):
self.failUnless(f.check(Violation))
self.failUnlessSubstring("in return value of <foolscap.test.common.BrokenTarget object at ", f.value)
self.failUnlessSubstring(">.add", f.value)
self.failUnlessSubstring("not a number", f.value)
def testFailWrongReturnLocal(self):
# the target returns a value which violates our _resultConstraint
rr, target = self.setupTarget(Target(), True)
d = rr.callRemote("add", a=1, b=2, _resultConstraint=str)
# The target returns an int, which matches the schema they're using,
# so they think they're ok. We've overridden our expectations to
# require a string.
d.addCallbacks(lambda res: self.fail("should have failed"),
self._testFailWrongReturnLocal_1)
# the method should have been run
d.addCallback(lambda res: self.failUnless(target.calls))
return d
testFailWrongReturnLocal.timeout = 2
def _testFailWrongReturnLocal_1(self, f):
self.failUnless(f.check(Violation))
self.failUnlessSubstring("INT token rejected by StringConstraint",
str(f))
self.failUnlessSubstring("in inbound method results", str(f))
self.failUnlessSubstring("<RootUnslicer>.Answer(req=1)", str(f))
def testDefer(self):
rr, target = self.setupTarget(HelperTarget())
d = rr.callRemote("defer", obj=12)
d.addCallback(lambda res: self.failUnlessEqual(res, 12))
return d
testDefer.timeout = 2
def testDisconnect1(self):
rr, target = self.setupTarget(HelperTarget())
d = rr.callRemote("hang")
e = RuntimeError("lost connection")
rr.tracker.broker.transport.loseConnection(e)
d.addCallbacks(lambda res: self.fail("should have failed"),
lambda why: why.trap(RuntimeError) and None)
return d
testDisconnect1.timeout = 2
def disconnected(self, *args, **kwargs):
self.lost = 1
self.lost_args = (args, kwargs)
def testDisconnect2(self):
rr, target = self.setupTarget(HelperTarget())
self.lost = 0
rr.notifyOnDisconnect(self.disconnected)
rr.tracker.broker.transport.loseConnection(CONNECTION_LOST)
d = flushEventualQueue()
def _check(res):
self.failUnless(self.lost)
self.failUnlessEqual(self.lost_args, ((),{}))
d.addCallback(_check)
return d
def testDisconnect3(self):
rr, target = self.setupTarget(HelperTarget())
self.lost = 0
m = rr.notifyOnDisconnect(self.disconnected)
rr.dontNotifyOnDisconnect(m)
rr.tracker.broker.transport.loseConnection(CONNECTION_LOST)
d = flushEventualQueue()
d.addCallback(lambda res: self.failIf(self.lost))
return d
def testDisconnect4(self):
rr, target = self.setupTarget(HelperTarget())
self.lost = 0
rr.notifyOnDisconnect(self.disconnected, "arg", foo="kwarg")
rr.tracker.broker.transport.loseConnection(CONNECTION_LOST)
d = flushEventualQueue()
def _check(res):
self.failUnless(self.lost)
self.failUnlessEqual(self.lost_args, (("arg",),
{"foo": "kwarg"}))
d.addCallback(_check)
return d
def testUnsendable(self):
rr, target = self.setupTarget(HelperTarget())
d = rr.callRemote("set", obj=Unsendable())
d.addCallbacks(lambda res: self.fail("should have failed"),
self._testUnsendable_1)
return d
testUnsendable.timeout = 2
def _testUnsendable_1(self, why):
self.failUnless(why.check(Violation))
self.failUnlessSubstring("cannot serialize", why.value.args[0])
class TestCallOnly(TargetMixin, unittest.TestCase):
def setUp(self):
TargetMixin.setUp(self)
self.setupBrokers()
def testCallOnly(self):
rr, target = self.setupTarget(TargetWithoutInterfaces())
ret = rr.callRemoteOnly("add", a=1, b=2)
self.failUnlessIdentical(ret, None)
# since we don't have a Deferred to wait upon, we just have to poll
# for the call to take place. It should happen pretty quickly.
def _check():
if target.calls:
self.failUnlessEqual(target.calls, [(1,2)])
return True
return False
d = self.poll(_check)
return d
testCallOnly.timeout = 2

View File

@ -0,0 +1,291 @@
from twisted.trial import unittest
from twisted.python import components, failure
from foolscap.test.common import TargetMixin, HelperTarget
from foolscap import copyable, tokens
from foolscap import Copyable, RemoteCopy
from foolscap.tokens import Violation
# MyCopyable1 is the basic Copyable/RemoteCopy pair, using auto-registration.
class MyCopyable1(Copyable):
typeToCopy = "foolscap.test_copyable.MyCopyable1"
pass
class MyRemoteCopy1(RemoteCopy):
copytype = MyCopyable1.typeToCopy
pass
#registerRemoteCopy(MyCopyable1.typeToCopy, MyRemoteCopy1)
# MyCopyable2 overrides the various Copyable/RemoteCopy methods. It
# also sets 'copytype' to auto-register with a matching name
class MyCopyable2(Copyable):
def getTypeToCopy(self):
return "MyCopyable2name"
def getStateToCopy(self):
return {"a": 1, "b": self.b}
class MyRemoteCopy2(RemoteCopy):
copytype = "MyCopyable2name"
def setCopyableState(self, state):
self.c = 1
self.d = state["b"]
# MyCopyable3 uses a custom Slicer and a custom Unslicer
class MyCopyable3:
def getAlternateCopyableState(self):
return {"e": 2}
class MyCopyable3Slicer(copyable.CopyableSlicer):
def slice(self, streamable, banana):
yield 'copyable'
yield "MyCopyable3name"
state = self.obj.getAlternateCopyableState()
for k,v in state.iteritems():
yield k
yield v
class MyRemoteCopy3:
pass
class MyRemoteCopy3Unslicer(copyable.RemoteCopyUnslicer):
def __init__(self):
self.schema = None
def factory(self, state):
obj = MyRemoteCopy3()
obj.__dict__ = state
return obj
def receiveClose(self):
obj,d = copyable.RemoteCopyUnslicer.receiveClose(self)
obj.f = "yes"
return obj, d
# register MyCopyable3Slicer as an ISlicer adapter for MyCopyable3, so we
# can verify that it overrides the inherited CopyableSlicer behavior. We
# also register an Unslicer to create the results.
components.registerAdapter(MyCopyable3Slicer, MyCopyable3, tokens.ISlicer)
copyable.registerRemoteCopyUnslicerFactory("MyCopyable3name",
MyRemoteCopy3Unslicer)
# MyCopyable4 uses auto-registration, and adds a stateSchema
class MyCopyable4(Copyable):
typeToCopy = "foolscap.test_copyable.MyCopyable4"
pass
class MyRemoteCopy4(RemoteCopy):
copytype = MyCopyable4.typeToCopy
stateSchema = copyable.AttributeDictConstraint(('foo', int),
('bar', str))
pass
# MyCopyable5 disables auto-registration
class MyRemoteCopy5(RemoteCopy):
copytype = None # disable auto-registration
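# Illustrative sketch ('ExampleRecord' is a hypothetical name, not used by the
# tests): the minimal auto-registration pairing exercised above is a Copyable
# subclass with a unique typeToCopy string on the sending side, plus a
# RemoteCopy subclass with a matching copytype on the receiving side.
class ExampleRecord(Copyable):
    typeToCopy = "foolscap.test_copyable.ExampleRecord"
class ExampleRemoteRecord(RemoteCopy):
    copytype = ExampleRecord.typeToCopy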
class Copyable(TargetMixin, unittest.TestCase):
def setUp(self):
TargetMixin.setUp(self)
self.setupBrokers()
if 0:
print
self.callingBroker.doLog = "TX"
self.targetBroker.doLog = " rx"
def send(self, arg):
rr, target = self.setupTarget(HelperTarget())
d = rr.callRemote("set", obj=arg)
d.addCallback(self.failUnless)
# some of these tests require that we return a Failure object, so we
# have to wrap this in a tuple to survive the Deferred.
d.addCallback(lambda res: (target.obj,))
return d
def testCopy0(self):
d = self.send(1)
d.addCallback(self.failUnlessEqual, (1,))
return d
def testFailure1(self):
self.callingBroker.unsafeTracebacks = True
try:
raise RuntimeError("message here")
except:
f0 = failure.Failure()
d = self.send(f0)
d.addCallback(self._testFailure1_1)
return d
def _testFailure1_1(self, (f,)):
#print "CopiedFailure is:", f
#print f.__dict__
self.failUnlessEqual(f.type, "exceptions.RuntimeError")
self.failUnlessEqual(f.value, "message here")
self.failUnlessEqual(f.frames, [])
self.failUnlessEqual(f.tb, None)
self.failUnlessEqual(f.stack, [])
# there should be a traceback
self.failUnless(f.traceback.find("raise RuntimeError") != -1)
def testFailure2(self):
self.callingBroker.unsafeTracebacks = False
try:
raise RuntimeError("message here")
except:
f0 = failure.Failure()
d = self.send(f0)
d.addCallback(self._testFailure2_1)
return d
def _testFailure2_1(self, (f,)):
#print "CopiedFailure is:", f
#print f.__dict__
self.failUnlessEqual(f.type, "exceptions.RuntimeError")
self.failUnlessEqual(f.value, "message here")
self.failUnlessEqual(f.frames, [])
self.failUnlessEqual(f.tb, None)
self.failUnlessEqual(f.stack, [])
# there should not be a traceback
self.failUnlessEqual(f.traceback, "Traceback unavailable\n")
def testCopy1(self):
obj = MyCopyable1() # just copies the dict
obj.a = 12
obj.b = "foo"
d = self.send(obj)
d.addCallback(self._testCopy1_1)
return d
def _testCopy1_1(self, (res,)):
self.failUnless(isinstance(res, MyRemoteCopy1))
self.failUnlessEqual(res.a, 12)
self.failUnlessEqual(res.b, "foo")
def testCopy2(self):
obj = MyCopyable2() # has a custom getStateToCopy
obj.a = 12 # ignored
obj.b = "foo"
d = self.send(obj)
d.addCallback(self._testCopy2_1)
return d
def _testCopy2_1(self, (res,)):
self.failUnless(isinstance(res, MyRemoteCopy2))
self.failUnlessEqual(res.c, 1)
self.failUnlessEqual(res.d, "foo")
self.failIf(hasattr(res, "a"))
def testCopy3(self):
obj = MyCopyable3() # has a custom Slicer
obj.a = 12 # ignored
obj.b = "foo" # ignored
d = self.send(obj)
d.addCallback(self._testCopy3_1)
return d
def _testCopy3_1(self, (res,)):
self.failUnless(isinstance(res, MyRemoteCopy3))
self.failUnlessEqual(res.e, 2)
self.failUnlessEqual(res.f, "yes")
self.failIf(hasattr(res, "a"))
def testCopy4(self):
obj = MyCopyable4()
obj.foo = 12
obj.bar = "bar"
d = self.send(obj)
d.addCallback(self._testCopy4_1, obj)
return d
def _testCopy4_1(self, (res,), obj):
self.failUnless(isinstance(res, MyRemoteCopy4))
self.failUnlessEqual(res.foo, 12)
self.failUnlessEqual(res.bar, "bar")
obj.bad = "unwanted attribute"
d = self.send(obj)
d.addCallbacks(lambda res: self.fail("this was supposed to fail"),
self._testCopy4_2, errbackArgs=(obj,))
return d
def _testCopy4_2(self, why, obj):
why.trap(Violation)
self.failUnlessSubstring("unknown attribute 'bad'", str(why))
del obj.bad
obj.foo = "not a number"
d = self.send(obj)
d.addCallbacks(lambda res: self.fail("this was supposed to fail"),
self._testCopy4_3, errbackArgs=(obj,))
return d
def _testCopy4_3(self, why, obj):
why.trap(Violation)
self.failUnlessSubstring("STRING token rejected by IntegerConstraint",
str(why))
obj.foo = 12
obj.bar = "very long " * 1000
d = self.send(obj)
d.addCallbacks(lambda res: self.fail("this was supposed to fail"),
self._testCopy4_4)
return d
def _testCopy4_4(self, why):
why.trap(Violation)
self.failUnlessSubstring("token too large", str(why))
class Registration(unittest.TestCase):
def testRegistration(self):
rc_classes = copyable.debug_RemoteCopyClasses
copyable_classes = rc_classes.values()
self.failUnless(MyRemoteCopy1 in copyable_classes)
self.failUnless(MyRemoteCopy2 in copyable_classes)
self.failUnlessIdentical(rc_classes["MyCopyable2name"],
MyRemoteCopy2)
self.failIf(MyRemoteCopy5 in copyable_classes)
##############
# verify that ICopyable adapters are actually usable
class TheThirdPartyClassThatIWantToCopy:
def __init__(self, a, b):
self.a = a
self.b = b
def copy_ThirdPartyClass(orig):
return "TheThirdPartyClassThatIWantToCopy_name", orig.__dict__
copyable.registerCopier(TheThirdPartyClassThatIWantToCopy,
copy_ThirdPartyClass)
def make_ThirdPartyClass(state):
# unpack the state into constructor arguments
a = state['a']; b = state['b']
# now create the object with the constructor
return TheThirdPartyClassThatIWantToCopy(a, b)
copyable.registerRemoteCopyFactory("TheThirdPartyClassThatIWantToCopy_name",
make_ThirdPartyClass)
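# With the copier and the factory registered, instances of the third-party
# class (which never subclasses Copyable) can cross the wire; the Adaptation
# test below sends one and expects a reconstructed instance back.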
class Adaptation(TargetMixin, unittest.TestCase):
def setUp(self):
TargetMixin.setUp(self)
self.setupBrokers()
if 0:
print
self.callingBroker.doLog = "TX"
self.targetBroker.doLog = " rx"
def send(self, arg):
rr, target = self.setupTarget(HelperTarget())
d = rr.callRemote("set", obj=arg)
d.addCallback(self.failUnless)
# some of these tests require that we return a Failure object, so we
# have to wrap this in a tuple to survive the Deferred.
d.addCallback(lambda res: (target.obj,))
return d
def testAdaptation(self):
obj = TheThirdPartyClassThatIWantToCopy(45, 91)
d = self.send(obj)
d.addCallback(self._testAdaptation_1)
return d
def _testAdaptation_1(self, (res,)):
self.failUnless(isinstance(res, TheThirdPartyClassThatIWantToCopy))
self.failUnlessEqual(res.a, 45)
self.failUnlessEqual(res.b, 91)

View File

@ -0,0 +1,197 @@
import re
from twisted.trial import unittest
from zope.interface import implements
from twisted.internet import defer
from foolscap import pb
from foolscap import RemoteInterface, Referenceable, Tub
from foolscap.remoteinterface import RemoteMethodSchema
from foolscap.eventual import flushEventualQueue
crypto_available = False
try:
from foolscap import crypto
crypto_available = crypto.available
except ImportError:
pass
class RIMyCryptoTarget(RemoteInterface):
# method constraints can be declared directly:
add1 = RemoteMethodSchema(_response=int, a=int, b=int)
# or through their function definitions:
def add(a=int, b=int): return int
#add = schema.callable(add) # the metaclass makes this unnecessary
# but it could be used for adding options or something
def join(a=str, b=str, c=int): return str
def getName(): return str
class Target(Referenceable):
implements(RIMyCryptoTarget)
def __init__(self, name=None):
self.calls = []
self.name = name
def getMethodSchema(self, methodname):
return None
def remote_add(self, a, b):
self.calls.append((a,b))
return a+b
remote_add1 = remote_add
def remote_getName(self):
return self.name
def remote_disputed(self, a):
return 24
def remote_fail(self):
raise ValueError("you asked me to fail")
class UsefulMixin:
num_services = 2
def setUp(self):
if not crypto_available:
raise unittest.SkipTest("crypto not available")
self.services = []
for i in range(self.num_services):
s = Tub()
s.startService()
self.services.append(s)
def tearDown(self):
d = defer.DeferredList([s.stopService() for s in self.services])
d.addCallback(self._tearDown_1)
return d
def _tearDown_1(self, res):
self.failIf(pb.Listeners)
return flushEventualQueue()
class TestPersist(UsefulMixin, unittest.TestCase):
num_services = 2
def testPersist(self):
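# the TubID comes from the certificate data, so a fresh Tub built from the
# same certData (s3 below) should still answer the original registered name
# once the port number in the URL is updated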
t1 = Target()
s1,s2 = self.services
l1 = s1.listenOn("0")
port = l1.getPortnum()
s1.setLocation("127.0.0.1:%d" % port)
public_url = s1.registerReference(t1, "name")
self.failUnless(public_url.startswith("pb:"))
d = defer.maybeDeferred(s1.stopService)
d.addCallback(self._testPersist_1, s1, s2, t1, public_url, port)
return d
testPersist.timeout = 5
def _testPersist_1(self, res, s1, s2, t1, public_url, port):
self.services.remove(s1)
s3 = Tub(certData=s1.getCertData())
s3.startService()
self.services.append(s3)
t2 = Target()
l3 = s3.listenOn("0")
newport = l3.getPortnum()
s3.setLocation("127.0.0.1:%d" % newport)
s3.registerReference(t2, "name")
# now patch the URL to replace the port number
newurl = re.sub(":%d/" % port, ":%d/" % newport, public_url)
d = s2.getReference(newurl)
d.addCallback(lambda rr: rr.callRemote("add", a=1, b=2))
d.addCallback(self.failUnlessEqual, 3)
d.addCallback(self._testPersist_2, t1, t2)
return d
def _testPersist_2(self, res, t1, t2):
self.failUnlessEqual(t1.calls, [])
self.failUnlessEqual(t2.calls, [(1,2)])
class TestListeners(UsefulMixin, unittest.TestCase):
num_services = 3
def testListenOn(self):
s1 = self.services[0]
l = s1.listenOn("0")
self.failUnless(isinstance(l, pb.Listener))
self.failUnlessEqual(len(s1.getListeners()), 1)
self.failUnlessEqual(len(pb.Listeners), 1)
s1.stopListeningOn(l)
self.failUnlessEqual(len(s1.getListeners()), 0)
self.failUnlessEqual(len(pb.Listeners), 0)
def testGetPort1(self):
s1,s2,s3 = self.services
s1.listenOn("0")
listeners = s1.getListeners()
self.failUnlessEqual(len(listeners), 1)
portnum = listeners[0].getPortnum()
self.failUnless(portnum) # not 0, not None, must be *something*
def testGetPort2(self):
s1,s2,s3 = self.services
s1.listenOn("0")
listeners = s1.getListeners()
self.failUnlessEqual(len(listeners), 1)
portnum = listeners[0].getPortnum()
self.failUnless(portnum) # not 0, not None, must be *something*
s1.listenOn("0") # listen on a second port too
l2 = s1.getListeners()
self.failUnlessEqual(len(l2), 2)
self.failIfEqual(l2[0].getPortnum(), l2[1].getPortnum())
s2.listenOn(l2[0])
l3 = s2.getListeners()
self.failUnlessIdentical(l2[0], l3[0])
self.failUnlessEqual(l2[0].getPortnum(), l3[0].getPortnum())
def testShared(self):
s1,s2,s3 = self.services
# s1 and s2 will share a Listener
l1 = s1.listenOn("tcp:0:interface=127.0.0.1")
s1.setLocation("127.0.0.1:%d" % l1.getPortnum())
s2.listenOn(l1)
s2.setLocation("127.0.0.1:%d" % l1.getPortnum())
t1 = Target("one")
t2 = Target("two")
self.targets = [t1,t2]
url1 = s1.registerReference(t1, "target")
url2 = s2.registerReference(t2, "target")
self.urls = [url1, url2]
d = s3.getReference(url1)
d.addCallback(lambda ref: ref.callRemote('add', a=1, b=1))
d.addCallback(lambda res: s3.getReference(url2))
d.addCallback(lambda ref: ref.callRemote('add', a=2, b=2))
d.addCallback(self._testShared_1)
return d
testShared.timeout = 5
def _testShared_1(self, res):
t1,t2 = self.targets
self.failUnlessEqual(t1.calls, [(1,1)])
self.failUnlessEqual(t2.calls, [(2,2)])
def testSharedTransfer(self):
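# a Listener shared by several Tubs should outlive its original parent:
# when the current parentTub stops listening or shuts down, ownership is
# handed to one of the remaining Tubs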
s1,s2,s3 = self.services
# s1 and s2 will share a Listener
l1 = s1.listenOn("tcp:0:interface=127.0.0.1")
s1.setLocation("127.0.0.1:%d" % l1.getPortnum())
s2.listenOn(l1)
s2.setLocation("127.0.0.1:%d" % l1.getPortnum())
self.failUnless(l1.parentTub is s1)
s1.stopListeningOn(l1)
self.failUnless(l1.parentTub is s2)
s3.listenOn(l1)
self.failUnless(l1.parentTub is s2)
d = s2.stopService()
d.addCallback(self._testSharedTransfer_1, l1, s2, s3)
return d
testSharedTransfer.timeout = 5
def _testSharedTransfer_1(self, res, l1, s2, s3):
self.services.remove(s2)
self.failUnless(l1.parentTub is s3)
def testClone(self):
s1,s2,s3 = self.services
l1 = s1.listenOn("tcp:0:interface=127.0.0.1")
s1.setLocation("127.0.0.1:%d" % l1.getPortnum())
s4 = s1.clone()
s4.startService()
self.services.append(s4)
self.failUnlessEqual(s1.getListeners(), s4.getListeners())

View File

@ -0,0 +1,42 @@
from twisted.trial import unittest
from foolscap.eventual import eventually, fireEventually, flushEventualQueue
class TestEventual(unittest.TestCase):
def tearDown(self):
return flushEventualQueue()
def testSend(self):
results = []
eventually(results.append, 1)
self.failIf(results)
def _check():
self.failUnlessEqual(results, [1])
eventually(_check)
def _check2():
self.failUnlessEqual(results, [1,2])
eventually(results.append, 2)
eventually(_check2)
def testFlush(self):
results = []
eventually(results.append, 1)
eventually(results.append, 2)
d = flushEventualQueue()
def _check(res):
self.failUnlessEqual(results, [1,2])
d.addCallback(_check)
return d
def testFire(self):
results = []
fireEventually(1).addCallback(results.append)
fireEventually(2).addCallback(results.append)
self.failIf(results)
def _check(res):
self.failUnlessEqual(results, [1,2])
d = flushEventualQueue()
d.addCallback(_check)
return d

View File

@ -0,0 +1,231 @@
from twisted.trial import unittest
from twisted.internet import defer
from twisted.internet.error import ConnectionDone, ConnectionLost
from foolscap import Tub, UnauthenticatedTub
from foolscap.referenceable import RemoteReference
from foolscap.test.common import HelperTarget
from foolscap.eventual import flushEventualQueue
crypto_available = False
try:
from foolscap import crypto
crypto_available = crypto.available
except ImportError:
pass
# we use authenticated tubs if possible. If crypto is not available, fall
# back to unauthenticated ones
GoodEnoughTub = UnauthenticatedTub
if crypto_available:
GoodEnoughTub = Tub
def ignoreConnectionDone(f):
f.trap(ConnectionDone, ConnectionLost)
return None
class Gifts(unittest.TestCase):
# Here we test the three-party introduction process as depicted in the
# classic Granovetter diagram. Alice has a reference to Bob and another
# one to Carol. Alice wants to give her Carol-reference to Bob, by
# including it as the argument to a method she invokes on her
# Bob-reference.
debug = False
def setUp(self):
self.services = [GoodEnoughTub(), GoodEnoughTub(), GoodEnoughTub()]
self.tubA, self.tubB, self.tubC = self.services
for s in self.services:
s.startService()
l = s.listenOn("tcp:0:interface=127.0.0.1")
s.setLocation("127.0.0.1:%d" % l.getPortnum())
def tearDown(self):
d = defer.DeferredList([s.stopService() for s in self.services])
d.addCallback(flushEventualQueue)
return d
def createCharacters(self):
self.alice = HelperTarget("alice")
self.bob = HelperTarget("bob")
self.bob_url = self.tubB.registerReference(self.bob)
self.carol = HelperTarget("carol")
self.carol_url = self.tubC.registerReference(self.carol)
self.cindy = HelperTarget("cindy")
# cindy is Carol's little sister. She doesn't have a phone, but
# Carol might talk about her anyway.
def createInitialReferences(self):
# we must start by giving Alice a reference to both Bob and Carol.
if self.debug: print "Alice gets Bob"
d = self.tubA.getReference(self.bob_url)
def _aliceGotBob(abob):
if self.debug: print "Alice got bob"
self.abob = abob # Alice's reference to Bob
if self.debug: print "Alice gets carol"
d = self.tubA.getReference(self.carol_url)
return d
d.addCallback(_aliceGotBob)
def _aliceGotCarol(acarol):
if self.debug: print "Alice got carol"
self.acarol = acarol # Alice's reference to Carol
d.addCallback(_aliceGotCarol)
return d
def testGift(self):
#defer.setDebugging(True)
self.createCharacters()
d = self.createInitialReferences()
def _introduce(res):
d2 = self.bob.waitfor()
if self.debug: print "Alice introduces Carol to Bob"
# send the gift. This might not get acked by the time the test is
# done and everything is torn down, so explicitly silence any
# ConnectionDone error that might result. When we get
# callRemoteOnly(), use that instead.
d3 = self.abob.callRemote("set", obj=(self.alice, self.acarol))
d3.addErrback(ignoreConnectionDone)
return d2 # this fires with the gift that bob got
d.addCallback(_introduce)
def _bobGotCarol((balice,bcarol)):
if self.debug: print "Bob got Carol"
self.bcarol = bcarol
if self.debug: print "Bob says something to Carol"
d2 = self.carol.waitfor()
# handle ConnectionDone as described before
d3 = self.bcarol.callRemote("set", obj=12)
d3.addErrback(ignoreConnectionDone)
return d2
d.addCallback(_bobGotCarol)
def _carolCalled(res):
if self.debug: print "Carol heard from Bob"
self.failUnlessEqual(res, 12)
d.addCallback(_carolCalled)
return d
def testImplicitGift(self):
# in this test, Carol was registered in her Tub (using
# registerReference), but Cindy was not. Alice is given a reference
# to Carol, then uses that to get a reference to Cindy. Then Alice
# sends a message to Bob and includes a reference to Cindy. The test
# here is that we can make gifts out of references that were not
# passed to registerReference explicitly.
#defer.setDebugging(True)
self.createCharacters()
# the message from Alice to Bob will include a reference to Cindy
d = self.createInitialReferences()
def _tell_alice_about_cindy(res):
self.carol.obj = self.cindy
cindy_d = self.acarol.callRemote("get")
return cindy_d
d.addCallback(_tell_alice_about_cindy)
def _introduce(a_cindy):
# alice now has references to carol (self.acarol) and cindy
# (a_cindy). She sends both of them (plus a reference to herself)
# to bob.
d2 = self.bob.waitfor()
if self.debug: print "Alice introduces Carol to Bob"
# send the gift. This might not get acked by the time the test is
# done and everything is torn down, so explicitly silence any
# ConnectionDone error that might result. When we get
# callRemoteOnly(), use that instead.
d3 = self.abob.callRemote("set", obj=(self.alice,
self.acarol,
a_cindy))
d3.addErrback(ignoreConnectionDone)
return d2 # this fires with the gift that bob got
d.addCallback(_introduce)
def _bobGotCarol((b_alice,b_carol,b_cindy)):
if self.debug: print "Bob got Carol"
self.failUnless(b_alice)
self.failUnless(b_carol)
self.failUnless(b_cindy)
self.bcarol = b_carol
if self.debug: print "Bob says something to Carol"
d2 = self.carol.waitfor()
if self.debug: print "Bob says something to Cindy"
d3 = self.cindy.waitfor()
# handle ConnectionDone as described before
d4 = b_carol.callRemote("set", obj=4)
d4.addErrback(ignoreConnectionDone)
d5 = b_cindy.callRemote("set", obj=5)
d5.addErrback(ignoreConnectionDone)
return defer.DeferredList([d2,d3])
d.addCallback(_bobGotCarol)
def _carolAndCindyCalled(res):
if self.debug: print "Carol heard from Bob"
((carol_s, carol_result), (cindy_s, cindy_result)) = res
self.failUnless(carol_s)
self.failUnless(cindy_s)
self.failUnlessEqual(carol_result, 4)
self.failUnlessEqual(cindy_result, 5)
d.addCallback(_carolAndCindyCalled)
return d
def testOrdering(self):
self.createCharacters()
self.bob.calls = []
d = self.createInitialReferences()
def _introduce(res):
# we send three messages to Bob. The second one contains the
# reference to Carol.
dl = []
dl.append(self.abob.callRemote("append", obj=1))
dl.append(self.abob.callRemote("append", obj=self.acarol))
dl.append(self.abob.callRemote("append", obj=3))
return defer.DeferredList(dl)
d.addCallback(_introduce)
def _checkBob(res):
# this runs after all three messages have been acked by Bob
self.failUnlessEqual(len(self.bob.calls), 3)
self.failUnlessEqual(self.bob.calls[0], 1)
self.failUnless(isinstance(self.bob.calls[1], RemoteReference))
self.failUnlessEqual(self.bob.calls[2], 3)
d.addCallback(_checkBob)
return d
# this was used to check that alice's reference to carol (self.acarol) appeared in
# alice's gift table at the right time, to make sure that the
# RemoteReference is kept alive while the gift is in transit. The whole
# introduction pattern is going to change soon, so it has been disabled
# until I figure out what the new scheme ought to be asserting.
def OFF_bobGotCarol(self, (balice,bcarol)):
if self.debug: print "Bob got Carol"
# Bob has received the gift
self.bcarol = bcarol
# wait for alice to receive bob's 'decgift' sequence, which was sent
# by now (it is sent after bob receives the gift but before the
# gift-bearing message is delivered). To make sure alice has received
# it, send a message back along the same path.
def _check_alice(res):
if self.debug: print "Alice should have the decgift"
# alice's gift table should be empty
brokerAB = self.abob.tracker.broker
self.failUnlessEqual(brokerAB.myGifts, {})
self.failUnlessEqual(brokerAB.myGiftsByGiftID, {})
d1 = self.alice.waitfor()
d1.addCallback(_check_alice)
# the ack from this message doesn't always make it back by the time
# we end the test and hang up the connection. That connectionLost
# causes the deferred that this returns to errback, triggering an
# error, so we must be sure to discard any error from it. TODO: turn
# this into balice.callRemoteOnly("set", 39), which will have the
# same semantics from our point of view (but in addition it will tell
# the recipient to not bother sending a response).
balice.callRemote("set", 39).addErrback(lambda ignored: None)
if self.debug: print "Bob says something to Carol"
d2 = self.carol.waitfor()
d = self.bcarol.callRemote("set", obj=12)
d.addCallback(lambda res: d2)
d.addCallback(self._carolCalled)
d.addCallback(lambda res: d1)
return d

View File

@ -0,0 +1,297 @@
# -*- test-case-name: foolscap.test.test_interfaces -*-
from zope.interface import implementsOnly
from twisted.trial import unittest
from foolscap import schema, remoteinterface
from foolscap import RemoteInterface
from foolscap.remoteinterface import getRemoteInterface, RemoteMethodSchema
from foolscap.remoteinterface import RemoteInterfaceRegistry
from foolscap.tokens import Violation
from foolscap.referenceable import RemoteReference
from foolscap.test.common import TargetMixin
from foolscap.test.common import getRemoteInterfaceName, Target, RIMyTarget, \
RIMyTarget2, TargetWithoutInterfaces, IFoo, Foo, TypesTarget, RIDummy, \
DummyTarget
class Target2(Target):
implementsOnly(IFoo, RIMyTarget2)
class TestInterface(TargetMixin, unittest.TestCase):
def testTypes(self):
self.failUnless(isinstance(RIMyTarget,
remoteinterface.RemoteInterfaceClass))
self.failUnless(isinstance(RIMyTarget2,
remoteinterface.RemoteInterfaceClass))
def testRegister(self):
reg = RemoteInterfaceRegistry
self.failUnlessEqual(reg["RIMyTarget"], RIMyTarget)
self.failUnlessEqual(reg["RIMyTargetInterface2"], RIMyTarget2)
def testDuplicateRegistry(self):
try:
class RIMyTarget(RemoteInterface):
def foo(bar=int): return int
except remoteinterface.DuplicateRemoteInterfaceError:
pass
else:
self.fail("duplicate registration not caught")
def testInterface1(self):
# verify that we extract the right interfaces from a local object.
# also check that the registry stuff works.
self.setupBrokers()
rr, target = self.setupTarget(Target())
iface = getRemoteInterface(target)
self.failUnlessEqual(iface, RIMyTarget)
iname = getRemoteInterfaceName(target)
self.failUnlessEqual(iname, "RIMyTarget")
self.failUnlessIdentical(RemoteInterfaceRegistry["RIMyTarget"],
RIMyTarget)
rr, target = self.setupTarget(Target2())
iname = getRemoteInterfaceName(target)
self.failUnlessEqual(iname, "RIMyTargetInterface2")
self.failUnlessIdentical(\
RemoteInterfaceRegistry["RIMyTargetInterface2"], RIMyTarget2)
def testInterface2(self):
# verify that RemoteInterfaces have the right attributes
t = Target()
iface = getRemoteInterface(t)
self.failUnlessEqual(iface, RIMyTarget)
# 'add' is defined with 'def'
s1 = RIMyTarget['add']
self.failUnless(isinstance(s1, RemoteMethodSchema))
ok, s2 = s1.getKeywordArgConstraint("a")
self.failUnless(ok)
self.failUnless(isinstance(s2, schema.IntegerConstraint))
self.failUnless(s2.checkObject(12, False) == None)
self.failUnlessRaises(schema.Violation,
s2.checkObject, "string", False)
s3 = s1.getResponseConstraint()
self.failUnless(isinstance(s3, schema.IntegerConstraint))
# 'add1' is defined as a class attribute
s1 = RIMyTarget['add1']
self.failUnless(isinstance(s1, RemoteMethodSchema))
ok, s2 = s1.getKeywordArgConstraint("a")
self.failUnless(ok)
self.failUnless(isinstance(s2, schema.IntegerConstraint))
self.failUnless(s2.checkObject(12, False) == None)
self.failUnlessRaises(schema.Violation,
s2.checkObject, "string", False)
s3 = s1.getResponseConstraint()
self.failUnless(isinstance(s3, schema.IntegerConstraint))
s1 = RIMyTarget['join']
self.failUnless(isinstance(s1.getKeywordArgConstraint("a")[1],
schema.StringConstraint))
self.failUnless(isinstance(s1.getKeywordArgConstraint("c")[1],
schema.IntegerConstraint))
s3 = RIMyTarget['join'].getResponseConstraint()
self.failUnless(isinstance(s3, schema.StringConstraint))
s1 = RIMyTarget['disputed']
self.failUnless(isinstance(s1.getKeywordArgConstraint("a")[1],
schema.IntegerConstraint))
s3 = s1.getResponseConstraint()
self.failUnless(isinstance(s3, schema.IntegerConstraint))
def testInterface3(self):
t = TargetWithoutInterfaces()
iface = getRemoteInterface(t)
self.failIf(iface)
def testStack(self):
# when you violate your outbound schema, the Failure you get should
# have a stack trace that includes the actual callRemote invocation
self.setupBrokers()
rr, target = self.setupTarget(Target(), True)
d = rr.callRemote('add', "not a number", "oops")
def _check_failure(f):
s = f.getTraceback().split("\n")
for i in range(len(s)):
line = s[i]
#print line
if ("test/test_interfaces.py" in line
and i+1 < len(s)
and "rr.callRemote" in s[i+1]):
return # all good
print "failure looked like this:"
print f
self.fail("didn't see invocation of callRemote in stacktrace")
d.addCallbacks(lambda res: self.fail("hey, this was supposed to fail"),
_check_failure)
return d
class Types(TargetMixin, unittest.TestCase):
def setUp(self):
TargetMixin.setUp(self)
self.setupBrokers()
def deferredShouldFail(self, d, ftype=None, checker=None):
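# helper: assert that the Deferred errbacks. If 'ftype' is given, the
# failure must be of that type; if 'checker' is given, it is handed the
# Failure for custom assertions.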
if not ftype and not checker:
d.addCallbacks(lambda res:
self.fail("hey, this was supposed to fail"),
lambda f: None)
elif ftype and not checker:
d.addCallbacks(lambda res:
self.fail("hey, this was supposed to fail"),
lambda f: f.trap(ftype) or None)
else:
d.addCallbacks(lambda res:
self.fail("hey, this was supposed to fail"),
checker)
def testCall(self):
rr, target = self.setupTarget(Target(), True)
d = rr.callRemote('add', 3, 4) # enforces schemas
d.addCallback(lambda res: self.failUnlessEqual(res, 7))
return d
def testFail(self):
# make sure exceptions (and thus CopiedFailures) pass a schema check
rr, target = self.setupTarget(Target(), True)
d = rr.callRemote('fail')
self.deferredShouldFail(d, ftype=ValueError)
return d
def testNoneGood(self):
rr, target = self.setupTarget(TypesTarget(), True)
d = rr.callRemote('returns_none', True)
d.addCallback(lambda res: self.failUnlessEqual(res, None))
return d
def testNoneBad(self):
rr, target = self.setupTarget(TypesTarget(), True)
d = rr.callRemote('returns_none', False)
def _check_failure(f):
f.trap(Violation)
self.failUnlessIn("(in return value of <foolscap.test.common.TypesTarget object", str(f))
self.failUnlessIn(">.returns_none", str(f))
self.failUnlessIn("'not None' is not None", str(f))
self.deferredShouldFail(d, checker=_check_failure)
return d
def testTakesRemoteInterfaceGood(self):
rr, target = self.setupTarget(TypesTarget(), True)
d = rr.callRemote('takes_remoteinterface', DummyTarget())
d.addCallback(lambda res: self.failUnlessEqual(res, "good"))
return d
def testTakesRemoteInterfaceBad(self):
rr, target = self.setupTarget(TypesTarget(), True)
# takes_remoteinterface is specified to accept an RIDummy
d = rr.callRemote('takes_remoteinterface', 12)
def _check_failure(f):
f.trap(Violation)
self.failUnlessIn("RITypes.takes_remoteinterface(a=))", str(f))
self.failUnlessIn("'12' is not a Referenceable", str(f))
self.deferredShouldFail(d, checker=_check_failure)
return d
def testTakesRemoteInterfaceBad2(self):
rr, target = self.setupTarget(TypesTarget(), True)
# takes_remoteinterface is specified to accept an RIDummy
d = rr.callRemote('takes_remoteinterface', TypesTarget())
def _check_failure(f):
f.trap(Violation)
self.failUnlessIn("RITypes.takes_remoteinterface(a=))", str(f))
self.failUnlessIn(" does not provide RemoteInterface ", str(f))
self.failUnlessIn("foolscap.test.common.RIDummy", str(f))
self.deferredShouldFail(d, checker=_check_failure)
return d
def failUnlessRemoteProvides(self, obj, riface):
# TODO: really, I want to just be able to say:
# self.failUnless(RIDummy.providedBy(res))
iface = obj.tracker.interface
# TODO: this test probably doesn't handle subclasses of
# RemoteInterface, which might be useful (if it even works)
if not iface or iface != riface:
self.fail("%s does not provide RemoteInterface %s" % (obj, riface))
def testReturnsRemoteInterfaceGood(self):
rr, target = self.setupTarget(TypesTarget(), True)
d = rr.callRemote('returns_remoteinterface', 1)
def _check(res):
self.failUnless(isinstance(res, RemoteReference))
#self.failUnless(RIDummy.providedBy(res))
self.failUnlessRemoteProvides(res, RIDummy)
d.addCallback(_check)
return d
def testReturnsRemoteInterfaceBad(self):
rr, target = self.setupTarget(TypesTarget(), True)
# returns_remoteinterface is specified to return an RIDummy
d = rr.callRemote('returns_remoteinterface', 0)
def _check_failure(f):
f.trap(Violation)
self.failUnlessIn("(in return value of <foolscap.test.common.TypesTarget object at ", str(f))
self.failUnlessIn(">.returns_remoteinterface)", str(f))
self.failUnlessIn("'15' is not a Referenceable", str(f))
self.deferredShouldFail(d, checker=_check_failure)
return d
def testReturnsRemoteInterfaceBad2(self):
rr, target = self.setupTarget(TypesTarget(), True)
# returns_remoteinterface is specified to return an RIDummy
d = rr.callRemote('returns_remoteinterface', -1)
def _check_failure(f):
f.trap(Violation)
self.failUnlessIn("(in return value of <foolscap.test.common.TypesTarget object at ", str(f))
self.failUnlessIn(">.returns_remoteinterface)", str(f))
self.failUnlessIn("<foolscap.test.common.TypesTarget object ",
str(f))
self.failUnlessIn(" does not provide RemoteInterface ", str(f))
self.failUnlessIn("foolscap.test.common.RIDummy", str(f))
self.deferredShouldFail(d, checker=_check_failure)
return d
class LocalTypes(TargetMixin, unittest.TestCase):
def setUp(self):
TargetMixin.setUp(self)
self.setupBrokers()
def testTakesInterfaceGood(self):
rr, target = self.setupTarget(TypesTarget(), True)
d = rr.callRemote('takes_interface', DummyTarget())
d.addCallback(lambda res: self.failUnlessEqual(res, "good"))
return d
def testTakesInterfaceBad(self):
rr, target = self.setupTarget(TypesTarget(), True)
d = rr.callRemote('takes_interface', Foo())
def _check_failure(f):
f.trap(Violation)
print f
self.deferredShouldFail(d, checker=_check_failure)
return d
def testReturnsInterfaceGood(self):
rr, target = self.setupTarget(TypesTarget(), True)
d = rr.callRemote('returns_interface', True)
def _check(res):
#self.failUnless(isinstance(res, RemoteReference))
self.failUnless(IFoo.providedBy(res))
d.addCallback(_check)
return d
def testReturnsInterfaceBad(self):
rr, target = self.setupTarget(TypesTarget(), True)
d = rr.callRemote('returns_interface', False)
def _check_failure(f):
f.trap(Violation)
print f
self.deferredShouldFail(d, checker=_check_failure)
return d
del LocalTypes # TODO: how could these tests possibly work? we need Guards.

View File

@ -0,0 +1,149 @@
from twisted.trial import unittest
from twisted.internet import reactor, defer
from twisted.python.failure import Failure
from foolscap import Tub, UnauthenticatedTub, DeadReferenceError
from foolscap.broker import Broker
from foolscap.eventual import flushEventualQueue
from foolscap.test.common import TargetWithoutInterfaces
crypto_available = False
try:
from foolscap import crypto
crypto_available = crypto.available
except ImportError:
pass
# we use authenticated tubs if possible. If crypto is not available, fall
# back to unauthenticated ones
GoodEnoughTub = UnauthenticatedTub
if crypto_available:
GoodEnoughTub = Tub
from twisted.python import log
class PingCountingBroker(Broker):
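# Broker subclass that counts the keepalive messages it sends, so the tests
# below can assert that PING/PONG traffic actually happened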
pings = 0
pongs = 0
def sendPING(self, number=0):
self.pings += 1
log.msg("PING: %d" % number)
Broker.sendPING(self, number)
def sendPONG(self, number):
self.pongs += 1
log.msg("PONG: %d" % number)
Broker.sendPONG(self, number)
class Keepalives(unittest.TestCase):
def setUp(self):
s0, s1 = self.services = [GoodEnoughTub(), GoodEnoughTub()]
s0.brokerClass = PingCountingBroker
s1.brokerClass = PingCountingBroker
s0.startService()
s1.startService()
l = s0.listenOn("tcp:0:interface=127.0.0.1")
s0.setLocation("127.0.0.1:%d" % l.getPortnum())
self.target = TargetWithoutInterfaces()
public_url = s0.registerReference(self.target, "target")
self.public_url = public_url
def tearDown(self):
d = defer.DeferredList([s.stopService() for s in self.services])
d.addCallback(flushEventualQueue)
return d
def getRef(self):
d = self.services[1].getReference(self.public_url)
return d
def stall(self, res, timeout):
d = defer.Deferred()
reactor.callLater(timeout, d.callback, res)
return d
def testSendPings(self):
# establish a connection with very short idle timers, to provoke
# plenty of PINGs and PONGs
self.services[0].setOption("keepaliveTimeout", 0.1)
self.services[1].setOption("keepaliveTimeout", 0.1)
# but we don't set disconnectTimeout, so we'll never
# actually drop the connection
d = self.getRef()
d.addCallback(self.stall, 2)
def _count_pings(rref):
b = rref.tracker.broker
# we're only watching one side here (the initiating side,
# services[0]). Either side could produce a PING that the other
# side responds to with a PONG, depending upon how the timers
# interleave. And a side that hears a PING will not bother to
# send a PING of its own. So only count the sum of the two kinds
# of messages. What I really care about is that the timers are
# restarted after the first timeout, so that more than one
# message per side is being generated. If we have no scheduling
# latency and high-resolution clocks, we expect to see about 10
# or 20 ping+pongs.
self.failUnless(b.pings + b.pongs > 4,
"b.pings=%d, b.pongs=%d" % (b.pings, b.pongs))
# and the connection should still be alive and usable
return rref.callRemote("add", 1, 2)
d.addCallback(_count_pings)
def _check_add(res):
self.failUnlessEqual(res, 3)
d.addCallback(_check_add)
return d
def do_testDisconnect(self, which):
# establish a connection with a very short disconnect timeout, so it
# will be abandoned. We only set this on one side, since either the
# initiating side or the receiving side should be able to timeout the
# connection. Because we don't set keepaliveTimeout, there will be no
# keepalives, so if we don't use the connection for 0.5 seconds, it
# will be dropped.
self.services[which].setOption("disconnectTimeout", 0.5)
d = self.getRef()
d.addCallback(self.stall, 2)
def _check_ref(rref):
d2 = rref.callRemote("add", 1, 2)
def _check(res):
self.failUnless(isinstance(res, Failure))
self.failUnless(res.check(DeadReferenceError))
d2.addBoth(_check)
return d2
d.addCallback(_check_ref)
return d
def testDisconnect0(self):
return self.do_testDisconnect(0)
def testDisconnect1(self):
return self.do_testDisconnect(1)
def do_testNoDisconnect(self, which):
# establish a connection with a short disconnect timeout, but an even
# shorter keepalive timeout, so the connection should stay alive. We
# only provide the keepalives on one side, but enforce the disconnect
# timeout on both: just one side doing keepalives should keep the
# whole connection alive.
self.services[which].setOption("keepaliveTimeout", 0.1)
self.services[0].setOption("disconnectTimeout", 1.0)
self.services[1].setOption("disconnectTimeout", 1.0)
d = self.getRef()
d.addCallback(self.stall, 2)
def _check(rref):
# the connection should still be alive
return rref.callRemote("add", 1, 2)
d.addCallback(_check)
def _check_add(res):
self.failUnlessEqual(res, 3)
d.addCallback(_check_add)
return d
def testNoDisconnect0(self):
return self.do_testNoDisconnect(0)
def testNoDisconnect1(self):
return self.do_testNoDisconnect(1)

View File

@ -0,0 +1,82 @@
from twisted.trial import unittest
from twisted.internet import defer
import foolscap
from foolscap.test.common import HelperTarget
from foolscap.eventual import flushEventualQueue
crypto_available = False
try:
from foolscap import crypto
crypto_available = crypto.available
except ImportError:
pass
class ConnectToSelf(unittest.TestCase):
def setUp(self):
self.services = []
def requireCrypto(self):
if not crypto_available:
raise unittest.SkipTest("crypto not available")
def startTub(self, tub):
self.services = [tub]
for s in self.services:
s.startService()
l = s.listenOn("tcp:0:interface=127.0.0.1")
s.setLocation("127.0.0.1:%d" % l.getPortnum())
def tearDown(self):
d = defer.DeferredList([s.stopService() for s in self.services])
d.addCallback(flushEventualQueue)
return d
def testConnectUnauthenticated(self):
tub = foolscap.UnauthenticatedTub()
self.startTub(tub)
target = HelperTarget("bob")
target.obj = "unset"
url = tub.registerReference(target)
# can we connect to a reference on our own Tub?
d = tub.getReference(url)
def _connected(ref):
return ref.callRemote("set", 12)
d.addCallback(_connected)
def _check(res):
self.failUnlessEqual(target.obj, 12)
d.addCallback(_check)
def _connect_again(res):
target.obj = None
return tub.getReference(url)
d.addCallback(_connect_again)
d.addCallback(_connected)
d.addCallback(_check)
return d
def testConnectAuthenticated(self):
self.requireCrypto()
tub = foolscap.Tub()
self.startTub(tub)
target = HelperTarget("bob")
target.obj = "unset"
url = tub.registerReference(target)
# can we connect to a reference on our own Tub?
d = tub.getReference(url)
def _connected(ref):
return ref.callRemote("set", 12)
d.addCallback(_connected)
def _check(res):
self.failUnlessEqual(target.obj, 12)
d.addCallback(_check)
def _connect_again(res):
target.obj = None
return tub.getReference(url)
d.addCallback(_connect_again)
d.addCallback(_connected)
d.addCallback(_check)
return d

View File

@ -0,0 +1,930 @@
from twisted.trial import unittest
from twisted.internet import protocol, defer, reactor
from twisted.application import internet
from foolscap import pb, negotiate, tokens
from foolscap import Referenceable, Tub, UnauthenticatedTub, BananaError
from foolscap.eventual import flushEventualQueue
crypto_available = False
try:
from foolscap import crypto
crypto_available = crypto.available
except ImportError:
pass
# we use authenticated tubs if possible. If crypto is not available, fall
# back to unauthenticated ones
GoodEnoughTub = UnauthenticatedTub
if crypto_available:
GoodEnoughTub = Tub
# this is tubID 3hemthez7rvgvyhjx2n5kdj7mcyar3yt
certData_low = \
"""-----BEGIN CERTIFICATE-----
MIIBnjCCAQcCAgCEMA0GCSqGSIb3DQEBBAUAMBcxFTATBgNVBAMUDG5ld3BiX3Ro
aW5neTAeFw0wNjExMjYxODUxMTBaFw0wNzExMjYxODUxMTBaMBcxFTATBgNVBAMU
DG5ld3BiX3RoaW5neTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA1DuK9NoF
fiSreA8rVqYPAjNiUqFelAAYPgnJR92Jry1J/dPA3ieNcCazbjVeKUFjd6+C30XR
APhajsAJFiJdnmgrtVILNrpZDC/vISKQoAmoT9hP/cMqFm8vmUG/+AXO76q63vfH
UmabBVDNTlM8FJpbm9M26cFMrH45G840gA0CAwEAATANBgkqhkiG9w0BAQQFAAOB
gQBCtjgBbF/s4w/16Y15lkTAO0xt8ZbtrvcsFPGTXeporonejnNaJ/aDbJt8Y6nY
ypJ4+LTT3UQwwvqX5xEuJmFhmXGsghRGypbU7Zxw6QZRppBRqz8xMS+y82mMZRQp
ezP+BiTvnoWXzDEP1233oYuELVgOVnHsj+rC017Ykfd7fw==
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
MIICXQIBAAKBgQDUO4r02gV+JKt4DytWpg8CM2JSoV6UABg+CclH3YmvLUn908De
J41wJrNuNV4pQWN3r4LfRdEA+FqOwAkWIl2eaCu1Ugs2ulkML+8hIpCgCahP2E/9
wyoWby+ZQb/4Bc7vqrre98dSZpsFUM1OUzwUmlub0zbpwUysfjkbzjSADQIDAQAB
AoGBAIvxTykw8dpBt8cMyZjzGoZq93Rg74pLnbCap1x52iXmiRmUHWLfVcYT3tDW
4+X0NfBfjL5IvQ4UtTHXsqYjtvJfXWazYYa4INv5wKDBCd5a7s1YQ8R7mnhlBbRd
nqZ6RpGuQbd3gTGZCkUdbHPSqdCPAjryH9mtWoQZIepcIcoJAkEA77gjO+MPID6v
K6lf8SuFXHDOpaNOAiMlxVnmyQYQoF0PRVSpKOQf83An7R0S/jN3C7eZ6fPbZcyK
SFVktHhYwwJBAOKlgndbSkVzkQCMcuErGZT1AxHNNHSaDo8X3C47UbP3nf60SkxI
boqmpuPvEPUB9iPQdiNZGDU04+FUhe5Vtu8CQHDQHXS/hIzOMy2/BfG/Y4F/bSCy
W7HRzKK1jlCoVAbEBL3B++HMieTMsV17Q0bx/WI8Q2jAZE3iFmm4Fi6APHUCQCMi
5Yb7cBg0QlaDb4vY0q51DXTFC0zIVVl5qXjBWXk8+hFygdIxqHF2RIkxlr9k/nOu
7aGtPkOBX5KfN+QrBaECQQCltPE9YjFoqPezfyvGZoWAKb8bWzo958U3uVBnCw2f
Fs8AQDgI/9gOUXxXno51xQSdCnJLQJ8lThRUa6M7/F1B
-----END RSA PRIVATE KEY-----
"""
# this is tubID 6cxxohyb5ysw6ftpwprbzffxrghbfopm
certData_high = \
"""-----BEGIN CERTIFICATE-----
MIIBnjCCAQcCAgCEMA0GCSqGSIb3DQEBBAUAMBcxFTATBgNVBAMUDG5ld3BiX3Ro
aW5neTAeFw0wNjExMjYxODUxNDFaFw0wNzExMjYxODUxNDFaMBcxFTATBgNVBAMU
DG5ld3BiX3RoaW5neTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEArfrebvt3
8FE3kKoscY2J/8A4J6CUUUiM7/gl00UvGvvjfdaWbsj4w0o8W2tE0X8Zce3dScSl
D6qVXy6AEc4Flqs0q02w9uNzcdDY6LF3NiK0Lq+JP4OjJeImUBe8wUU0RQxqf/oA
GhgHEZhTp6aAdxBXZFOVDloiW6iqrKH/thcCAwEAATANBgkqhkiG9w0BAQQFAAOB
gQBXi+edp3iz07wxcRztvXtTAjY/9gUwlfa6qSTg/cGqbF0OPa+sISBOFRnnC8qM
ENexlkpiiD4Oyj+UtO5g2CMz0E62cTJTqz6PfexnmKIGwYjq5wZ2tzOrB9AmAzLv
TQQ9CdcKBXLd2GCToh8hBvjyyFwj+yTSbq+VKLMFkBY8Rg==
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
MIICXgIBAAKBgQCt+t5u+3fwUTeQqixxjYn/wDgnoJRRSIzv+CXTRS8a++N91pZu
yPjDSjxba0TRfxlx7d1JxKUPqpVfLoARzgWWqzSrTbD243Nx0NjosXc2IrQur4k/
g6Ml4iZQF7zBRTRFDGp/+gAaGAcRmFOnpoB3EFdkU5UOWiJbqKqsof+2FwIDAQAB
AoGBAKrU3Vp+Y2u+Y+ARqKgrQai1tq36eAhEQ9dRgtqrYTCOyvcCIR5RCirAFvnx
H1bSBUsgNBw+EZGLfzZBs5FICaUjBOQYBYzfxux6+jlGvdl7idfHs7zogyEYBqye
0VkwzZ0mVXM2ujOD/z/ANkdEn2fGj/VwAYDlfvlyNZMckHp5AkEA5sc1VG3snWmG
lz4967MMzJ7XNpZcTvLEspjpH7hFbnXUHIQ4wPYOP7dhnVvKX1FiOQ8+zXVYDDGB
SK1ABzpc+wJBAMD+imwAhHNBbOb3cPYzOz6XRZaetvep3GfE2wKr1HXP8wchNXWj
Ijq6fJinwPlDugHaeNnfb+Dydd+YEiDTSJUCQDGCk2Jlotmyhfl0lPw4EYrkmO9R
GsSlOKXIQFtZwSuNg9AKXdKn9y6cPQjxZF1GrHfpWWPixNz40e+xm4bxcnkCQQCs
+zkspqYQ/CJVPpHkSnUem83GvAl5IKmp5Nr8oPD0i+fjixN0ljyW8RG+bhXcFaVC
BgTuG4QW1ptqRs5w14+lAkEAuAisTPUDsoUczywyoBbcFo3SVpFPNeumEXrj4MD/
uP+TxgBi/hNYaR18mTbKD4mzVSjqyEeRC/emV3xUpUrdqg==
-----END RSA PRIVATE KEY-----
"""
class Target(Referenceable):
def __init__(self):
self.calls = 0
def remote_call(self):
self.calls += 1
def getPage(url):
"""This is a variant of the standard twisted.web.client.getPage, which is
smart enough to shut off its connection when its done (even if it fails).
"""
from twisted.web import client
scheme, host, port, path = client._parse(url)
factory = client.HTTPClientFactory(url)
c = reactor.connectTCP(host, port, factory)
def shutdown(res, c):
c.disconnect()
return res
factory.deferred.addBoth(shutdown, c)
return factory.deferred
class OneTimeDeferred(defer.Deferred):
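# a Deferred that silently ignores every callback() after the first, so a
# debug hook that fires twice (e.g. timeout then connectionLost) does not
# raise AlreadyCalledError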
def callback(self, res):
if self.called:
return
return defer.Deferred.callback(self, res)
class BaseMixin:
def requireCrypto(self):
if not crypto_available:
raise unittest.SkipTest("crypto not available")
def setUp(self):
self.connections = []
self.servers = []
self.services = []
def tearDown(self):
for c in self.connections:
if c.transport:
c.transport.loseConnection()
dl = []
for s in self.servers:
dl.append(defer.maybeDeferred(s.stopListening))
for s in self.services:
dl.append(defer.maybeDeferred(s.stopService))
d = defer.DeferredList(dl)
d.addCallback(self._checkListeners)
d.addCallback(flushEventualQueue)
return d
def _checkListeners(self, res):
self.failIf(pb.Listeners)
def stall(self, res, timeout):
d = defer.Deferred()
reactor.callLater(timeout, d.callback, res)
return d
def makeServer(self, authenticated, options={}, listenerOptions={}):
if authenticated:
self.tub = tub = Tub(options=options)
else:
self.tub = tub = UnauthenticatedTub(options=options)
tub.startService()
self.services.append(tub)
l = tub.listenOn("tcp:0", listenerOptions)
tub.setLocation("127.0.0.1:%d" % l.getPortnum())
self.target = Target()
return tub.registerReference(self.target), l.getPortnum()
def makeSpecificServer(self, certData,
negotiationClass=negotiate.Negotiation):
self.tub = tub = Tub(certData=certData)
tub.negotiationClass = negotiationClass
tub.startService()
self.services.append(tub)
l = tub.listenOn("tcp:0")
tub.setLocation("127.0.0.1:%d" % l.getPortnum())
self.target = Target()
return tub.registerReference(self.target), l.getPortnum()
def makeNullServer(self):
f = protocol.Factory()
f.protocol = protocol.Protocol # discards everything
s = internet.TCPServer(0, f)
s.startService()
self.services.append(s)
portnum = s._port.getHost().port
return portnum
def makeHTTPServer(self):
try:
from twisted.web import server, resource, static
except ImportError:
raise unittest.SkipTest('this test needs twisted.web')
root = resource.Resource()
root.putChild("", static.Data("hello\n", "text/plain"))
s = internet.TCPServer(0, server.Site(root))
s.startService()
self.services.append(s)
portnum = s._port.getHost().port
return portnum
def connectClient(self, portnum):
tub = UnauthenticatedTub()
tub.startService()
self.services.append(tub)
d = tub.getReference("pb://127.0.0.1:%d/hello" % portnum)
return d
def connectHTTPClient(self, portnum):
return getPage("http://127.0.0.1:%d/foo" % portnum)
class Basic(BaseMixin, unittest.TestCase):
def testOptions(self):
url, portnum = self.makeServer(False, {'opt': 12})
self.failUnlessEqual(self.tub.options['opt'], 12)
def testAuthenticated(self):
if not crypto_available:
raise unittest.SkipTest("crypto not available")
url, portnum = self.makeServer(True)
client = Tub()
client.startService()
self.services.append(client)
d = client.getReference(url)
return d
testAuthenticated.timeout = 10
def testUnauthenticated(self):
url, portnum = self.makeServer(False)
client = UnauthenticatedTub()
client.startService()
self.services.append(client)
d = client.getReference(url)
return d
testUnauthenticated.timeout = 10
def testHalfAuthenticated1(self):
if not crypto_available:
raise unittest.SkipTest("crypto not available")
url, portnum = self.makeServer(True)
client = UnauthenticatedTub()
client.startService()
self.services.append(client)
d = client.getReference(url)
return d
testHalfAuthenticated1.timeout = 10
def testHalfAuthenticated2(self):
if not crypto_available:
raise unittest.SkipTest("crypto not available")
url, portnum = self.makeServer(False)
client = Tub()
client.startService()
self.services.append(client)
d = client.getReference(url)
return d
testHalfAuthenticated2.timeout = 10
class Versus(BaseMixin, unittest.TestCase):
def testVersusHTTPServerAuthenticated(self):
if not crypto_available:
raise unittest.SkipTest("crypto not available")
portnum = self.makeHTTPServer()
client = Tub()
client.startService()
self.services.append(client)
url = "pb://1234@127.0.0.1:%d/target" % portnum
d = client.getReference(url)
d.addCallbacks(lambda res: self.fail("this is supposed to fail"),
lambda f: f.trap(BananaError))
# the HTTP server needs a moment to notice that the connection has
# gone away. Without this, trial flunks the test because of the
# leftover HTTP server socket.
d.addCallback(self.stall, 1)
return d
testVersusHTTPServerAuthenticated.timeout = 10
def testVersusHTTPServerUnauthenticated(self):
portnum = self.makeHTTPServer()
client = UnauthenticatedTub()
client.startService()
self.services.append(client)
url = "pbu://127.0.0.1:%d/target" % portnum
d = client.getReference(url)
d.addCallbacks(lambda res: self.fail("this is supposed to fail"),
lambda f: f.trap(BananaError))
d.addCallback(self.stall, 1) # same reason as above
return d
testVersusHTTPServerUnauthenticated.timeout = 10
def testVersusHTTPClientUnauthenticated(self):
try:
from twisted.web import error
except ImportError:
raise unittest.SkipTest('this test needs twisted.web')
url, portnum = self.makeServer(False)
d = self.connectHTTPClient(portnum)
d.addCallbacks(lambda res: self.fail("this is supposed to fail"),
lambda f: f.trap(error.Error))
return d
testVersusHTTPClientUnauthenticated.timeout = 10
def testVersusHTTPClientAuthenticated(self):
if not crypto_available:
raise unittest.SkipTest("crypto not available")
try:
from twisted.web import error
except ImportError:
raise unittest.SkipTest('this test needs twisted.web')
url, portnum = self.makeServer(True)
d = self.connectHTTPClient(portnum)
d.addCallbacks(lambda res: self.fail("this is supposed to fail"),
lambda f: f.trap(error.Error))
return d
testVersusHTTPClientAuthenticated.timeout = 10
def testNoConnection(self):
url, portnum = self.makeServer(False)
d = self.tub.stopService()
d.addCallback(self._testNoConnection_1, url)
return d
testNoConnection.timeout = 10
def _testNoConnection_1(self, res, url):
self.services.remove(self.tub)
client = UnauthenticatedTub()
client.startService()
self.services.append(client)
d = client.getReference(url)
d.addCallbacks(lambda res: self.fail("this is supposed to fail"),
self._testNoConnection_fail)
return d
def _testNoConnection_fail(self, why):
from twisted.internet import error
self.failUnless(why.check(error.ConnectionRefusedError))
def testClientTimeout(self):
portnum = self.makeNullServer()
# lower the connection timeout to 1 second
client = UnauthenticatedTub(options={'connect_timeout': 1})
client.startService()
self.services.append(client)
url = "pbu://127.0.0.1:%d/target" % portnum
d = client.getReference(url)
d.addCallbacks(lambda res: self.fail("hey! this is supposed to fail"),
lambda f: f.trap(tokens.NegotiationError))
return d
testClientTimeout.timeout = 10
def testServerTimeout(self):
# lower the server-side negotiation timeout to 1 second
# the debug callback gets fired each time Negotiate.negotiationFailed
# is fired, which happens twice (once for the timeout, once for the
# resulting connectionLost), so we have to make sure the Deferred is
# only fired once.
d = OneTimeDeferred()
options = {'server_timeout': 1,
'debug_negotiationFailed_cb': d.callback
}
url, portnum = self.makeServer(False, listenerOptions=options)
f = protocol.ClientFactory()
f.protocol = protocol.Protocol # discards everything
s = internet.TCPClient("127.0.0.1", portnum, f)
s.startService()
self.services.append(s)
d.addCallbacks(lambda res: self.fail("hey! this is supposed to fail"),
lambda f: self._testServerTimeout_1)
return d
testServerTimeout.timeout = 10
def _testServerTimeout_1(self, f):
self.failUnless(f.check(tokens.NegotiationError))
self.failUnlessEqual(f.value.args[0], "negotiation timeout")
class Parallel(BaseMixin, unittest.TestCase):
# testParallel*: listen on two separate ports, set up a URL with both
# ports in the locationHints field, then connect. PB is supposed to
# connect to both ports at the same time, using whichever one completes
# negotiation first. The other connection is supposed to be dropped
# silently.
# the cases we need to cover are enumerated by the possible states that
# connection[1] can be in when connection[0] (the winning connection)
# completes negotiation. Those states are:
# 1: connectTCP initiated and failed
# 2: connectTCP initiated, but not yet established
# 3: connection established, but still in the PLAINTEXT phase
# (sent GET, waiting for the 101 Switching Protocols)
# 4: still in ENCRYPTED phase: sent Hello, waiting for their Hello
# 5: in DECIDING phase (non-master), waiting for their decision
#
def makeServers(self, tubopts={}, lo1={}, lo2={}):
self.requireCrypto()
self.tub = tub = Tub(options=tubopts)
tub.startService()
self.services.append(tub)
l1 = tub.listenOn("tcp:0", lo1)
l2 = tub.listenOn("tcp:0", lo2)
self.p1, self.p2 = l1.getPortnum(), l2.getPortnum()
tub.setLocation("127.0.0.1:%d" % l1.getPortnum(),
"127.0.0.1:%d" % l2.getPortnum())
self.target = Target()
return tub.registerReference(self.target)
def connect(self, url, authenticated=True):
self.clientPhases = []
opts = {"debug_stall_second_connection": True,
"debug_gatherPhases": self.clientPhases}
if authenticated:
self.client = client = Tub(options=opts)
else:
self.client = client = UnauthenticatedTub(options=opts)
client.startService()
self.services.append(client)
d = client.getReference(url)
return d
def checkConnectedToFirstListener(self, rr, targetPhases):
# verify that we connected to the first listener, and not the second
self.failUnlessEqual(rr.tracker.broker.transport.getPeer().port,
self.p1)
# then pause a moment for the other connection to finish giving up
d = self.stall(rr, 0.5)
# and verify that we finished during the phase that we meant to test
d.addCallback(lambda res:
self.failUnlessEqual(self.clientPhases, targetPhases,
"negotiation was abandoned in "
"the wrong phase"))
return d
def test1(self):
# in this test, we stop listening on the second port, so the second
# connection will terminate with an ECONNREFUSED before the first one
# completes. We also slow down the first port so we're sure to
# recognize the failed second connection before starting negotiation
# on the first.
url = self.makeServers(lo1={'debug_slow_connectionMade': True})
d = self.tub.stopListeningOn(self.tub.getListeners()[1])
d.addCallback(self._test1_1, url)
return d
def _test1_1(self, res, url):
d = self.connect(url)
d.addCallback(self.checkConnectedToFirstListener, [])
#d.addCallback(self.stall, 1)
return d
test1.timeout = 10
def test2(self):
# slow down the second listener so that the first one is used. The
# second listener will be connected but it will not respond to
# negotiation for a moment, allowing the first connection to
# complete.
url = self.makeServers(lo2={'debug_slow_connectionMade': True})
d = self.connect(url)
d.addCallback(self.checkConnectedToFirstListener,
[negotiate.PLAINTEXT])
#d.addCallback(self.stall, 1)
return d
test2.timeout = 10
def test3(self):
# have the second listener stall just before it does
        # sendPlaintextServer(). This ensures the second connection will be
# waiting in the PLAINTEXT phase when the first connection completes.
url = self.makeServers(lo2={'debug_slow_sendPlaintextServer': True})
d = self.connect(url)
d.addCallback(self.checkConnectedToFirstListener,
[negotiate.PLAINTEXT])
return d
test3.timeout = 10
def test4(self):
# stall the second listener just before it sends the Hello.
        # This ensures the second connection will be waiting in the ENCRYPTED
# phase when the first connection completes.
url = self.makeServers(lo2={'debug_slow_sendHello': True})
d = self.connect(url)
d.addCallback(self.checkConnectedToFirstListener,
[negotiate.ENCRYPTED])
#d.addCallback(self.stall, 1)
return d
test4.timeout = 10
def test5(self):
# stall the second listener just before it sends the decision. This
        # ensures the second connection will be waiting in the DECIDING phase
# when the first connection completes.
# note: this requires that the listener winds up as the master. We
# force this by connecting from an unauthenticated Tub.
url = self.makeServers(lo2={'debug_slow_sendDecision': True})
d = self.connect(url, authenticated=False)
d.addCallback(self.checkConnectedToFirstListener,
[negotiate.DECIDING])
return d
test5.timeout = 10
class CrossfireMixin(BaseMixin):
# testSimultaneous*: similar to Parallel, but connection[0] is initiated
# in the opposite direction. This is the case when two Tubs initiate
# connections to each other at the same time.
tub1IsMaster = False
def makeServers(self, t1opts={}, t2opts={}, lo1={}, lo2={},
tubAauthenticated=True, tubBauthenticated=True):
if tubAauthenticated or tubBauthenticated:
self.requireCrypto()
# first we create two Tubs
if tubAauthenticated:
a = Tub(options=t1opts)
else:
a = UnauthenticatedTub(options=t1opts)
        if tubBauthenticated:
            b = Tub(options=t2opts)
        else:
            b = UnauthenticatedTub(options=t2opts)
# then we figure out which one will be the master, and call it tub1
if a.tubID > b.tubID:
# a is the master
tub1,tub2 = a,b
else:
tub1,tub2 = b,a
if not self.tub1IsMaster:
tub1,tub2 = tub2,tub1
self.tub1 = tub1
self.tub2 = tub2
# now fix up the options and everything else
self.tub1phases = []
t1opts['debug_gatherPhases'] = self.tub1phases
tub1.options = t1opts
self.tub2phases = []
t2opts['debug_gatherPhases'] = self.tub2phases
tub2.options = t2opts
# connection[0], the winning connection, will be from tub1 to tub2
tub1.startService()
self.services.append(tub1)
l1 = tub1.listenOn("tcp:0", lo1)
tub1.setLocation("127.0.0.1:%d" % l1.getPortnum())
self.target1 = Target()
self.url1 = tub1.registerReference(self.target1)
# connection[1], the abandoned connection, will be from tub2 to tub1
tub2.startService()
self.services.append(tub2)
l2 = tub2.listenOn("tcp:0", lo2)
tub2.setLocation("127.0.0.1:%d" % l2.getPortnum())
self.target2 = Target()
self.url2 = tub2.registerReference(self.target2)
def connect(self):
# initiate connection[1] from tub2 to tub1, which will stall (but the
# actual getReference will eventually succeed once the
# reverse-direction connection is established)
d1 = self.tub2.getReference(self.url1)
# give it a moment to get to the point where it stalls
d = self.stall(None, 0.1)
d.addCallback(self._connect, d1)
return d, d1
def _connect(self, res, d1):
# now initiate connection[0], from tub1 to tub2
d2 = self.tub1.getReference(self.url2)
return d2
def checkConnectedViaReverse(self, rref, targetPhases):
# assert that connection[0] (from tub1 to tub2) is actually in use.
# This connection uses a per-client allocated port number for the
# tub1 side, and the tub2 Listener's port for the tub2 side.
# Therefore tub1's Broker (as used by its RemoteReference) will have
# a far-end port number that should match tub2's Listener.
self.failUnlessEqual(rref.tracker.broker.transport.getPeer().port,
self.tub2.getListeners()[0].getPortnum())
# in addition, connection[1] should have been abandoned during a
# specific phase.
self.failUnlessEqual(self.tub2phases, targetPhases)
class CrossfireReverse(CrossfireMixin, unittest.TestCase):
# just like the following Crossfire except that tub2 is the master, just
# in case it makes a difference somewhere
tub1IsMaster = False
def test1(self):
# in this test, tub2 isn't listening at all. So not only will
# connection[1] fail, the tub2.getReference that uses it will fail
# too (whereas in all other tests, connection[1] is abandoned but
# tub2.getReference succeeds)
self.makeServers(lo1={'debug_slow_connectionMade': True})
d = self.tub2.stopListeningOn(self.tub2.getListeners()[0])
d.addCallback(self._test1_1)
return d
def _test1_1(self, res):
d,d1 = self.connect()
d.addCallbacks(lambda res: self.fail("hey! this is supposed to fail"),
self._test1_2, errbackArgs=(d1,))
return d
def _test1_2(self, why, d1):
from twisted.internet import error
self.failUnless(why.check(error.ConnectionRefusedError))
# but now the other getReference should succeed
return d1
test1.timeout = 10
def test2(self):
self.makeServers(lo1={'debug_slow_connectionMade': True})
d,d1 = self.connect()
d.addCallback(self.checkConnectedViaReverse, [negotiate.PLAINTEXT])
d.addCallback(lambda res: d1) # other getReference should work too
return d
test2.timeout = 10
def test3(self):
self.makeServers(lo1={'debug_slow_sendPlaintextServer': True})
d,d1 = self.connect()
d.addCallback(self.checkConnectedViaReverse, [negotiate.PLAINTEXT])
d.addCallback(lambda res: d1) # other getReference should work too
return d
test3.timeout = 10
def test4(self):
self.makeServers(lo1={'debug_slow_sendHello': True})
d,d1 = self.connect()
d.addCallback(self.checkConnectedViaReverse, [negotiate.ENCRYPTED])
d.addCallback(lambda res: d1) # other getReference should work too
return d
test4.timeout = 10
class Crossfire(CrossfireReverse):
tub1IsMaster = True
def test5(self):
# this is the only test where we rely upon the fact that
# makeServers() always puts the higher-numbered Tub (which will be
# the master) in self.tub1
# connection[1] (the abandoned connection) is started from tub2 to
# tub1. It connects, begins negotiation (tub1 is the master), but
# then is stalled because we've added the debug_slow_sendDecision
# flag to tub1's Listener. That allows connection[0] to begin from
# tub1 to tub2, which is *not* stalled (because we added the slowdown
# flag to the Listener's options, not tub1.options), so it completes
# normally. When connection[1] is unpaused and hits switchToBanana,
# it discovers that it already has a Broker in place, and the
# connection is abandoned.
self.makeServers(lo1={'debug_slow_sendDecision': True})
d,d1 = self.connect()
d.addCallback(self.checkConnectedViaReverse, [negotiate.DECIDING])
d.addCallback(lambda res: d1) # other getReference should work too
return d
test5.timeout = 10
# TODO: some of these tests cause the TLS connection to be abandoned, and it
# looks like TLS sockets don't shut down very cleanly. I see connectionLost
# getting called with the following error (instead of a normal ConnectionDone
# exception):
# 2005/10/10 19:56 PDT [Negotiation,0,127.0.0.1]
# Negotiation.negotiationFailed: [Failure instance: Traceback:
# exceptions.AttributeError: TLSConnection instance has no attribute 'socket'
# twisted/internet/tcp.py:402:connectionLost
# twisted/pb/negotiate.py:366:connectionLost
# twisted/pb/negotiate.py:205:debug_forceTimer
# twisted/pb/negotiate.py:223:debug_fireTimer
# --- <exception caught here> ---
# twisted/pb/negotiate.py:324:dataReceived
# twisted/pb/negotiate.py:432:handlePLAINTEXTServer
# twisted/pb/negotiate.py:457:sendPlaintextServerAndStartENCRYPTED
# twisted/pb/negotiate.py:494:startENCRYPTED
# twisted/pb/negotiate.py:768:startTLS
# twisted/internet/tcp.py:693:startTLS
# twisted/internet/tcp.py:314:startTLS
# ]
#
# specifically, I saw this happen for CrossfireReverse.test2, Parallel.test2
# other tests don't do quite what I want: closing a connection (say, due to a
# duplicate broker) should send a sensible error message to the other side,
# rather than triggering a low-level protocol error.
class Existing(CrossfireMixin, unittest.TestCase):
def checkNumBrokers(self, res, expected, dummy):
if type(expected) not in (tuple,list):
expected = [expected]
self.failUnless(len(self.tub1.brokers) +
len(self.tub1.unauthenticatedBrokers) in expected)
self.failUnless(len(self.tub2.brokers) +
len(self.tub2.unauthenticatedBrokers) in expected)
def testAuthenticated(self):
# When two authenticated Tubs connect, that connection should be used
# in the reverse connection too
self.makeServers()
d = self.tub1.getReference(self.url2)
d.addCallback(self._testAuthenticated_1)
return d
def _testAuthenticated_1(self, r12):
# this should use the existing connection
d = self.tub2.getReference(self.url1)
d.addCallback(self.checkNumBrokers, 1, (r12,))
return d
def testUnauthenticated(self):
# But when two non-authenticated Tubs connect, they don't get to
# share connections.
self.makeServers(tubAauthenticated=False, tubBauthenticated=False)
        # both Tubs are non-authenticated here, so each gets a tubID of
        # None. We want to verify that connections are not shared in this
        # case. In this test, the first connection is initiated from tub1
        # to tub2.
d = self.tub1.getReference(self.url2)
d.addCallback(self._testUnauthenticated_1)
return d
def _testUnauthenticated_1(self, r12):
# this should *not* use the existing connection
d = self.tub2.getReference(self.url1)
d.addCallback(self.checkNumBrokers, 2, (r12,))
return d
def testHalfAuthenticated1(self):
# When an authenticated Tub connects to a non-authenticated Tub, the
# reverse connection *is* allowed to share the connection (although,
# due to what I think are limitations in SSL, it probably won't)
self.makeServers(tubAauthenticated=True, tubBauthenticated=False)
# The non-authenticated Tub gets a tubID of None, so it becomes tub2.
# Therefore this is the authenticated-to-non-authenticated
# connection.
d = self.tub1.getReference(self.url2)
d.addCallback(self._testHalfAuthenticated1_1)
return d
def _testHalfAuthenticated1_1(self, r12):
d = self.tub2.getReference(self.url1)
d.addCallback(self.checkNumBrokers, (1,2), (r12,))
return d
def testHalfAuthenticated2(self):
# On the other hand, when a non-authenticated Tub connects to an
# authenticated Tub, the reverse connection is forbidden (because the
# non-authenticated Tub's identity is based upon its Listener's
# location)
self.makeServers(tubAauthenticated=True, tubBauthenticated=False)
# The non-authenticated Tub gets a tubID of None, so it becomes tub2.
        # Therefore this is the non-authenticated-to-authenticated
        # connection.
d = self.tub2.getReference(self.url1)
d.addCallback(self._testHalfAuthenticated2_1)
return d
def _testHalfAuthenticated2_1(self, r21):
d = self.tub1.getReference(self.url2)
d.addCallback(self.checkNumBrokers, 2, (r21,))
return d
# this test will have to change when the regular Negotiation starts using
# different decision blocks. The version numbers must be updated each time
# the negotiation version is changed.
assert negotiate.Negotiation.maxVersion == 3
MAX_HANDLED_VERSION = negotiate.Negotiation.maxVersion
UNHANDLED_VERSION = 4
class NegotiationVbig(negotiate.Negotiation):
maxVersion = UNHANDLED_VERSION
def __init__(self):
negotiate.Negotiation.__init__(self)
self.negotiationOffer["extra"] = "new value"
def evaluateNegotiationVersion4(self, offer):
# just like v1, but different
return self.evaluateNegotiationVersion1(offer)
def acceptDecisionVersion4(self, decision):
return self.acceptDecisionVersion1(decision)
class NegotiationVbigOnly(NegotiationVbig):
minVersion = UNHANDLED_VERSION
class Future(BaseMixin, unittest.TestCase):
def testFuture1(self):
        # when a peer that understands version=[1] talks to a peer that
        # understands version=[1,2], they should pick version=1
# the listening Tub will have the higher tubID, and thus make the
# negotiation decision
self.requireCrypto()
url, portnum = self.makeSpecificServer(certData_high)
# the client
client = Tub(certData=certData_low)
client.negotiationClass = NegotiationVbig
client.startService()
self.services.append(client)
d = client.getReference(url)
def _check_version(rref):
ver = rref.tracker.broker._banana_decision_version
self.failUnlessEqual(ver, MAX_HANDLED_VERSION)
d.addCallback(_check_version)
return d
testFuture1.timeout = 10
def testFuture2(self):
# same as before, but the connecting Tub will have the higher tubID,
# and thus make the negotiation decision
self.requireCrypto()
url, portnum = self.makeSpecificServer(certData_low)
# the client
client = Tub(certData=certData_high)
client.negotiationClass = NegotiationVbig
client.startService()
self.services.append(client)
d = client.getReference(url)
def _check_version(rref):
ver = rref.tracker.broker._banana_decision_version
self.failUnlessEqual(ver, MAX_HANDLED_VERSION)
d.addCallback(_check_version)
return d
testFuture2.timeout = 10
def testFuture3(self):
# same as testFuture1, but it is the listening server that
# understands [1,2]
self.requireCrypto()
url, portnum = self.makeSpecificServer(certData_high, NegotiationVbig)
client = Tub(certData=certData_low)
client.startService()
self.services.append(client)
d = client.getReference(url)
def _check_version(rref):
ver = rref.tracker.broker._banana_decision_version
self.failUnlessEqual(ver, MAX_HANDLED_VERSION)
d.addCallback(_check_version)
return d
testFuture3.timeout = 10
def testFuture4(self):
# same as testFuture2, but it is the listening server that
# understands [1,2]
self.requireCrypto()
url, portnum = self.makeSpecificServer(certData_low, NegotiationVbig)
# the client
client = Tub(certData=certData_high)
client.startService()
self.services.append(client)
d = client.getReference(url)
def _check_version(rref):
ver = rref.tracker.broker._banana_decision_version
self.failUnlessEqual(ver, MAX_HANDLED_VERSION)
d.addCallback(_check_version)
return d
testFuture4.timeout = 10
def testTooFarInFuture1(self):
        # when a peer that understands version=[1] talks to a peer that
        # only understands version=[2], they should fail to negotiate
# the listening Tub will have the higher tubID, and thus make the
# negotiation decision
self.requireCrypto()
url, portnum = self.makeSpecificServer(certData_high)
# the client
client = Tub(certData=certData_low)
client.negotiationClass = NegotiationVbigOnly
client.startService()
self.services.append(client)
d = client.getReference(url)
def _oops_succeeded(rref):
self.fail("hey! this is supposed to fail")
def _check_failure(f):
f.trap(tokens.NegotiationError)
d.addCallbacks(_oops_succeeded, _check_failure)
return d
testTooFarInFuture1.timeout = 10
def testTooFarInFuture2(self):
# same as before, but the connecting Tub will have the higher tubID,
# and thus make the negotiation decision
self.requireCrypto()
url, portnum = self.makeSpecificServer(certData_low)
client = Tub(certData=certData_high)
client.negotiationClass = NegotiationVbigOnly
client.startService()
self.services.append(client)
d = client.getReference(url)
def _oops_succeeded(rref):
self.fail("hey! this is supposed to fail")
def _check_failure(f):
f.trap(tokens.NegotiationError)
d.addCallbacks(_oops_succeeded, _check_failure)
return d
    testTooFarInFuture2.timeout = 10
def testTooFarInFuture3(self):
# same as testTooFarInFuture1, but it is the listening server which
# only understands [2]
self.requireCrypto()
url, portnum = self.makeSpecificServer(certData_high,
NegotiationVbigOnly)
client = Tub(certData=certData_low)
client.startService()
self.services.append(client)
d = client.getReference(url)
def _oops_succeeded(rref):
self.fail("hey! this is supposed to fail")
def _check_failure(f):
f.trap(tokens.NegotiationError)
d.addCallbacks(_oops_succeeded, _check_failure)
return d
testTooFarInFuture3.timeout = 10
def testTooFarInFuture4(self):
# same as testTooFarInFuture2, but it is the listening server which
# only understands [2]
self.requireCrypto()
url, portnum = self.makeSpecificServer(certData_low,
NegotiationVbigOnly)
client = Tub(certData=certData_high)
client.startService()
self.services.append(client)
d = client.getReference(url)
def _oops_succeeded(rref):
self.fail("hey! this is supposed to fail")
def _check_failure(f):
f.trap(tokens.NegotiationError)
d.addCallbacks(_oops_succeeded, _check_failure)
return d
testTooFarInFuture4.timeout = 10
# disable all tests unless NEWPB_TEST_NEGOTIATION is set in the environment.
# (Note: the guard below is currently written as 'if False', so these tests
# are not actually disabled at the moment.)
# The negotiation tests are sensitive to system load, and the intermittent
# failures are really annoying. The 'right' solution to this involves
# completely rearchitecting connection establishment, to provide debug/test
# hooks to get control in between the various phases. It also requires
# creating a loopback connection type (as a peer of TCP) which has
# deterministic timing behavior.
#import os
if False: #not os.environ.get("NEWPB_TEST_NEGOTIATION"):
del Basic
del Versus
del Parallel
del CrossfireReverse
del Crossfire
del Existing

View File

@ -0,0 +1,23 @@
# -*- test-case-name: foolscap.test_observer -*-
from twisted.trial import unittest
from twisted.internet import defer
from foolscap import observer
class Observer(unittest.TestCase):
def test_oneshot(self):
ol = observer.OneShotObserverList()
rep = repr(ol)
d1 = ol.whenFired()
d2 = ol.whenFired()
def _addmore(res):
self.failUnlessEqual(res, "result")
d3 = ol.whenFired()
d3.addCallback(self.failUnlessEqual, "result")
return d3
d1.addCallback(_addmore)
ol.fire("result")
rep = repr(ol)
d4 = ol.whenFired()
dl = defer.DeferredList([d1,d2,d4])
return dl
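
# A hedged usage sketch (not one of the tests above): a OneShotObserverList
# is typically owned by an object that becomes ready exactly once. Every
# caller of whenFired() gets its own Deferred, and callers who subscribe
# after fire() still receive the cached result. The ConnectionManagerSketch
# name below is hypothetical.
class ConnectionManagerSketch:
    def __init__(self):
        self._ready = observer.OneShotObserverList()

    def when_ready(self):
        # returns a fresh Deferred for each caller, before or after firing
        return self._ready.whenFired()

    def _connection_established(self, connection):
        # fires every outstanding Deferred and caches 'connection' for any
        # later when_ready() calls
        self._ready.fire(connection)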

View File

@ -0,0 +1,703 @@
# -*- test-case-name: foolscap.test.test_pb -*-
import re
if False:
import sys
from twisted.python import log
log.startLogging(sys.stderr)
from twisted.python import failure, log
from twisted.internet import defer
from twisted.trial import unittest
from foolscap import tokens, referenceable
from foolscap import Tub, UnauthenticatedTub
from foolscap import getRemoteURL_TCP
from foolscap.tokens import BananaError, Violation, INT, STRING, OPEN
from foolscap.tokens import BananaFailure
from foolscap import broker, call
from foolscap.constraint import IConstraint
crypto_available = False
try:
from foolscap import crypto
crypto_available = crypto.available
except ImportError:
pass
# we use authenticated tubs if possible. If crypto is not available, fall
# back to unauthenticated ones
GoodEnoughTub = UnauthenticatedTub
if crypto_available:
GoodEnoughTub = Tub
from foolscap.test.common import HelperTarget, RIHelper, TargetMixin
from foolscap.eventual import flushEventualQueue
from foolscap.test.common import Target, TargetWithoutInterfaces
class TestRequest(call.PendingRequest):
def __init__(self, reqID, rref=None):
self.answers = []
call.PendingRequest.__init__(self, reqID, rref)
def complete(self, res):
self.answers.append((True, res))
def fail(self, why):
self.answers.append((False, why))
class TestReferenceUnslicer(unittest.TestCase):
# OPEN(reference), INT(refid), [STR(interfacename), INT(version)]... CLOSE
def setUp(self):
self.broker = broker.Broker()
def tearDown(self):
return flushEventualQueue()
def newUnslicer(self):
unslicer = referenceable.ReferenceUnslicer()
unslicer.broker = self.broker
unslicer.opener = self.broker.rootUnslicer
return unslicer
def testReject(self):
u = self.newUnslicer()
self.failUnlessRaises(BananaError, u.checkToken, STRING, 10)
u = self.newUnslicer()
self.failUnlessRaises(BananaError, u.checkToken, OPEN, 0)
def testNoInterfaces(self):
u = self.newUnslicer()
u.checkToken(INT, 0)
u.receiveChild(12)
rr1,rr1d = u.receiveClose()
self.failUnless(rr1d is None)
rr2 = self.broker.getTrackerForYourReference(12).getRef()
self.failUnless(rr2)
self.failUnless(isinstance(rr2, referenceable.RemoteReference))
self.failUnlessEqual(rr2.tracker.broker, self.broker)
self.failUnlessEqual(rr2.tracker.clid, 12)
self.failUnlessEqual(rr2.tracker.interfaceName, None)
def testInterfaces(self):
u = self.newUnslicer()
u.checkToken(INT, 0)
u.receiveChild(12)
u.receiveChild("IBar")
rr1,rr1d = u.receiveClose()
self.failUnless(rr1d is None)
rr2 = self.broker.getTrackerForYourReference(12).getRef()
self.failUnless(rr2)
self.failUnlessIdentical(rr1, rr2)
self.failUnless(isinstance(rr2, referenceable.RemoteReference))
self.failUnlessEqual(rr2.tracker.broker, self.broker)
self.failUnlessEqual(rr2.tracker.clid, 12)
self.failUnlessEqual(rr2.tracker.interfaceName, "IBar")
class TestAnswer(unittest.TestCase):
# OPEN(answer), INT(reqID), [answer], CLOSE
def setUp(self):
self.broker = broker.Broker()
def tearDown(self):
return flushEventualQueue()
def newUnslicer(self):
unslicer = call.AnswerUnslicer()
unslicer.broker = self.broker
unslicer.opener = self.broker.rootUnslicer
unslicer.protocol = self.broker
return unslicer
def makeRequest(self):
req = call.PendingRequest(defer.Deferred())
def testAccept1(self):
req = TestRequest(12)
self.broker.addRequest(req)
u = self.newUnslicer()
u.checkToken(INT, 0)
u.receiveChild(12) # causes broker.getRequest
u.checkToken(STRING, 8)
u.receiveChild("results")
self.failIf(req.answers)
u.receiveClose() # causes broker.gotAnswer
self.failUnlessEqual(req.answers, [(True, "results")])
def testAccept2(self):
req = TestRequest(12)
req.setConstraint(IConstraint(str))
self.broker.addRequest(req)
u = self.newUnslicer()
u.checkToken(INT, 0)
u.receiveChild(12) # causes broker.getRequest
u.checkToken(STRING, 15)
u.receiveChild("results")
self.failIf(req.answers)
u.receiveClose() # causes broker.gotAnswer
self.failUnlessEqual(req.answers, [(True, "results")])
def testReject1(self):
# answer a non-existent request
req = TestRequest(12)
self.broker.addRequest(req)
u = self.newUnslicer()
u.checkToken(INT, 0)
self.failUnlessRaises(Violation, u.receiveChild, 13)
def testReject2(self):
# answer a request with a result that violates the constraint
req = TestRequest(12)
req.setConstraint(IConstraint(int))
self.broker.addRequest(req)
u = self.newUnslicer()
u.checkToken(INT, 0)
u.receiveChild(12)
self.failUnlessRaises(Violation, u.checkToken, STRING, 42)
# this does not yet errback the request
self.failIf(req.answers)
# it gets errbacked when banana reports the violation
v = Violation("icky")
v.setLocation("here")
u.reportViolation(BananaFailure(v))
self.failUnlessEqual(len(req.answers), 1)
err = req.answers[0]
self.failIf(err[0])
f = err[1]
self.failUnless(f.check(Violation))
class TestReferenceable(TargetMixin, unittest.TestCase):
# test how a Referenceable gets transformed into a RemoteReference as it
# crosses the wire, then verify that it gets transformed back into the
# original Referenceable when it comes back. Also test how shared
# references to the same object are handled.
def setUp(self):
TargetMixin.setUp(self)
self.setupBrokers()
if 0:
print
self.callingBroker.doLog = "TX"
self.targetBroker.doLog = " rx"
def send(self, arg):
rr, target = self.setupTarget(HelperTarget())
d = rr.callRemote("set", obj=arg)
d.addCallback(self.failUnless)
d.addCallback(lambda res: target.obj)
return d
def send2(self, arg1, arg2):
rr, target = self.setupTarget(HelperTarget())
d = rr.callRemote("set2", obj1=arg1, obj2=arg2)
d.addCallback(self.failUnless)
d.addCallback(lambda res: (target.obj1, target.obj2))
return d
def echo(self, arg):
rr, target = self.setupTarget(HelperTarget())
d = rr.callRemote("echo", obj=arg)
return d
def testRef1(self):
# Referenceables turn into RemoteReferences
r = Target()
d = self.send(r)
d.addCallback(self._testRef1_1, r)
return d
def _testRef1_1(self, res, r):
t = res.tracker
self.failUnless(isinstance(res, referenceable.RemoteReference))
self.failUnlessEqual(t.broker, self.targetBroker)
self.failUnless(type(t.clid) is int)
self.failUnless(self.callingBroker.getMyReferenceByCLID(t.clid) is r)
self.failUnlessEqual(t.interfaceName, 'RIMyTarget')
def testRef2(self):
# sending a Referenceable over the wire multiple times should result
# in equivalent RemoteReferences
r = Target()
d = self.send(r)
d.addCallback(self._testRef2_1, r)
return d
def _testRef2_1(self, res1, r):
d = self.send(r)
d.addCallback(self._testRef2_2, res1)
return d
def _testRef2_2(self, res2, res1):
self.failUnless(res1 == res2)
self.failUnless(res1 is res2) # newpb does this, oldpb didn't
def testRef3(self):
# sending the same Referenceable in multiple arguments should result
# in equivalent RRs
r = Target()
d = self.send2(r, r)
d.addCallback(self._testRef3_1)
return d
def _testRef3_1(self, (res1, res2)):
self.failUnless(res1 == res2)
self.failUnless(res1 is res2)
def testRef4(self):
# sending the same Referenceable in multiple calls will result in
# equivalent RRs
r = Target()
rr, target = self.setupTarget(HelperTarget())
d = rr.callRemote("set", obj=r)
d.addCallback(self._testRef4_1, rr, r, target)
return d
def _testRef4_1(self, res, rr, r, target):
res1 = target.obj
d = rr.callRemote("set", obj=r)
d.addCallback(self._testRef4_2, target, res1)
return d
def _testRef4_2(self, res, target, res1):
res2 = target.obj
self.failUnless(res1 == res2)
self.failUnless(res1 is res2)
def testRef5(self):
# those RemoteReferences can be used to invoke methods on the sender.
# 'r' lives on side A. The anonymous target lives on side B. From
# side A we invoke B.set(r), and we get the matching RemoteReference
# 'rr' which lives on side B. Then we use 'rr' to invoke r.getName
# from side A.
r = Target()
r.name = "ernie"
d = self.send(r)
d.addCallback(lambda rr: rr.callRemote("getName"))
d.addCallback(self.failUnlessEqual, "ernie")
return d
def testRef6(self):
# Referenceables survive round-trips
r = Target()
d = self.echo(r)
d.addCallback(self.failUnlessIdentical, r)
return d
## def NOTtestRemoteRef1(self):
## # known URLRemoteReferences turn into Referenceables
## root = Target()
## rr, target = self.setupTarget(HelperTarget())
## self.targetBroker.factory = pb.PBServerFactory(root)
## urlRRef = self.callingBroker.remoteReferenceForName("", [])
## # urlRRef points at root
## d = rr.callRemote("set", obj=urlRRef)
## self.failUnless(dr(d))
## self.failUnlessIdentical(target.obj, root)
## def NOTtestRemoteRef2(self):
## # unknown URLRemoteReferences are errors
## root = Target()
## rr, target = self.setupTarget(HelperTarget())
## self.targetBroker.factory = pb.PBServerFactory(root)
## urlRRef = self.callingBroker.remoteReferenceForName("bogus", [])
## # urlRRef points at nothing
## d = rr.callRemote("set", obj=urlRRef)
## f = de(d)
## #print f
## #self.failUnlessEqual(f.type, tokens.Violation)
## self.failUnlessEqual(type(f.value), str)
## self.failUnless(f.value.find("unknown clid 'bogus'") != -1)
def testArgs1(self):
# sending the same non-Referenceable object in multiple calls results
# in distinct objects, because the serialization scope is bounded by
# each method call
r = [1,2]
rr, target = self.setupTarget(HelperTarget())
d = rr.callRemote("set", obj=r)
d.addCallback(self._testArgs1_1, rr, r, target)
# TODO: also make sure the original list goes out of scope once the
# method call has finished, to guard against a leaky
# reference-tracking implementation.
return d
def _testArgs1_1(self, res, rr, r, target):
res1 = target.obj
d = rr.callRemote("set", obj=r)
d.addCallback(self._testArgs1_2, target, res1)
return d
def _testArgs1_2(self, res, target, res1):
res2 = target.obj
self.failUnless(res1 == res2)
self.failIf(res1 is res2)
def testArgs2(self):
# but sending them as multiple arguments of the *same* method call
# results in identical objects
r = [1,2]
rr, target = self.setupTarget(HelperTarget())
d = rr.callRemote("set2", obj1=r, obj2=r)
d.addCallback(self._testArgs2_1, rr, target)
return d
def _testArgs2_1(self, res, rr, target):
self.failUnlessIdentical(target.obj1, target.obj2)
def testAnswer1(self):
# also, shared objects in a return value should be shared
r = [1,2]
rr, target = self.setupTarget(HelperTarget())
target.obj = (r,r)
d = rr.callRemote("get")
d.addCallback(lambda res: self.failUnlessIdentical(res[0], res[1]))
return d
def testAnswer2(self):
# but objects returned by separate method calls should be distinct
rr, target = self.setupTarget(HelperTarget())
r = [1,2]
target.obj = r
d = rr.callRemote("get")
d.addCallback(self._testAnswer2_1, rr, target)
return d
def _testAnswer2_1(self, res1, rr, target):
d = rr.callRemote("get")
d.addCallback(self._testAnswer2_2, res1)
return d
def _testAnswer2_2(self, res2, res1):
self.failUnless(res1 == res2)
self.failIf(res1 is res2)
class TestFactory(unittest.TestCase):
def setUp(self):
self.client = None
self.server = None
def gotReference(self, ref):
self.client = ref
def tearDown(self):
if self.client:
self.client.broker.transport.loseConnection()
if self.server:
d = self.server.stopListening()
else:
d = defer.succeed(None)
d.addCallback(flushEventualQueue)
return d
class TestCallable(unittest.TestCase):
def setUp(self):
self.services = [GoodEnoughTub(), GoodEnoughTub()]
self.tubA, self.tubB = self.services
for s in self.services:
s.startService()
l = s.listenOn("tcp:0:interface=127.0.0.1")
s.setLocation("127.0.0.1:%d" % l.getPortnum())
self._log_observers_to_remove = []
def addLogObserver(self, observer):
log.addObserver(observer)
self._log_observers_to_remove.append(observer)
def tearDown(self):
for lo in self._log_observers_to_remove:
log.removeObserver(lo)
d = defer.DeferredList([s.stopService() for s in self.services])
d.addCallback(flushEventualQueue)
return d
def testLogLocalFailure(self):
self.tubB.setOption("logLocalFailures", True)
target = Target()
logs = []
self.addLogObserver(logs.append)
url = self.tubB.registerReference(target)
d = self.tubA.getReference(url)
d.addCallback(lambda rref: rref.callRemote("fail"))
# this will cause some text to be logged with log.msg. TODO: capture
# this text and look at it more closely.
def _check(res):
self.failUnless(isinstance(res, failure.Failure))
res.trap(ValueError)
messages = [l['message'][0] for l in logs]
text = "\n".join(messages)
self.failUnless("an inbound callRemote that we executed (on behalf of someone else) failed\n" in text)
self.failUnless("\n reqID=2, rref=<foolscap.test.common.Target object at "
in text)
self.failUnless(", methname=fail\n" in text)
self.failUnless("\n args=[]\n" in text)
self.failUnless("\n kwargs={}\n" in text)
self.failUnless("\nLOCAL: Traceback (most recent call last):\n"
in text)
self.failUnless("\nLOCAL: exceptions.ValueError: you asked me to fail\n" in text)
d.addBoth(_check)
return d
def testLogRemoteFailure(self):
self.tubA.setOption("logRemoteFailures", True)
target = Target()
logs = []
self.addLogObserver(logs.append)
url = self.tubB.registerReference(target)
d = self.tubA.getReference(url)
d.addCallback(lambda rref: rref.callRemote("fail"))
# this will cause some text to be logged with log.msg. TODO: capture
# this text and look at it more closely.
def _check(res):
self.failUnless(isinstance(res, failure.Failure))
res.trap(ValueError)
messages = [l['message'][0] for l in logs]
text = "\n".join(messages)
self.failUnless("an outbound callRemote (that we sent to someone else) failed on the far end\n" in text)
self.failUnless("\n reqID=2, rref=<RemoteReference at "
in text)
self.failUnless((" [%s]>, methname=fail\n" % url) in text)
#self.failUnless("\n args=[]\n" in text) # TODO: log these too
#self.failUnless("\n kwargs={}\n" in text)
self.failUnless("\nREMOTE: Traceback from remote host -- Traceback (most recent call last):\n"
in text)
self.failUnless("\nREMOTE: exceptions.ValueError: you asked me to fail\n" in text)
d.addBoth(_check)
return d
def testBoundMethod(self):
target = Target()
meth_url = self.tubB.registerReference(target.remote_add)
d = self.tubA.getReference(meth_url)
d.addCallback(self._testBoundMethod_1)
return d
testBoundMethod.timeout = 5
def _testBoundMethod_1(self, ref):
self.failUnless(isinstance(ref, referenceable.RemoteMethodReference))
#self.failUnlessEqual(ref.getSchemaName(),
# RIMyTarget.__remote_name__ + "/remote_add")
d = ref.callRemote(a=1, b=2)
d.addCallback(lambda res: self.failUnlessEqual(res, 3))
return d
def testFunction(self):
l = []
# we need a keyword arg here
def append(what):
l.append(what)
func_url = self.tubB.registerReference(append)
d = self.tubA.getReference(func_url)
d.addCallback(self._testFunction_1, l)
return d
testFunction.timeout = 5
def _testFunction_1(self, ref, l):
self.failUnless(isinstance(ref, referenceable.RemoteMethodReference))
d = ref.callRemote(what=12)
d.addCallback(lambda res: self.failUnlessEqual(l, [12]))
return d
class TestService(unittest.TestCase):
def setUp(self):
self.services = [GoodEnoughTub()]
self.services[0].startService()
def tearDown(self):
d = defer.DeferredList([s.stopService() for s in self.services])
d.addCallback(flushEventualQueue)
return d
def testRegister(self):
s = self.services[0]
l = s.listenOn("tcp:0:interface=127.0.0.1")
s.setLocation("127.0.0.1:%d" % l.getPortnum())
t1 = Target()
public_url = s.registerReference(t1, "target")
if crypto_available:
self.failUnless(public_url.startswith("pb://"))
self.failUnless(public_url.endswith("@127.0.0.1:%d/target"
% l.getPortnum()))
else:
self.failUnlessEqual(public_url,
"pbu://127.0.0.1:%d/target"
% l.getPortnum())
self.failUnlessEqual(s.registerReference(t1, "target"), public_url)
self.failUnlessIdentical(s.getReferenceForURL(public_url), t1)
t2 = Target()
private_url = s.registerReference(t2)
self.failUnlessEqual(s.registerReference(t2), private_url)
self.failUnlessIdentical(s.getReferenceForURL(private_url), t2)
s.unregisterURL(public_url)
self.failUnlessRaises(KeyError, s.getReferenceForURL, public_url)
s.unregisterReference(t2)
self.failUnlessRaises(KeyError, s.getReferenceForURL, private_url)
# TODO: check what happens when you register the same referenceable
# under multiple URLs
def getRef(self, target):
self.services.append(GoodEnoughTub())
s1 = self.services[0]
s2 = self.services[1]
s2.startService()
l = s1.listenOn("tcp:0:interface=127.0.0.1")
s1.setLocation("127.0.0.1:%d" % l.getPortnum())
public_url = s1.registerReference(target, "target")
self.public_url = public_url
d = s2.getReference(public_url)
return d
def testConnect1(self):
t1 = TargetWithoutInterfaces()
d = self.getRef(t1)
d.addCallback(lambda ref: ref.callRemote('add', a=2, b=3))
d.addCallback(self._testConnect1, t1)
return d
testConnect1.timeout = 5
def _testConnect1(self, res, t1):
self.failUnlessEqual(t1.calls, [(2,3)])
self.failUnlessEqual(res, 5)
def testConnect2(self):
t1 = Target()
d = self.getRef(t1)
d.addCallback(lambda ref: ref.callRemote('add', a=2, b=3))
d.addCallback(self._testConnect2, t1)
return d
testConnect2.timeout = 5
def _testConnect2(self, res, t1):
self.failUnlessEqual(t1.calls, [(2,3)])
self.failUnlessEqual(res, 5)
def testConnect3(self):
# test that we can get the reference multiple times
t1 = Target()
d = self.getRef(t1)
d.addCallback(lambda ref: ref.callRemote('add', a=2, b=3))
def _check(res):
self.failUnlessEqual(t1.calls, [(2,3)])
self.failUnlessEqual(res, 5)
t1.calls = []
d.addCallback(_check)
d.addCallback(lambda res:
self.services[1].getReference(self.public_url))
d.addCallback(lambda ref: ref.callRemote('add', a=5, b=6))
def _check2(res):
self.failUnlessEqual(t1.calls, [(5,6)])
self.failUnlessEqual(res, 11)
d.addCallback(_check2)
return d
testConnect3.timeout = 5
def TODO_testStatic(self):
# make sure we can register static data too, at least hashable ones
t1 = (1,2,3)
d = self.getRef(t1)
d.addCallback(lambda ref: self.failUnlessEqual(ref, (1,2,3)))
return d
#testStatic.timeout = 2
def testBadMethod(self):
t1 = Target()
d = self.getRef(t1)
d.addCallback(lambda ref: ref.callRemote('missing', a=2, b=3))
d.addCallbacks(self._testBadMethod_cb, self._testBadMethod_eb)
return d
testBadMethod.timeout = 5
def _testBadMethod_cb(self, res):
self.fail("method wasn't supposed to work")
def _testBadMethod_eb(self, f):
#self.failUnlessEqual(f.type, 'foolscap.tokens.Violation')
self.failUnlessEqual(f.type, Violation)
self.failUnless(re.search(r'RIMyTarget\(.*\) does not offer missing',
str(f)))
def testBadMethod2(self):
t1 = TargetWithoutInterfaces()
d = self.getRef(t1)
d.addCallback(lambda ref: ref.callRemote('missing', a=2, b=3))
d.addCallbacks(self._testBadMethod_cb, self._testBadMethod2_eb)
return d
testBadMethod2.timeout = 5
def _testBadMethod2_eb(self, f):
self.failUnlessEqual(f.type, 'exceptions.AttributeError')
self.failUnlessSubstring("TargetWithoutInterfaces", f.value)
self.failUnlessSubstring(" has no attribute 'remote_missing'", f.value)
class ThreeWayHelper:
passed = False
def start(self):
d = getRemoteURL_TCP("127.0.0.1", self.portnum1, "", RIHelper)
d.addCallback(self.step2)
d.addErrback(self.err)
return d
def step2(self, remote1):
# .remote1 is our RRef to server1's "t1" HelperTarget
self.clients.append(remote1)
self.remote1 = remote1
d = getRemoteURL_TCP("127.0.0.1", self.portnum2, "", RIHelper)
d.addCallback(self.step3)
return d
def step3(self, remote2):
# and .remote2 is our RRef to server2's "t2" helper target
self.clients.append(remote2)
self.remote2 = remote2
# sending a RemoteReference back to its source should be ok
d = self.remote1.callRemote("set", obj=self.remote1)
d.addCallback(self.step4)
return d
def step4(self, res):
assert self.target1.obj is self.target1
# but sending one to someone else is not
d = self.remote2.callRemote("set", obj=self.remote1)
d.addCallback(self.step5_callback)
d.addErrback(self.step5_errback)
return d
def step5_callback(self, res):
why = unittest.FailTest("sending a 3rd-party reference did not fail")
self.err(failure.Failure(why))
return None
def step5_errback(self, why):
bad = None
if why.type != tokens.Violation:
bad = "%s failure should be a Violation" % why.type
elif why.value.args[0].find("RemoteReferences can only be sent back to their home Broker") == -1:
bad = "wrong error message: '%s'" % why.value.args[0]
if bad:
why = unittest.FailTest(bad)
self.passed = failure.Failure(why)
else:
self.passed = True
def err(self, why):
self.passed = why
# TODO:
# when the Violation is remote, it is reported in a CopiedFailure, which
# means f.type is a string. When it is local, it is reported in a Failure,
# and f.type is the tokens.Violation class. I'm not sure how I feel about
# these being different.
# TODO: tests to port from oldpb suite
# testTooManyRefs: sending pb.MAX_BROKER_REFS across the wire should die
# testFactoryCopy?
# tests which aren't relevant right now but which might be once we port the
# corresponding functionality:
#
# testObserve, testCache (pb.Cacheable)
# testViewPoint
# testPublishable (spread.publish??)
# SpreadUtilTestCase (spread.util)
# NewCredTestCase
# tests which aren't relevant and aren't likely to ever be
#
# PagingTestCase
# ConnectionTestCase (oldcred)
# NSPTestCase

View File

@ -0,0 +1,254 @@
from twisted.trial import unittest
from twisted.python.failure import Failure
from foolscap.promise import makePromise, send, sendOnly, when, UsageError
from foolscap.eventual import flushEventualQueue, fireEventually
class KaboomError(Exception):
pass
class Target:
def __init__(self):
self.calls = []
def one(self, a):
self.calls.append(("one", a))
return a+1
def two(self, a, b=2, **kwargs):
self.calls.append(("two", a, b, kwargs))
def fail(self, arg):
raise KaboomError("kaboom!")
class Counter:
def __init__(self, count=0):
self.count = count
def add(self, value):
self.count += value
return self
class Send(unittest.TestCase):
def tearDown(self):
return flushEventualQueue()
def testBasic(self):
p,r = makePromise()
def _check(res, *args, **kwargs):
self.failUnlessEqual(res, 1)
self.failUnlessEqual(args, ("one",))
self.failUnlessEqual(kwargs, {"two": 2})
p2 = p._then(_check, "one", two=2)
self.failUnlessIdentical(p2, p)
r(1)
def testBasicFailure(self):
p,r = makePromise()
def _check(res, *args, **kwargs):
self.failUnless(isinstance(res, Failure))
self.failUnless(res.check(KaboomError))
self.failUnlessEqual(args, ("one",))
self.failUnlessEqual(kwargs, {"two": 2})
p2 = p._except(_check, "one", two=2)
self.failUnlessIdentical(p2, p)
r(Failure(KaboomError("oops")))
def testSend(self):
t = Target()
p = send(t).one(1)
self.failIf(t.calls)
def _check(res):
self.failUnlessEqual(res, 2)
self.failUnlessEqual(t.calls, [("one", 1)])
p._then(_check)
when(p).addCallback(_check) # check it twice to test both syntaxes
def testOrdering(self):
t = Target()
p1 = send(t).one(1)
p2 = send(t).two(3, k="extra")
self.failIf(t.calls)
def _check1(res):
# we can't check t.calls here: the when() clause is not
# guaranteed to fire before the second send.
self.failUnlessEqual(res, 2)
when(p1).addCallback(_check1)
def _check2(res):
self.failUnlessEqual(res, None)
when(p2).addCallback(_check2)
def _check3(res):
self.failUnlessEqual(t.calls, [("one", 1),
("two", 3, 2, {"k": "extra"}),
])
fireEventually().addCallback(_check3)
def testFailure(self):
t = Target()
p1 = send(t).fail(0)
def _check(res):
self.failUnless(isinstance(res, Failure))
self.failUnless(res.check(KaboomError))
p1._then(lambda res: self.fail("we were supposed to fail"))
p1._except(_check)
when(p1).addBoth(_check)
def testBadName(self):
t = Target()
p1 = send(t).missing(0)
def _check(res):
self.failUnless(isinstance(res, Failure))
self.failUnless(res.check(AttributeError))
when(p1).addBoth(_check)
def testDisableDataflowStyle(self):
p,r = makePromise()
p._useDataflowStyle = False
def wrong(p):
p.one(12)
self.failUnlessRaises(AttributeError, wrong, p)
def testNoMultipleResolution(self):
p,r = makePromise()
r(3)
self.failUnlessRaises(UsageError, r, 4)
def testResolveBefore(self):
t = Target()
p,r = makePromise()
r(t)
p = send(p).one(2)
def _check(res):
self.failUnlessEqual(res, 3)
when(p).addCallback(_check)
def testResolveAfter(self):
t = Target()
p,r = makePromise()
p = send(p).one(2)
def _check(res):
self.failUnlessEqual(res, 3)
when(p).addCallback(_check)
r(t)
def testResolveFailure(self):
t = Target()
p,r = makePromise()
p = send(p).one(2)
def _check(res):
self.failUnless(isinstance(res, Failure))
self.failUnless(res.check(KaboomError))
when(p).addBoth(_check)
f = Failure(KaboomError("oops"))
r(f)
class Call(unittest.TestCase):
def tearDown(self):
return flushEventualQueue()
def testResolveBefore(self):
t = Target()
p1,r = makePromise()
r(t)
p2 = p1.one(2)
def _check(res):
self.failUnlessEqual(res, 3)
p2._then(_check)
def testResolveAfter(self):
t = Target()
p1,r = makePromise()
p2 = p1.one(2)
def _check(res):
self.failUnlessEqual(res, 3)
p2._then(_check)
r(t)
def testResolveFailure(self):
t = Target()
p1,r = makePromise()
p2 = p1.one(2)
def _check(res):
self.failUnless(isinstance(res, Failure))
self.failUnless(res.check(KaboomError))
p2._then(lambda res: self.fail("this was supposed to fail"))
p2._except(_check)
f = Failure(KaboomError("oops"))
r(f)
class SendOnly(unittest.TestCase):
def testNear(self):
t = Target()
sendOnly(t).one(1)
self.failIf(t.calls)
def _check(res):
self.failUnlessEqual(t.calls, [("one", 1)])
d = flushEventualQueue()
d.addCallback(_check)
return d
def testResolveBefore(self):
t = Target()
p,r = makePromise()
r(t)
sendOnly(p).one(1)
d = flushEventualQueue()
def _check(res):
self.failUnlessEqual(t.calls, [("one", 1)])
d.addCallback(_check)
return d
def testResolveAfter(self):
t = Target()
p,r = makePromise()
sendOnly(p).one(1)
r(t)
d = flushEventualQueue()
def _check(res):
self.failUnlessEqual(t.calls, [("one", 1)])
d.addCallback(_check)
return d
class Chained(unittest.TestCase):
def tearDown(self):
return flushEventualQueue()
def testResolveToAPromise(self):
p1,r1 = makePromise()
p2,r2 = makePromise()
def _check(res):
self.failUnlessEqual(res, 1)
p1._then(_check)
r1(p2)
def _continue(res):
r2(1)
flushEventualQueue().addCallback(_continue)
return when(p1)
def testResolveToABrokenPromise(self):
p1,r1 = makePromise()
p2,r2 = makePromise()
r1(p2)
def _continue(res):
r2(Failure(KaboomError("foom")))
flushEventualQueue().addCallback(_continue)
def _check2(res):
self.failUnless(isinstance(res, Failure))
self.failUnless(res.check(KaboomError))
d = when(p1)
d.addBoth(_check2)
return d
def testChained1(self):
p1,r = makePromise()
p2 = p1.add(2)
p3 = p2.add(3)
def _check(c):
self.failUnlessEqual(c.count, 5)
p3._then(_check)
r(Counter(0))
def testChained2(self):
p1,r = makePromise()
def _check(c, expected):
self.failUnlessEqual(c.count, expected)
p1.add(2).add(3)._then(_check, 6)
r(Counter(1))
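
# A hedged summary sketch (not one of the tests above): the same eventual
# invocation can be written in the dataflow style (p.method()) or in the
# explicit style (send(p).method()), and when() converts the resulting
# Promise into an ordinary Deferred. The Account class is hypothetical;
# flushEventualQueue() is used the same way the tearDown methods above use
# it, to let the queued sends run.
class Account:
    def getBalance(self):
        return 42

def promise_style_sketch():
    p, resolve = makePromise()
    b1 = p.getBalance()        # dataflow style: returns another Promise
    b2 = send(p).getBalance()  # explicit style: an equivalent Promise
    d = when(b1)               # an ordinary Deferred for the eventual result
    resolve(Account())         # pending invocations are delivered eventually
    return flushEventualQueue().addCallback(lambda ignored: d)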

View File

@ -0,0 +1,210 @@
# -*- test-case-name: foolscap.test.test_reconnector -*-
from twisted.trial import unittest
from foolscap import UnauthenticatedTub
from foolscap.test.common import HelperTarget
from twisted.internet.main import CONNECTION_LOST
from twisted.internet import defer, reactor
from foolscap.eventual import eventually, flushEventualQueue
from foolscap import negotiate
class AlwaysFailNegotiation(negotiate.Negotiation):
def evaluateHello(self, offer):
raise negotiate.NegotiationError("I always fail")
class Reconnector(unittest.TestCase):
def setUp(self):
self.services = [UnauthenticatedTub(), UnauthenticatedTub()]
self.tubA, self.tubB = self.services
for s in self.services:
s.startService()
l = s.listenOn("tcp:0:interface=127.0.0.1")
s.setLocation("127.0.0.1:%d" % l.getPortnum())
def tearDown(self):
d = defer.DeferredList([s.stopService() for s in self.services])
d.addCallback(flushEventualQueue)
return d
def test_try(self):
self.count = 0
self.attached = False
self.done = defer.Deferred()
target = HelperTarget("bob")
url = self.tubB.registerReference(target)
rc = self.tubA.connectTo(url, self._got_ref, "arg", kw="kwarg")
# at least make sure the stopConnecting method is present, even if we
# don't have a real test for it yet
self.failUnless(rc.stopConnecting)
return self.done
def _got_ref(self, rref, arg, kw):
self.failUnlessEqual(self.attached, False)
self.attached = True
self.failUnlessEqual(arg, "arg")
self.failUnlessEqual(kw, "kwarg")
self.count += 1
rref.notifyOnDisconnect(self._disconnected, self.count)
if self.count < 2:
# forcibly disconnect it
eventually(rref.tracker.broker.transport.loseConnection,
CONNECTION_LOST)
else:
self.done.callback("done")
def _disconnected(self, count):
self.failUnlessEqual(self.attached, True)
self.failUnlessEqual(count, self.count)
self.attached = False
def _connected(self, ref, notifiers, accumulate):
accumulate.append(ref)
if notifiers:
notifiers.pop(0).callback(ref)
def stall(self, timeout, res=None):
d = defer.Deferred()
reactor.callLater(timeout, d.callback, res)
return d
def test_retry(self):
tubC = UnauthenticatedTub()
connects = []
target = HelperTarget("bob")
url = self.tubB.registerReference(target, "target")
portb = self.tubB.getListeners()[0].getPortnum()
d1 = defer.Deferred()
notifiers = [d1]
self.services.remove(self.tubB)
d = self.tubB.stopService()
def _start_connecting(res):
# this will fail, since tubB is not listening anymore
self.rc = self.tubA.connectTo(url, self._connected,
notifiers, connects)
# give it a few tries, then start tubC listening on the same port
# that tubB used to, which should allow the connection to
# complete (since they're both UnauthenticatedTubs)
return self.stall(2)
d.addCallback(_start_connecting)
def _start_tubC(res):
self.failUnlessEqual(len(connects), 0)
self.services.append(tubC)
tubC.startService()
tubC.listenOn("tcp:%d:interface=127.0.0.1" % portb)
tubC.setLocation("127.0.0.1:%d" % portb)
url2 = tubC.registerReference(target, "target")
assert url2 == url
return d1
d.addCallback(_start_tubC)
def _connected(res):
self.failUnlessEqual(len(connects), 1)
self.rc.stopConnecting()
d.addCallback(_connected)
return d
def test_negotiate_fails_and_retry(self):
connects = []
target = HelperTarget("bob")
url = self.tubB.registerReference(target, "target")
l = self.tubB.getListeners()[0]
l.negotiationClass = AlwaysFailNegotiation
portb = l.getPortnum()
d1 = defer.Deferred()
notifiers = [d1]
self.rc = self.tubA.connectTo(url, self._connected,
notifiers, connects)
d = self.stall(2)
def _failed_a_few_times(res):
# the reconnector should have failed once or twice, since the
# negotiation would always fail.
self.failUnlessEqual(len(connects), 0)
# Now we fix tubB. We only touched the Listener, so re-doing the
# listenOn should clear it.
return self.tubB.stopListeningOn(l)
d.addCallback(_failed_a_few_times)
def _stopped(res):
self.tubB.listenOn("tcp:%d:interface=127.0.0.1" % portb)
# the next time the reconnector tries, it should succeed
return d1
d.addCallback(_stopped)
def _connected(res):
self.failUnlessEqual(len(connects), 1)
self.rc.stopConnecting()
d.addCallback(_connected)
return d
def test_lose_and_retry(self):
tubC = UnauthenticatedTub()
connects = []
d1 = defer.Deferred()
d2 = defer.Deferred()
notifiers = [d1, d2]
target = HelperTarget("bob")
url = self.tubB.registerReference(target, "target")
portb = self.tubB.getListeners()[0].getPortnum()
self.rc = self.tubA.connectTo(url, self._connected,
notifiers, connects)
def _connected_first(res):
# we are now connected to tubB. Shut it down to force a
# disconnect.
self.services.remove(self.tubB)
d = self.tubB.stopService()
return d
d1.addCallback(_connected_first)
def _wait(res):
# wait a few seconds to give the Reconnector a chance to try and
# fail a few times
return self.stall(2)
d1.addCallback(_wait)
def _start_tubC(res):
# now start tubC listening on the same port that tubB used to,
# which should allow the connection to complete (since they're
# both UnauthenticatedTubs)
self.services.append(tubC)
tubC.startService()
tubC.listenOn("tcp:%d:interface=127.0.0.1" % portb)
tubC.setLocation("127.0.0.1:%d" % portb)
url2 = tubC.registerReference(target, "target")
assert url2 == url
# this will fire when the second connection has been made
return d2
d1.addCallback(_start_tubC)
def _connected(res):
self.failUnlessEqual(len(connects), 2)
self.rc.stopConnecting()
d1.addCallback(_connected)
return d1
def test_stop_trying(self):
connects = []
target = HelperTarget("bob")
url = self.tubB.registerReference(target, "target")
d1 = defer.Deferred()
self.services.remove(self.tubB)
d = self.tubB.stopService()
def _start_connecting(res):
# this will fail, since tubB is not listening anymore
self.rc = self.tubA.connectTo(url, self._connected, d1, connects)
self.rc.verbose = True # get better code coverage
# give it a few tries, then tell it to stop trying
return self.stall(2)
d.addCallback(_start_connecting)
def _stop_trying(res):
self.failUnlessEqual(len(connects), 0)
# this stopConnecting occurs while the reconnector's timer is
# active
self.rc.stopConnecting()
d.addCallback(_stop_trying)
# if it keeps trying, we'll see a dirty reactor
return d
# another test: determine the target url early, but don't actually register
# the reference yet. Start the reconnector, let it fail once, then register
# the reference and make sure the retry succeeds. This will distinguish
# between connection/negotiation failures and object-lookup failures, both of
# which ought to be handled by Reconnector. I suspect the object-lookup
# failures are not yet.
# test that Tub shutdown really stops all Reconnectors
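
# A hedged sketch of the delayed-registration test described above (not yet
# part of the suite; per the comment, it may not pass until object-lookup
# failures are handled by the Reconnector). It is written as if it were a
# method of the Reconnector class, reusing its setUp/stall/_connected
# helpers; using tub.buildURL() to learn the URL before registration is an
# assumption borrowed from test_registration.py.
def _sketch_retry_after_late_registration(self):
    connects = []
    d1 = defer.Deferred()
    target = HelperTarget("bob")
    name = "target"
    url = self.tubB.buildURL(name)  # URL is known, but nothing registered yet
    self.rc = self.tubA.connectTo(url, self._connected, [d1], connects)
    def _register(res):
        # by now the reconnector should have failed at least once with an
        # object-lookup error rather than a connection error
        self.failUnlessEqual(len(connects), 0)
        self.tubB.registerReference(target, name)
        return d1
    d = self.stall(2)
    d.addCallback(_register)
    def _done(res):
        self.failUnlessEqual(len(connects), 1)
        self.rc.stopConnecting()
    d.addCallback(_done)
    return d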

View File

@ -0,0 +1,52 @@
# -*- test-case-name: foolscap.test.test_registration -*-
from twisted.trial import unittest
import weakref, gc
from foolscap import UnauthenticatedTub
from foolscap.test.common import HelperTarget
class Registration(unittest.TestCase):
def testStrong(self):
t1 = HelperTarget()
tub = UnauthenticatedTub()
tub.setLocation("bogus:1234567")
u1 = tub.registerReference(t1)
results = []
w1 = weakref.ref(t1, results.append)
del t1
gc.collect()
# t1 should still be alive
self.failUnless(w1())
self.failUnlessEqual(results, [])
tub.unregisterReference(w1())
gc.collect()
# now it should be dead
self.failIf(w1())
self.failUnlessEqual(len(results), 1)
def testWeak(self):
t1 = HelperTarget()
tub = UnauthenticatedTub()
tub.setLocation("bogus:1234567")
name = tub._assignName(t1)
url = tub.buildURL(name)
results = []
w1 = weakref.ref(t1, results.append)
del t1
gc.collect()
# t1 should be dead
self.failIf(w1())
self.failUnlessEqual(len(results), 1)
def TODO_testNonweakrefable(self):
# what happens when we register a non-Referenceable? We don't really
# need this yet, but as registerReference() becomes more generalized
# into just plain register(), we'll want to provide references to
# Copyables and ordinary data structures too. Let's just test that
# this doesn't cause an error.
target = []
tub = UnauthenticatedTub()
tub.setLocation("bogus:1234567")
url = tub.registerReference(target)

View File

@ -0,0 +1,480 @@
import sets, re
from twisted.trial import unittest
from foolscap import schema, copyable
from foolscap.tokens import Violation
from foolscap.constraint import IConstraint
from foolscap.remoteinterface import RemoteMethodSchema, \
RemoteInterfaceConstraint, LocalInterfaceConstraint
from foolscap.referenceable import RemoteReferenceTracker, \
RemoteReference, Referenceable
from foolscap.test import common
have_builtin_set = False
try:
set
have_builtin_set = True
except NameError:
pass # oh well
class Dummy:
pass
HEADER = 64
INTSIZE = HEADER+1
STR10 = HEADER+1+10
class ConformTest(unittest.TestCase):
"""This tests how Constraints are asserted on outbound objects (where the
object already exists). Inbound constraints are checked in
test_banana.InboundByteStream in the various testConstrainedFoo methods.
"""
def conforms(self, c, obj):
c.checkObject(obj, False)
def violates(self, c, obj):
self.assertRaises(schema.Violation, c.checkObject, obj, False)
def assertSize(self, c, maxsize):
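        # note: the size checks are currently disabled; the early return
        # below makes the assertion unreachable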
return
self.assertEquals(c.maxSize(), maxsize)
def assertDepth(self, c, maxdepth):
self.assertEquals(c.maxDepth(), maxdepth)
def assertUnboundedSize(self, c):
self.assertRaises(schema.UnboundedSchema, c.maxSize)
def assertUnboundedDepth(self, c):
self.assertRaises(schema.UnboundedSchema, c.maxDepth)
def testAny(self):
c = schema.Constraint()
self.assertUnboundedSize(c)
self.assertUnboundedDepth(c)
def testInteger(self):
# s_int32_t
c = schema.IntegerConstraint()
self.assertSize(c, INTSIZE)
self.assertDepth(c, 1)
self.conforms(c, 123)
self.violates(c, 2**64)
self.conforms(c, 0)
self.conforms(c, 2**31-1)
self.violates(c, 2**31)
self.conforms(c, -2**31)
self.violates(c, -2**31-1)
self.violates(c, "123")
self.violates(c, Dummy())
self.violates(c, None)
def testLargeInteger(self):
c = schema.IntegerConstraint(64)
self.assertSize(c, INTSIZE+64)
self.assertDepth(c, 1)
self.conforms(c, 123)
self.violates(c, "123")
self.violates(c, None)
self.conforms(c, 2**512-1)
self.violates(c, 2**512)
self.conforms(c, -2**512+1)
self.violates(c, -2**512)
def testString(self):
c = schema.StringConstraint(10)
self.assertSize(c, STR10)
self.assertSize(c, STR10) # twice to test seen=[] logic
self.assertDepth(c, 1)
self.conforms(c, "I'm short")
self.violates(c, "I am too long")
self.conforms(c, "a" * 10)
self.violates(c, "a" * 11)
self.violates(c, 123)
self.violates(c, Dummy())
self.violates(c, None)
c2 = schema.StringConstraint(15, 10)
self.violates(c2, "too short")
self.conforms(c2, "long enough")
self.violates(c2, "this is too long")
c3 = schema.StringConstraint(regexp="needle")
self.violates(c3, "no present")
self.conforms(c3, "needle in a haystack")
c4 = schema.StringConstraint(regexp="[abc]+")
self.violates(c4, "spelled entirely without those letters")
self.conforms(c4, "add better cases")
c5 = schema.StringConstraint(regexp=re.compile("\d+\s\w+"))
self.conforms(c5, ": 123 boo")
self.violates(c5, "more than 1 spaces")
self.violates(c5, "letters first 123")
def testBool(self):
c = schema.BooleanConstraint()
self.assertSize(c, 147)
self.assertDepth(c, 2)
self.conforms(c, False)
self.conforms(c, True)
self.violates(c, 0)
self.violates(c, 1)
self.violates(c, "vrai")
self.violates(c, Dummy())
self.violates(c, None)
def testPoly(self):
c = schema.PolyConstraint(schema.StringConstraint(100),
schema.IntegerConstraint())
self.assertSize(c, 165)
self.assertDepth(c, 1)
def testTuple(self):
c = schema.TupleConstraint(schema.StringConstraint(10),
schema.StringConstraint(100),
schema.IntegerConstraint() )
self.conforms(c, ("hi", "there buddy, you're number", 1))
self.violates(c, "nope")
self.violates(c, ("string", "string", "NaN"))
self.violates(c, ("string that is too long", "string", 1))
self.violates(c, ["Are tuples", "and lists the same?", 0])
self.assertSize(c, 72+75+165+73)
self.assertDepth(c, 2)
def testNestedTuple(self):
inner = schema.TupleConstraint(schema.StringConstraint(10),
schema.IntegerConstraint())
self.assertSize(inner, 72+75+73)
self.assertDepth(inner, 2)
outer = schema.TupleConstraint(schema.StringConstraint(100),
inner)
self.assertSize(outer, 72+165 + 72+75+73)
self.assertDepth(outer, 3)
self.conforms(inner, ("hi", 2))
self.conforms(outer, ("long string here", ("short", 3)))
self.violates(outer, (("long string here", ("short", 3, "extra"))))
self.violates(outer, (("long string here", ("too long string", 3))))
outer2 = schema.TupleConstraint(inner, inner)
self.assertSize(outer2, 72+ 2*(72+75+73))
self.assertDepth(outer2, 3)
self.conforms(outer2, (("hi", 1), ("there", 2)) )
self.violates(outer2, ("hi", 1, "flat", 2) )
def testUnbounded(self):
big = schema.StringConstraint(None)
self.assertUnboundedSize(big)
self.assertDepth(big, 1)
self.conforms(big, "blah blah blah blah blah" * 1024)
self.violates(big, 123)
bag = schema.TupleConstraint(schema.IntegerConstraint(),
big)
self.assertUnboundedSize(bag)
self.assertDepth(bag, 2)
polybag = schema.PolyConstraint(schema.IntegerConstraint(),
bag)
self.assertUnboundedSize(polybag)
self.assertDepth(polybag, 2)
def testRecursion(self):
# we have to fiddle with PolyConstraint's innards
value = schema.ChoiceOf(schema.StringConstraint(),
schema.IntegerConstraint(),
# will add 'value' here
)
self.assertSize(value, 1065)
self.assertDepth(value, 1)
self.conforms(value, "key")
self.conforms(value, 123)
self.violates(value, [])
mapping = schema.TupleConstraint(schema.StringConstraint(10),
value)
self.assertSize(mapping, 72+75+1065)
self.assertDepth(mapping, 2)
self.conforms(mapping, ("name", "key"))
self.conforms(mapping, ("name", 123))
value.alternatives = value.alternatives + (mapping,)
self.assertUnboundedSize(value)
self.assertUnboundedDepth(value)
self.assertUnboundedSize(mapping)
self.assertUnboundedDepth(mapping)
# but note that the constraint can still be applied
self.conforms(mapping, ("name", 123))
self.conforms(mapping, ("name", "key"))
self.conforms(mapping, ("name", ("key", "value")))
self.conforms(mapping, ("name", ("key", 123)))
self.violates(mapping, ("name", ("key", [])))
l = []
l.append(l)
self.violates(mapping, ("name", l))
def testList(self):
l = schema.ListOf(schema.StringConstraint(10))
self.assertSize(l, 71 + 30*75)
self.assertDepth(l, 2)
self.conforms(l, ["one", "two", "three"])
self.violates(l, ("can't", "fool", "me"))
self.violates(l, ["but", "perspicacity", "is too long"])
self.violates(l, [0, "numbers", "allowed"])
self.conforms(l, ["short", "sweet"])
l2 = schema.ListOf(schema.StringConstraint(10), 3)
self.assertSize(l2, 71 + 3*75)
self.assertDepth(l2, 2)
self.conforms(l2, ["the number", "shall be", "three"])
self.violates(l2, ["five", "is", "...", "right", "out"])
l3 = schema.ListOf(schema.StringConstraint(10), None)
self.assertUnboundedSize(l3)
self.assertDepth(l3, 2)
self.conforms(l3, ["long"] * 35)
self.violates(l3, ["number", 1, "rule", "is", 0, "numbers"])
l4 = schema.ListOf(schema.StringConstraint(10), 3, 3)
self.conforms(l4, ["three", "is", "good"])
self.violates(l4, ["but", "four", "is", "bad"])
self.violates(l4, ["two", "too"])
def testSet(self):
l = schema.SetOf(schema.IntegerConstraint(), 3)
self.assertDepth(l, 2)
self.conforms(l, sets.Set([]))
self.conforms(l, sets.Set([1]))
self.conforms(l, sets.Set([1,2,3]))
self.violates(l, sets.Set([1,2,3,4]))
self.violates(l, sets.Set(["not a number"]))
self.conforms(l, sets.ImmutableSet([]))
self.conforms(l, sets.ImmutableSet([1]))
self.conforms(l, sets.ImmutableSet([1,2,3]))
self.violates(l, sets.ImmutableSet([1,2,3,4]))
self.violates(l, sets.ImmutableSet(["not a number"]))
if have_builtin_set:
self.conforms(l, set([]))
self.conforms(l, set([1]))
self.conforms(l, set([1,2,3]))
self.violates(l, set([1,2,3,4]))
self.violates(l, set(["not a number"]))
self.conforms(l, frozenset([]))
self.conforms(l, frozenset([1]))
self.conforms(l, frozenset([1,2,3]))
self.violates(l, frozenset([1,2,3,4]))
self.violates(l, frozenset(["not a number"]))
l = schema.SetOf(schema.IntegerConstraint(), 3, True)
self.conforms(l, sets.Set([]))
self.conforms(l, sets.Set([1]))
self.conforms(l, sets.Set([1,2,3]))
self.violates(l, sets.Set([1,2,3,4]))
self.violates(l, sets.Set(["not a number"]))
self.violates(l, sets.ImmutableSet([]))
self.violates(l, sets.ImmutableSet([1]))
self.violates(l, sets.ImmutableSet([1,2,3]))
self.violates(l, sets.ImmutableSet([1,2,3,4]))
self.violates(l, sets.ImmutableSet(["not a number"]))
if have_builtin_set:
self.conforms(l, set([]))
self.conforms(l, set([1]))
self.conforms(l, set([1,2,3]))
self.violates(l, set([1,2,3,4]))
self.violates(l, set(["not a number"]))
self.violates(l, frozenset([]))
self.violates(l, frozenset([1]))
self.violates(l, frozenset([1,2,3]))
self.violates(l, frozenset([1,2,3,4]))
self.violates(l, frozenset(["not a number"]))
l = schema.SetOf(schema.IntegerConstraint(), 3, False)
self.violates(l, sets.Set([]))
self.violates(l, sets.Set([1]))
self.violates(l, sets.Set([1,2,3]))
self.violates(l, sets.Set([1,2,3,4]))
self.violates(l, sets.Set(["not a number"]))
self.conforms(l, sets.ImmutableSet([]))
self.conforms(l, sets.ImmutableSet([1]))
self.conforms(l, sets.ImmutableSet([1,2,3]))
self.violates(l, sets.ImmutableSet([1,2,3,4]))
self.violates(l, sets.ImmutableSet(["not a number"]))
if have_builtin_set:
self.violates(l, set([]))
self.violates(l, set([1]))
self.violates(l, set([1,2,3]))
self.violates(l, set([1,2,3,4]))
self.violates(l, set(["not a number"]))
self.conforms(l, frozenset([]))
self.conforms(l, frozenset([1]))
self.conforms(l, frozenset([1,2,3]))
self.violates(l, frozenset([1,2,3,4]))
self.violates(l, frozenset(["not a number"]))
def testDict(self):
d = schema.DictOf(schema.StringConstraint(10),
schema.IntegerConstraint(),
maxKeys=4)
self.assertDepth(d, 2)
self.conforms(d, {"a": 1, "b": 2})
self.conforms(d, {"foo": 123, "bar": 345, "blah": 456, "yar": 789})
self.violates(d, None)
self.violates(d, 12)
self.violates(d, ["nope"])
self.violates(d, ("nice", "try"))
self.violates(d, {1:2, 3:4})
self.violates(d, {"a": "b"})
self.violates(d, {"a": 1, "b": 2, "c": 3, "d": 4, "toomuch": 5})
def testAttrDict(self):
d = copyable.AttributeDictConstraint(('a', int), ('b', str))
self.conforms(d, {"a": 1, "b": "string"})
self.violates(d, {"a": 1, "b": 2})
self.violates(d, {"a": 1, "b": "string", "c": "is a crowd"})
d = copyable.AttributeDictConstraint(('a', int), ('b', str),
ignoreUnknown=True)
self.conforms(d, {"a": 1, "b": "string"})
self.violates(d, {"a": 1, "b": 2})
self.conforms(d, {"a": 1, "b": "string", "c": "is a crowd"})
d = copyable.AttributeDictConstraint(attributes={"a": int, "b": str})
self.conforms(d, {"a": 1, "b": "string"})
self.violates(d, {"a": 1, "b": 2})
self.violates(d, {"a": 1, "b": "string", "c": "is a crowd"})
class CreateTest(unittest.TestCase):
def check(self, obj, expected):
self.failUnless(isinstance(obj, expected))
def testMakeConstraint(self):
make = IConstraint
c = make(int)
self.check(c, schema.IntegerConstraint)
self.failUnlessEqual(c.maxBytes, -1)
c = make(str)
self.check(c, schema.StringConstraint)
self.failUnlessEqual(c.maxLength, 1000)
self.check(make(bool), schema.BooleanConstraint)
self.check(make(float), schema.NumberConstraint)
self.check(make(schema.NumberConstraint()), schema.NumberConstraint)
c = make((int, str))
self.check(c, schema.TupleConstraint)
self.check(c.constraints[0], schema.IntegerConstraint)
self.check(c.constraints[1], schema.StringConstraint)
c = make(common.RIHelper)
self.check(c, RemoteInterfaceConstraint)
self.failUnlessEqual(c.interface, common.RIHelper)
c = make(common.IFoo)
self.check(c, LocalInterfaceConstraint)
self.failUnlessEqual(c.interface, common.IFoo)
c = make(Referenceable)
self.check(c, RemoteInterfaceConstraint)
self.failUnlessEqual(c.interface, None)
class Arguments(unittest.TestCase):
def test_arguments(self):
def foo(a=int, b=bool, c=int): return str
r = RemoteMethodSchema(method=foo)
getpos = r.getPositionalArgConstraint
getkw = r.getKeywordArgConstraint
self.failUnless(isinstance(getpos(0)[1], schema.IntegerConstraint))
self.failUnless(isinstance(getpos(1)[1], schema.BooleanConstraint))
self.failUnless(isinstance(getpos(2)[1], schema.IntegerConstraint))
self.failUnless(isinstance(getkw("a")[1], schema.IntegerConstraint))
self.failUnless(isinstance(getkw("b")[1], schema.BooleanConstraint))
self.failUnless(isinstance(getkw("c")[1], schema.IntegerConstraint))
self.failUnless(isinstance(r.getResponseConstraint(),
schema.StringConstraint))
self.failUnless(isinstance(getkw("c", 1, [])[1],
schema.IntegerConstraint))
self.failUnlessRaises(schema.Violation, getkw, "a", 1, [])
self.failUnlessRaises(schema.Violation, getkw, "b", 1, ["b"])
self.failUnlessRaises(schema.Violation, getkw, "a", 2, [])
self.failUnless(isinstance(getkw("c", 2, [])[1],
schema.IntegerConstraint))
self.failUnless(isinstance(getkw("c", 0, ["a", "b"])[1],
schema.IntegerConstraint))
try:
r.checkAllArgs((1,True,2), {}, False)
r.checkAllArgs((), {"a":1, "b":False, "c":2}, False)
r.checkAllArgs((1,), {"b":False, "c":2}, False)
r.checkAllArgs((1,True), {"c":3}, False)
r.checkResults("good", False)
except schema.Violation:
self.fail("that shouldn't have raised a Violation")
self.failUnlessRaises(schema.Violation, # 2 is not bool
r.checkAllArgs, (1,2,3), {}, False)
self.failUnlessRaises(schema.Violation, # too many
r.checkAllArgs, (1,True,3,4), {}, False)
self.failUnlessRaises(schema.Violation, # double "a"
r.checkAllArgs, (1,), {"a":1, "b":True, "c": 3},
False)
self.failUnlessRaises(schema.Violation, # missing required "b"
r.checkAllArgs, (1,), {"c": 3}, False)
self.failUnlessRaises(schema.Violation, # missing required "a"
r.checkAllArgs, (), {"b":True, "c": 3}, False)
self.failUnlessRaises(schema.Violation,
r.checkResults, 12, False)
class Interfaces(unittest.TestCase):
def check_inbound(self, obj, constraint):
try:
constraint.checkObject(obj, True)
except Violation, f:
self.fail("constraint was violated: %s" % f)
def check_outbound(self, obj, constraint):
try:
constraint.checkObject(obj, False)
except Violation, f:
self.fail("constraint was violated: %s" % f)
def violates_inbound(self, obj, constraint):
try:
constraint.checkObject(obj, True)
except Violation, f:
return
self.fail("constraint wasn't violated")
def violates_outbound(self, obj, constraint):
try:
constraint.checkObject(obj, False)
except Violation, f:
return
self.fail("constraint wasn't violated")
def test_referenceable(self):
h = common.HelperTarget()
c1 = RemoteInterfaceConstraint(common.RIHelper)
c2 = RemoteInterfaceConstraint(common.RIMyTarget)
self.violates_inbound("bogus", c1)
self.violates_outbound("bogus", c1)
self.check_outbound(h, c1)
self.violates_inbound(h, c1)
self.violates_inbound(h, c2)
self.violates_outbound(h, c2)
def test_remotereference(self):
# we need to create a fake RemoteReference here
parent, clid, url = None, 0, ""
interfaceName = common.RIHelper.__remote_name__
tracker = RemoteReferenceTracker(parent, clid, url, interfaceName)
rr = RemoteReference(tracker)
c1 = RemoteInterfaceConstraint(common.RIHelper)
c2 = RemoteInterfaceConstraint(common.RIMyTarget)
self.check_inbound(rr, c1)
self.violates_outbound(rr, c1)
self.violates_inbound(rr, c2)
self.violates_outbound(rr, c2)

View File

@ -0,0 +1,30 @@
from twisted.trial import unittest
from foolscap import referenceable
class URL(unittest.TestCase):
def testURL(self):
sr = referenceable.SturdyRef("pb://1234@127.0.0.1:9900/name")
self.failUnlessEqual(sr.tubID, "1234")
self.failUnlessEqual(sr.locationHints, ["127.0.0.1:9900"])
self.failUnlessEqual(sr.name, "name")
def testCompare(self):
sr1 = referenceable.SturdyRef("pb://1234@127.0.0.1:9900/name")
sr2 = referenceable.SturdyRef("pb://1234@127.0.0.1:9999/name")
# only tubID and name matter
self.failUnlessEqual(sr1, sr2)
sr1 = referenceable.SturdyRef("pb://9999@127.0.0.1:9900/name")
sr2 = referenceable.SturdyRef("pb://1234@127.0.0.1:9900/name")
self.failIfEqual(sr1, sr2)
sr1 = referenceable.SturdyRef("pb://1234@127.0.0.1:9900/name1")
sr2 = referenceable.SturdyRef("pb://1234@127.0.0.1:9900/name2")
self.failIfEqual(sr1, sr2)
def testLocationHints(self):
url = "pb://ABCD@127.0.0.1:9900,remote:8899/name"
sr = referenceable.SturdyRef(url)
self.failUnlessEqual(sr.tubID, "ABCD")
self.failUnlessEqual(sr.locationHints, ["127.0.0.1:9900",
"remote:8899"])
self.failUnlessEqual(sr.name, "name")

View File

@ -0,0 +1,40 @@
# -*- test-case-name: foolscap.test.test_tub -*-
import os.path
from twisted.trial import unittest
crypto_available = False
try:
from foolscap import crypto
crypto_available = crypto.available
except ImportError:
pass
from foolscap import Tub
class TestCertFile(unittest.TestCase):
def test_generate(self):
t = Tub()
certdata = t.getCertData()
self.failUnless("BEGIN CERTIFICATE" in certdata)
self.failUnless("BEGIN RSA PRIVATE KEY" in certdata)
def test_certdata(self):
t1 = Tub()
data1 = t1.getCertData()
t2 = Tub(certData=data1)
data2 = t2.getCertData()
self.failUnless(data1 == data2)
def test_certfile(self):
fn = "test_tub.TestCertFile.certfile"
t1 = Tub(certFile=fn)
self.failUnless(os.path.exists(fn))
data1 = t1.getCertData()
t2 = Tub(certFile=fn)
data2 = t2.getCertData()
self.failUnless(data1 == data2)
if not crypto_available:
del TestCertFile
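# A minimal persistence sketch based on the tests above (hypothetical
# filename; assumes the same crypto support checked for at import time):
def _certfile_sketch():
    from foolscap import Tub
    t = Tub(certFile="my-tub.pem")   # created on the first run, reused afterwards
    return t.getCertData()           # identical bytes on every later run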

View File

@ -0,0 +1,369 @@
from twisted.python.failure import Failure
from zope.interface import Attribute, Interface
# delimiter characters.
LIST = chr(0x80) # old
INT = chr(0x81)
STRING = chr(0x82)
NEG = chr(0x83)
FLOAT = chr(0x84)
# "optional" -- these might be refused by a low-level implementation.
LONGINT = chr(0x85) # old
LONGNEG = chr(0x86) # old
# really optional; this is part of the 'pb' vocabulary
VOCAB = chr(0x87)
# newbanana tokens
OPEN = chr(0x88)
CLOSE = chr(0x89)
ABORT = chr(0x8A)
ERROR = chr(0x8D)
PING = chr(0x8E)
PONG = chr(0x8F)
tokenNames = {
LIST: "LIST",
INT: "INT",
STRING: "STRING",
NEG: "NEG",
FLOAT: "FLOAT",
LONGINT: "LONGINT",
LONGNEG: "LONGNEG",
VOCAB: "VOCAB",
OPEN: "OPEN",
CLOSE: "CLOSE",
ABORT: "ABORT",
ERROR: "ERROR",
PING: "PING",
PONG: "PONG",
}
SIZE_LIMIT = 1000 # default limit on the body length of long tokens (STRING,
# LONGINT, LONGNEG, ERROR)
class InvalidRemoteInterface(Exception):
pass
class UnknownSchemaType(Exception):
pass
class Violation(Exception):
"""This exception is raised in response to a schema violation. It
indicates that the incoming token stream has violated a constraint
imposed by the recipient. The current Unslicer is abandoned and the
error is propagated upwards to the enclosing Unslicer parent by
providing a BananaFailure object to the parent's .receiveChild method.
All remaining tokens for the current Unslicer are to be dropped.
"""
""".where: this string describes which node of the object graph was
being handled when the exception took place."""
where = ""
def setLocation(self, where):
self.where = where
def getLocation(self):
return self.where
def prependLocation(self, prefix):
if self.where:
self.where = prefix + " " + self.where
else:
self.where = prefix
def appendLocation(self, suffix):
if self.where:
self.where = self.where + " " + suffix
else:
self.where = suffix
def __str__(self):
if self.where:
return "Violation (%s): %s" % (self.where, self.args)
else:
return "Violation: %s" % (self.args,)
class BananaError(Exception):
"""This exception is raised in response to a fundamental protocol
violation. The connection should be dropped immediately.
.where is an optional string that describes the node of the object graph
where the failure was noticed.
"""
where = None
def __str__(self):
if self.where:
return "BananaError(in %s): %s" % (self.where, self.args)
else:
return "BananaError: %s" % (self.args,)
class NegotiationError(Exception):
pass
class RemoteNegotiationError(Exception):
"""The other end hung up on us because they had a NegotiationError on
their side."""
pass
class PBError(Exception):
pass
class BananaFailure(Failure):
"""This is a marker subclass of Failure, to let Unslicer.receiveChild
distinguish between an unserialized Failure instance and a failure in
a child Unslicer"""
pass
class ISlicer(Interface):
"""I know how to slice objects into tokens."""
sendOpen = Attribute(\
"""True if an OPEN/CLOSE token pair should be sent around the Slicer's body
tokens. Only special-purpose Slicers (like the RootSlicer) should use False.
""")
trackReferences = Attribute(\
"""True if the object we slice is referenceable: i.e. it is useful or
necessary to send multiple copies as a single instance and a bunch of
References, rather than as separate copies. Instances are referenceable, as
are mutable containers like lists.""")
streamable = Attribute(\
"""True if children of this object are allowed to use Deferreds to stall
production of new tokens. This must be set in slice() before yielding each
child object, and affects that child and all descendants. Streaming is only
allowed if the parent also allows streaming: if slice() is called with
streamable=False, then self.streamable must be False too. It can be changed
from within the slice() generator at any time as long as this restriction is
obeyed.
This attribute is read when each child Slicer is started.""")
def slice(streamable, banana):
"""Return an iterator which provides Index Tokens and the Body
Tokens of the object's serialized form. This is frequently
implemented with a generator (i.e. 'yield' appears in the body of
this function). Do not yield the OPEN or the CLOSE token, those will
be handled elsewhere.
If a Violation exception is raised, slicing will cease. An ABORT
token followed by a CLOSE token will be emitted.
If 'streamable' is True, the iterator may yield a Deferred to
indicate that slicing should wait until the Deferred is fired. If
the Deferred is errbacked, the connection will be dropped. TODO: it
should be possible to errback with a Violation."""
def registerReference(refid, obj):
"""Register the relationship between 'refid' (a number taken from
the cumulative count of OPEN tokens sent over our connection: 0 is
the object described by the very first OPEN sent over the wire) and
the object. If the object is sent a second time, a Reference may be
used in its place.
Slicers usually delegate this function upwards to the RootSlicer, but
it can be handled at any level to allow local scoping of references
(they might only be valid within a single RPC invocation, for
example).
This method is *not* allowed to raise a Violation, as that will mess
up the transmit logic. If it raises any other exception, the
connection will be dropped."""
def childAborted(f):
"""Notify the Slicer that one of its child slicers (as produced by
its .slice iterator) has caused an error. If the slicer got started,
it has now emitted an ABORT token and terminated its token stream.
If it did not get started (usually because the child object was
unserializable), there has not yet been any trace of the object in
the token stream.
The corresponding Unslicer (receiving this token stream) will get a
BananaFailure and is likely to ignore any remaining tokens from us,
so it may be reasonable for the parent Slicer to give up as well.
If the Slicer wishes to abandon its own sequence, it should simply
return the failure object passed in. If it wants to absorb the
error, it should return None."""
def slicerForObject(obj):
"""Get a new Slicer for some child object. Slicers usually delegate
this method up to the RootSlicer. References are handled by
producing a ReferenceSlicer here. These references can have various
scopes.
If something on the stack does not want the object to be sent, it can
raise a Violation exception. This is the 'taster' function."""
def describe():
"""Return a short string describing where in the object tree this
slicer is sitting, relative to its parent. These strings are
obtained from every slicer in the stack, and joined to describe
where any problems occurred."""
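# To make the generator style described in slice() concrete: a schematic
# provider of this interface for a hypothetical two-element "pair" type.
# The class name and the "pair" opentype are invented; real slicers derive
# from base classes elsewhere in foolscap and supply the remaining methods.
from zope.interface import implements
class _PairSlicerSketch(object):
    implements(ISlicer)
    sendOpen = True          # body tokens get wrapped in OPEN/CLOSE
    trackReferences = True   # allow References instead of duplicate copies
    streamable = False
    def __init__(self, pair):
        self.pair = pair
    def slice(self, streamable, banana):
        self.streamable = streamable
        yield "pair"         # index token: the opentype
        yield self.pair[0]   # children are handed back to banana to be sliced
        yield self.pair[1]
    def describe(self):
        return "<pair>"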
class IRootSlicer(Interface):
def allowStreaming(streamable):
"""Specify whether or not child Slicers will be allowed to stream."""
def connectionLost(why):
"""Called when the transport is closed. The RootSlicer may choose to
abandon objects being sent here."""
class IUnslicer(Interface):
# .parent
# start/receiveChild/receiveClose/finish are
# the main "here are some tokens, make an object out of them" entry
# points used by Unbanana.
# start/receiveChild can call self.protocol.abandonUnslicer(failure,
# self) to tell the protocol that the unslicer has given up on life and
# all its remaining tokens should be discarded. The failure will be
# given to the late unslicer's parent in lieu of the object normally
# returned by receiveClose.
# start/receiveChild/receiveClose/finish may raise a Violation
# exception, which tells the protocol that this object is contaminated
and should be abandoned. A BananaFailure will be passed to its
# parent.
# Note, however, that it is not valid to both call abandonUnslicer *and*
# raise a Violation. That would discard too much.
def setConstraint(constraint):
"""Add a constraint for this unslicer. The unslicer will enforce
this constraint upon all incoming data. The constraint must be of an
appropriate type (a ListUnslicer will only accept a ListConstraint,
etc.). It must not be None. To leave us unconstrained, do not call
this method.
If this method is not called, the Unslicer will accept any valid
banana as input, which probably means there is no limit on the
number of bytes it will accept (and therefore on the memory it could
be made to consume) before it finally accepts or rejects the input.
"""
def start(count):
"""Called to initialize the new slice. The 'count' argument is the
reference id: if this object might be shared (and therefore the
target of a 'reference' token), it should call
self.protocol.setObject(count, obj) with the object being created.
If this object is not available yet (tuples), it should save a
Deferred there instead.
"""
def checkToken(typebyte, size):
"""Check to see if the given token is acceptable (does it conform to
the constraint?). It will not be asked about ABORT or CLOSE tokens,
but it *will* be asked about OPEN. It should enforce a length limit
for long tokens (STRING and LONGINT/LONGNEG types). If STRING is
acceptable, then VOCAB should be too. It should return None if the
token and the size are acceptable. Should raise Violation if the
schema indicates the token is not acceptable. Should raise
BananaError if the type byte violates the basic Banana protocol. (if
no schema is in effect, this should never raise Violation, but might
still raise BananaError).
"""
def openerCheckToken(typebyte, size, opentype):
"""'typebyte' is the type of an incoming index token. 'size' is the
value of the header associated with this typebyte. 'opentype' is a list
of open tokens that we've received so far, not including the one
that this token hopes to create.
This method should ask the current opener if this index token is
acceptable, and is used in lieu of checkToken() when the receiver is
in the index phase. Usually implemented by calling
self.opener.openerCheckToken, thus delegating the question to the
RootUnslicer.
"""
def doOpen(opentype):
"""opentype is a tuple. Return None if more index tokens are
required. Check to see if this kind of child object conforms to the
constraint, raise Violation if not. Create a new Unslicer (usually
by delegating to self.parent.doOpen, up to the RootUnslicer). Set a
constraint on the child unslicer, if any.
"""
def receiveChild(childobject,
ready_deferred):
"""'childobject' is being handed to this unslicer. It may be a
primitive type (number or string), or a composite type produced by
another Unslicer. It might also be a Deferred, which indicates that
the actual object is not ready (perhaps a tuple with an element that
is not yet referenceable), in which case you should add a callback
to it that will fill in the appropriate object later. This callback
is required to return the object when it is done, so multiple such
callbacks can be chained. The childobject/ready_deferred argument
pair is taken directly from the output of receiveClose(). If
ready_deferred is non-None, you should return a dependent Deferred
from your own receiveClose method."""
def reportViolation(bf):
"""You have received an error instead of a child object. If you wish
to give up and propagate the error upwards, return the BananaFailure
object you were just given. To absorb the error and keep going with
your sequence, return None."""
def receiveClose():
"""Called when the Close token is received. Returns a tuple of
(object/referenceable-deferred, complete-deferred), or a
BananaFailure if something went wrong. There are four potential
cases::
(obj, None): the object is complete and ready to go
(d1, None): the object cannot be referenced yet, probably
because it is an immutable container, and one of its
children cannot be referenced yet. The deferred will
fire by the time the cycle has been fully deserialized,
with the object as its argument.
(obj, d2): the object can be referenced, but it is not yet
complete, probably because some component of it is
'slow' (see below). The Deferred will fire (with an
argument of None) when the object is ready to be used.
It is not guaranteed to fire by the time the enclosing
top-level object has finished deserializing.
(d1, d2): the object cannot yet be referenced, and even if it could
be, it would not yet be ready for use. Any potential users
should wait until both deferreds fire before using it.
The first deferred (d1) is guaranteed to fire before the top-most
enclosing object (a CallUnslicer, for PB methods) is closed. (if it
does not fire, that indicates a broken cycle). It is present to
handle cycles that include immutable containers, like tuples.
Mutable containers *must* return a reference to an object (even if
it is not yet ready to be used, because it contains placeholders to
tuples that have not yet been created), otherwise those cycles
cannot be broken and the object graph will not be reconstructable.
The second (d2) has no such guarantees about when it will fire. It
indicates a dependence upon 'slow' external events. The first use
case for such 'slow' objects is a globally-referenceable object
which requires a new Broker connection before it can be used, so the
Deferred will not fire until a TCP connection has been established
and the first stages of PB negotiation have been completed.
If necessary, unbanana.setObject should be called, then the Deferred
created in start() should be fired with the new object."""
def finish():
"""Called when the unslicer is popped off the stack. This is called
even if the pop is because of an exception. The unslicer should
perform cleanup, including firing the Deferred with a
BananaFailure if the object it is creating could not be created.
TODO: can receiveClose and finish be merged? Or should the child
object be returned from finish() instead of receiveClose?
"""
def describe():
"""Return a short string describing where in the object tree this
unslicer is sitting, relative to its parent. These strings are
obtained from every unslicer in the stack, and joined to describe
where any problems occurred."""
def where():
"""This returns a string that describes the location of this
unslicer, starting at the root of the object tree."""
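# And the receiving side, in the same schematic spirit: a hypothetical
# unslicer for the "pair" opentype returning the simplest receiveClose()
# case, (obj, None), meaning the object is complete and ready to go. Names
# are invented and most of the interface (doOpen, setConstraint, finish,
# reportViolation, ...) is omitted for brevity.
class _PairUnslicerSketch(object):
    def start(self, count):
        self.children = []
    def checkToken(self, typebyte, size):
        return None              # no constraint set: accept any valid token
    def receiveChild(self, childobject, ready_deferred):
        self.children.append(childobject)
    def receiveClose(self):
        return tuple(self.children), None   # the object is complete and ready
    def describe(self):
        return "<pair>"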

View File

@ -0,0 +1,34 @@
import sha
# here is the list of initial vocab tables. If the two ends negotiate to use
# initial-vocab-table-index N, then both sides will start with the words from
# INITIAL_VOCAB_TABLES[n] for their VOCABized tokens.
vocab_v0 = []
vocab_v1 = [ # all opentypes used in 0.0.6
"none", "boolean", "reference",
"dict", "list", "tuple", "set", "immutable-set",
"unicode", "set-vocab", "add-vocab",
"call", "arguments", "answer", "error",
"my-reference", "your-reference", "their-reference", "copyable",
# these are only used by storage.py
"instance", "module", "class", "method", "function",
# I'm not sure this one is actually used anywhere, but the first 127 of
# these are basically free.
"attrdict",
]
INITIAL_VOCAB_TABLES = { 0: vocab_v0, 1: vocab_v1 }
# to ensure both sides agree on the actual words, we can hash the vocab table
# into a short string. This is included in the negotiation decision and
# compared by the receiving side.
def hashVocabTable(table_index):
data = "\x00".join(INITIAL_VOCAB_TABLES[table_index])
digest = sha.new(data).hexdigest()
return digest[:4]
def getVocabRange():
keys = INITIAL_VOCAB_TABLES.keys()
return min(keys), max(keys)
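# A brief sketch of how the helpers above combine during negotiation
# (hypothetical usage; each end runs the same computation on its own copy):
def _vocab_negotiation_sketch():
    lowest, highest = getVocabRange()   # table indexes this side supports
    chosen = highest                    # in practice: highest index both ends offer
    my_hash = hashVocabTable(chosen)    # 4-character digest of the chosen table
    # the peer sends its own digest; if the two differ, the sides do not share
    # the same word list and negotiation should refuse to use that VOCAB table
    return chosen, my_hash, INITIAL_VOCAB_TABLES[chosen]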

View File

@ -0,0 +1,95 @@
foolscap (0.1.2+) unstable; urgency=low
* bump revision while between releases
-- Brian Warner <warner@lothar.com> Fri, 13 Apr 2007 00:22:10 -0700
foolscap (0.1.2) unstable; urgency=low
* new release
-- Brian Warner <warner@lothar.com> Wed, 4 Apr 2007 12:32:46 -0700
foolscap (0.1.1+) unstable; urgency=low
* bump revision while between releases
-- Brian Warner <warner@lothar.com> Wed, 4 Apr 2007 10:32:22 -0700
foolscap (0.1.1) unstable; urgency=low
* new release
-- Brian Warner <warner@lothar.com> Tue, 3 Apr 2007 20:48:07 -0700
foolscap (0.1.0+) unstable; urgency=low
* bump revision while between releases
-- Brian Warner <warner@lothar.com> Mon, 19 Mar 2007 23:11:35 -0700
foolscap (0.1.0) unstable; urgency=low
* new release
-- Brian Warner <warner@lothar.com> Thu, 15 Mar 2007 16:56:16 -0700
foolscap (0.0.7+) unstable; urgency=low
* bump revision while between releases
-- Brian Warner <warner@lothar.com> Mon, 22 Jan 2007 12:41:18 -0800
foolscap (0.0.7) unstable; urgency=low
* new release
-- Brian Warner <warner@lothar.com> Tue, 16 Jan 2007 12:03:00 -0800
foolscap (0.0.6+) unstable; urgency=low
* bump revision while between releases
-- Brian Warner <warner@lothar.com> Thu, 4 Jan 2007 18:45:04 -0500
foolscap (0.0.6) unstable; urgency=low
* new release
-- Brian Warner <warner@lothar.com> Mon, 18 Dec 2006 12:10:51 -0800
foolscap (0.0.5+) unstable; urgency=low
* bump revision while between releases
-- Brian Warner <warner@lothar.com> Tue, 14 Nov 2006 21:24:17 -0800
foolscap (0.0.5) unstable; urgency=low
* new release
-- Brian Warner <warner@lothar.com> Sat, 4 Nov 2006 23:20:46 -0800
foolscap (0.0.4+) unstable; urgency=low
* bump revision while between releases
-- Brian Warner <warner@lothar.com> Tue, 31 Oct 2006 23:38:34 -0800
foolscap (0.0.4) unstable; urgency=low
* new release
-- Brian Warner <warner@lothar.com> Thu, 26 Oct 2006 00:46:11 -0700
foolscap (0.0.3+) unstable; urgency=low
* new upstream release, put debian packaging in the tree
-- Brian Warner <warner@lothar.com> Tue, 10 Oct 2006 19:16:13 -0700
foolscap (0.0.2) unstable; urgency=low
* New upstream release of an experimental package
-- Brian Warner <warner@lothar.com> Thu, 27 Jul 2006 17:40:15 -0700

View File

@ -0,0 +1 @@
4

View File

@ -0,0 +1,21 @@
Source: foolscap
Section: python
Priority: optional
Maintainer: Brian Warner <warner@lothar.com>
Build-Depends: debhelper (>> 4.1.68), python2.4-dev, python2.4-twisted, cdbs
Standards-Version: 3.7.2
Package: python-foolscap
Architecture: all
Depends: python (>= 2.4), python (<< 2.5), python2.4-foolscap
Description: An object-capability-based RPC system for Twisted Python
This is a dummy package that only depends on python2.4-foolscap
Package: python2.4-foolscap
Architecture: all
Depends: python2.4, python2.4-twisted-core
Recommends: python2.4-twisted-names, python2.4-pyopenssl
Description: An object-capability-based RPC system for Twisted Python
Foolscap (aka "newpb") contains a new RPC system for Twisted Python, combining
capability-based security, secure references, flexible serialization, and
technology to mitigate resource-consumption attacks.

View File

@ -0,0 +1,30 @@
This package was debianized by Brian Warner <warner@twistedmatrix.com>
It was downloaded from http://www.twistedmatrix.com/~warner/Foolscap/
Copyright (c) 2006
Brian Warner
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Copyright Exceptions:
No exceptions are listed in the upstream source.

View File

@ -0,0 +1,61 @@
#!/usr/bin/make -f
# Sample debian/rules that uses debhelper.
# GNU copyright 1997 to 1999 by Joey Hess.
# Uncomment this to turn on verbose mode.
#export DH_VERBOSE=1
# This is the debhelper compatability version to use.
export DH_COMPAT=4
build: build-stamp
build-stamp:
dh_testdir
## Build for all python versions
python2.4 setup.py build
touch build-stamp
clean:
dh_testdir
dh_testroot
rm -f build-stamp
rm -rf build
find . -name '*.pyc' |xargs -r rm
dh_clean
install: build
dh_testdir
dh_testroot
dh_clean -k
dh_installdirs
## Python 2.4
python2.4 setup.py build
python2.4 setup.py install --prefix=debian/python2.4-foolscap/usr
# Build architecture-independent files here.
binary-indep: build install
dh_testdir
dh_testroot
dh_installdocs -i -A NEWS README
dh_installdocs ChangeLog doc/newpb-jobs.txt doc/newpb-todo.txt doc/use-cases.txt doc/using-pb.xhtml doc/copyable.xhtml doc/listings doc/specifications
dh_installchangelogs -i
dh_compress -i -X.py
dh_fixperms
dh_python
dh_installdeb
dh_gencontrol
dh_md5sums
dh_builddeb
binary-arch:
# nothing to do
binary: binary-indep
.PHONY: build clean binary-indep binary-arch binary install

View File

@ -0,0 +1,2 @@
version=3
http://twistedmatrix.com/~warner/Foolscap/ ([\d\.]+)/

View File

@ -0,0 +1,95 @@
foolscap (0.1.2+) unstable; urgency=low
* bump revision while between releases
-- Brian Warner <warner@lothar.com> Fri, 13 Apr 2007 00:22:10 -0700
foolscap (0.1.2) unstable; urgency=low
* new release
-- Brian Warner <warner@lothar.com> Wed, 4 Apr 2007 12:32:46 -0700
foolscap (0.1.1+) unstable; urgency=low
* bump revision while between releases
-- Brian Warner <warner@lothar.com> Wed, 4 Apr 2007 10:32:22 -0700
foolscap (0.1.1) unstable; urgency=low
* new release
-- Brian Warner <warner@lothar.com> Tue, 3 Apr 2007 20:48:07 -0700
foolscap (0.1.0+) unstable; urgency=low
* bump revision while between releases
-- Brian Warner <warner@lothar.com> Mon, 19 Mar 2007 23:11:35 -0700
foolscap (0.1.0) unstable; urgency=low
* new release
-- Brian Warner <warner@lothar.com> Thu, 15 Mar 2007 16:56:16 -0700
foolscap (0.0.7+) unstable; urgency=low
* bump revision while between releases
-- Brian Warner <warner@lothar.com> Mon, 22 Jan 2007 12:41:18 -0800
foolscap (0.0.7) unstable; urgency=low
* new release
-- Brian Warner <warner@lothar.com> Tue, 16 Jan 2007 12:03:00 -0800
foolscap (0.0.6+) unstable; urgency=low
* bump revision while between releases
-- Brian Warner <warner@lothar.com> Thu, 4 Jan 2007 18:45:04 -0500
foolscap (0.0.6) unstable; urgency=low
* new release
-- Brian Warner <warner@lothar.com> Mon, 18 Dec 2006 12:10:51 -0800
foolscap (0.0.5+) unstable; urgency=low
* bump revision while between releases
-- Brian Warner <warner@lothar.com> Tue, 14 Nov 2006 21:24:17 -0800
foolscap (0.0.5) unstable; urgency=low
* new release
-- Brian Warner <warner@lothar.com> Sat, 4 Nov 2006 23:20:46 -0800
foolscap (0.0.4+) unstable; urgency=low
* bump revision while between releases
-- Brian Warner <warner@lothar.com> Tue, 31 Oct 2006 23:38:34 -0800
foolscap (0.0.4) unstable; urgency=low
* new release
-- Brian Warner <warner@lothar.com> Thu, 26 Oct 2006 00:46:30 -0700
foolscap (0.0.3+) unstable; urgency=low
* new upstream release, put debian packaging in the tree
-- Brian Warner <warner@lothar.com> Tue, 10 Oct 2006 19:16:13 -0700
foolscap (0.0.2) unstable; urgency=low
* New upstream release of an experimental package
-- Brian Warner <warner@lothar.com> Thu, 27 Jul 2006 17:40:15 -0700

View File

@ -0,0 +1 @@
5

View File

@ -0,0 +1,17 @@
Source: foolscap
Section: python
Priority: optional
Maintainer: Brian Warner <warner@lothar.com>
Build-Depends: debhelper (>= 5.0.37.3), cdbs (>= 0.4.43), python-central (>= 0.5), python-all-dev, python-twisted-core
XS-Python-Version: all
Standards-Version: 3.7.2
Package: python-foolscap
Architecture: all
Depends: ${python:Depends}, python-twisted-core
Recommends: python-twisted-names, python-pyopenssl
XB-Python-Version: ${python:Versions}
Description: An object-capability-based RPC system for Twisted Python
Foolscap (aka "newpb") contains a new RPC system for Twisted Python, combining
capability-based security, secure references, flexible serialization, and
technology to mitigate resource-consumption attacks.

View File

@ -0,0 +1,30 @@
This package was debianized by Brian Warner <warner@twistedmatrix.com>
It was downloaded from http://www.twistedmatrix.com/~warner/Foolscap/
Copyright (c) 2006
Brian Warner
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Copyright Exceptions:
No exceptions are listed in the upstream source.

View File

@ -0,0 +1 @@
2

View File

@ -0,0 +1,15 @@
#! /usr/bin/make -f
# Uncomment this to turn on verbose mode.
#export DH_VERBOSE=1
DEB_PYTHON_SYSTEM=pycentral
include /usr/share/cdbs/1/rules/debhelper.mk
include /usr/share/cdbs/1/class/python-distutils.mk
install/python-foolscap::
dh_installdocs doc/newpb-jobs.txt doc/newpb-todo.txt doc/use-cases.txt doc/using-pb.xhtml doc/copyable.xhtml doc/listings doc/specifications
clean::
-rm -rf build

View File

@ -0,0 +1,2 @@
version=3
http://twistedmatrix.com/~warner/Foolscap/ ([\d\.]+)/

View File

@ -0,0 +1,95 @@
foolscap (0.1.2+) unstable; urgency=low
* bump revision while between releases
-- Brian Warner <warner@lothar.com> Fri, 13 Apr 2007 00:22:10 -0700
foolscap (0.1.2) unstable; urgency=low
* new release
-- Brian Warner <warner@lothar.com> Wed, 4 Apr 2007 12:32:46 -0700
foolscap (0.1.1+) unstable; urgency=low
* bump revision while between releases
-- Brian Warner <warner@lothar.com> Wed, 4 Apr 2007 10:32:22 -0700
foolscap (0.1.1) unstable; urgency=low
* new release
-- Brian Warner <warner@lothar.com> Tue, 3 Apr 2007 20:48:07 -0700
foolscap (0.1.0+) unstable; urgency=low
* bump revision while between releases
-- Brian Warner <warner@lothar.com> Mon, 19 Mar 2007 23:11:35 -0700
foolscap (0.1.0) unstable; urgency=low
* new release
-- Brian Warner <warner@lothar.com> Thu, 15 Mar 2007 16:56:16 -0700
foolscap (0.0.7+) unstable; urgency=low
* bump revision while between releases
-- Brian Warner <warner@lothar.com> Mon, 22 Jan 2007 12:41:18 -0800
foolscap (0.0.7) unstable; urgency=low
* new release
-- Brian Warner <warner@lothar.com> Tue, 16 Jan 2007 12:03:00 -0800
foolscap (0.0.6+) unstable; urgency=low
* bump revision while between releases
-- Brian Warner <warner@lothar.com> Thu, 4 Jan 2007 18:45:04 -0500
foolscap (0.0.6) unstable; urgency=low
* new release
-- Brian Warner <warner@lothar.com> Mon, 18 Dec 2006 12:10:51 -0800
foolscap (0.0.5+) unstable; urgency=low
* bump revision while between releases
-- Brian Warner <warner@lothar.com> Tue, 14 Nov 2006 21:24:17 -0800
foolscap (0.0.5) unstable; urgency=low
* new release
-- Brian Warner <warner@lothar.com> Sat, 4 Nov 2006 23:20:46 -0800
foolscap (0.0.4+) unstable; urgency=low
* bump revision while between releases
-- Brian Warner <warner@lothar.com> Tue, 31 Oct 2006 23:38:34 -0800
foolscap (0.0.4) unstable; urgency=low
* new release
-- Brian Warner <warner@lothar.com> Thu, 26 Oct 2006 00:46:30 -0700
foolscap (0.0.3+) unstable; urgency=low
* new upstream release, put debian packaging in the tree
-- Brian Warner <warner@lothar.com> Tue, 10 Oct 2006 19:16:13 -0700
foolscap (0.0.2) unstable; urgency=low
* New upstream release of an experimental package
-- Brian Warner <warner@lothar.com> Thu, 27 Jul 2006 17:40:15 -0700

View File

@ -0,0 +1 @@
5

View File

@ -0,0 +1,17 @@
Source: foolscap
Section: python
Priority: optional
Maintainer: Brian Warner <warner@lothar.com>
Build-Depends: debhelper (>= 5.0.38), cdbs (>= 0.4.43), python-central (>= 0.5), python-all-dev, python-twisted-core
XS-Python-Version: all
Standards-Version: 3.7.2
Package: python-foolscap
Architecture: all
Depends: ${python:Depends}, python-twisted-core
Recommends: python-twisted-names, python-pyopenssl
XB-Python-Version: ${python:Versions}
Description: An object-capability-based RPC system for Twisted Python
Foolscap (aka "newpb") contains a new RPC system for Twisted Python, combining
capability-based security, secure references, flexible serialization, and
technology to mitigate resource-consumption attacks.

View File

@ -0,0 +1,30 @@
This package was debianized by Brian Warner <warner@twistedmatrix.com>
It was downloaded from http://www.twistedmatrix.com/~warner/Foolscap/
Copyright (c) 2006
Brian Warner
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Copyright Exceptions:
No exceptions are listed in the upstream source.

View File

@ -0,0 +1 @@
2

View File

@ -0,0 +1,15 @@
#! /usr/bin/make -f
# Uncomment this to turn on verbose mode.
#export DH_VERBOSE=1
DEB_PYTHON_SYSTEM=pycentral
include /usr/share/cdbs/1/rules/debhelper.mk
include /usr/share/cdbs/1/class/python-distutils.mk
install/python-foolscap::
dh_installdocs doc/newpb-jobs.txt doc/newpb-todo.txt doc/use-cases.txt doc/using-pb.xhtml doc/copyable.xhtml doc/listings doc/specifications
clean::
-rm -rf build

View File

@ -0,0 +1,2 @@
version=3
http://twistedmatrix.com/~warner/Foolscap/ ([\d\.]+)/

View File

@ -0,0 +1,95 @@
foolscap (0.1.2+) unstable; urgency=low
* bump revision while between releases
-- Brian Warner <warner@lothar.com> Fri, 13 Apr 2007 00:22:10 -0700
foolscap (0.1.2) unstable; urgency=low
* new release
-- Brian Warner <warner@lothar.com> Wed, 4 Apr 2007 12:32:46 -0700
foolscap (0.1.1+) unstable; urgency=low
* bump revision while between releases
-- Brian Warner <warner@lothar.com> Wed, 4 Apr 2007 10:32:22 -0700
foolscap (0.1.1) unstable; urgency=low
* new release
-- Brian Warner <warner@lothar.com> Tue, 3 Apr 2007 20:48:07 -0700
foolscap (0.1.0+) unstable; urgency=low
* bump revision while between releases
-- Brian Warner <warner@lothar.com> Mon, 19 Mar 2007 23:11:35 -0700
foolscap (0.1.0) unstable; urgency=low
* new release
-- Brian Warner <warner@lothar.com> Thu, 15 Mar 2007 16:56:16 -0700
foolscap (0.0.7+) unstable; urgency=low
* bump revision while between releases
-- Brian Warner <warner@lothar.com> Mon, 22 Jan 2007 12:41:18 -0800
foolscap (0.0.7) unstable; urgency=low
* new release
-- Brian Warner <warner@lothar.com> Tue, 16 Jan 2007 12:03:00 -0800
foolscap (0.0.6+) unstable; urgency=low
* bump revision while between releases
-- Brian Warner <warner@lothar.com> Thu, 4 Jan 2007 18:45:04 -0500
foolscap (0.0.6) unstable; urgency=low
* new release
-- Brian Warner <warner@lothar.com> Mon, 18 Dec 2006 12:10:51 -0800
foolscap (0.0.5+) unstable; urgency=low
* bump revision while between releases
-- Brian Warner <warner@lothar.com> Tue, 14 Nov 2006 21:24:17 -0800
foolscap (0.0.5) unstable; urgency=low
* new release
-- Brian Warner <warner@lothar.com> Sat, 4 Nov 2006 23:20:46 -0800
foolscap (0.0.4+) unstable; urgency=low
* bump revision while between releases
-- Brian Warner <warner@lothar.com> Tue, 31 Oct 2006 23:38:34 -0800
foolscap (0.0.4) unstable; urgency=low
* new release
-- Brian Warner <warner@lothar.com> Thu, 26 Oct 2006 00:46:11 -0700
foolscap (0.0.3+) unstable; urgency=low
* new upstream release, put debian packaging in the tree
-- Brian Warner <warner@lothar.com> Tue, 10 Oct 2006 19:16:13 -0700
foolscap (0.0.2) unstable; urgency=low
* New upstream release of an experimental package
-- Brian Warner <warner@lothar.com> Thu, 27 Jul 2006 17:40:15 -0700

View File

@ -0,0 +1 @@
4

View File

@ -0,0 +1,21 @@
Source: foolscap
Section: python
Priority: optional
Maintainer: Brian Warner <warner@lothar.com>
Build-Depends: debhelper (>> 4.1.68), python2.4-dev, python2.4-twisted, cdbs
Standards-Version: 3.7.2
Package: python-foolscap
Architecture: all
Depends: python (>= 2.4), python (<< 2.5), python2.4-foolscap
Description: An object-capability-based RPC system for Twisted Python
This is a dummy package that only depends on python2.4-foolscap
Package: python2.4-foolscap
Architecture: all
Depends: python2.4, python2.4-twisted
Recommends: python2.4-pyopenssl
Description: An object-capability-based RPC system for Twisted Python
Foolscap (aka "newpb") contains a new RPC system for Twisted Python, combining
capability-based security, secure references, flexible serialization, and
technology to mitigate resource-consumption attacks.

Some files were not shown because too many files have changed in this diff.