Previously matched any single character from `abc|r` (with duplicate
specification of `c`). Now matches any single character from `abc` or
the two-character sequence `rc`.
I guess this was the intent, anyway.
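The note doesn't show the actual patterns, but the described behavior is
consistent with a character-class-vs-alternation fix like the following
hedged example (the patterns here are illustrative guesses, not the real
ones):

```python
import re

# Illustrative guess at the before/after patterns: a character class like
# "[abc|rc]" matches any one of a, b, c, |, r (with c listed twice), while
# "[abc]|rc" matches a single a/b/c or the two-character sequence "rc".
before = re.compile(r"[abc|rc]")
after = re.compile(r"[abc]|rc")

assert before.match("|")                    # '|' is a literal inside a character class
assert after.match("|") is None
assert before.match("rc").group() == "r"    # old pattern consumed only one character
assert after.match("rc").group() == "rc"
```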
This is all minor stuff: unreachable debug code (that should be commented-out
instead of in an 'if False:' block), unnecessary 'pass' and 'global'
statements, redundantly-initialized variables. No behavior changes. Nothing
here was actually broken, it just looked suspicious to the static analysis at
https://lgtm.com/projects/g/tahoe-lafs/tahoe-lafs/alerts/?mode=list .
get_listener() is allowed to return either an endpoint descriptor string or a
full Endpoint object, and the Tor provider is currently simple enough to not
really need more than a basic descriptor, but have it return a full Endpoint
for use as an example of what I2P can do later.
This delegates the construction of the server Endpoint object to the i2p/tor
Provider, which can use the i2p/tor section of the config file to add options
which would be awkward to express as text in an endpoint descriptor string.
refs ticket:2889 (but note this merely makes room for a function to be
written that can process I2CP options; it does not actually handle such
options, so it does not close this ticket yet)
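A minimal sketch of the delegation idea, assuming a hypothetical provider
class and config keys (this is not the actual tahoe-lafs provider API):

```python
# Hypothetical provider: it reads its own config section and returns a real
# server Endpoint from get_listener(), which can carry options that would be
# awkward to express in an endpoint descriptor string.
from twisted.internet import reactor
from twisted.internet.endpoints import TCP4ServerEndpoint

class ExampleProvider(object):
    def __init__(self, section):
        self._section = section  # e.g. the parsed [tor] or [i2p] config section

    def get_listener(self):
        port = int(self._section.get("local.port", "3457"))
        # a full IStreamServerEndpoint, rather than "tcp:3457:interface=127.0.0.1"
        return TCP4ServerEndpoint(reactor, port, interface="127.0.0.1")
```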
It looks like str() was meant to truncate the dict, but a missing i+=1 meant
that it never actually did. I also changed the format to include a clear
"..." in case we truncate it, to avoid confusion with a non-truncated dict of
the same size.
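A hedged sketch of the described fix (not the actual tahoe code): the counter
that was never incremented, plus the explicit "..." marker:

```python
def truncated_repr(d, limit=10):
    # Render at most `limit` items; append "..." so a truncated dict cannot
    # be confused with a complete dict of the same printed size.
    items = []
    i = 0
    for k, v in d.items():
        if i >= limit:
            items.append("...")
            break
        items.append("%r: %r" % (k, v))
        i += 1   # the missing increment: without it, truncation never triggered
    return "{%s}" % ", ".join(items)

print(truncated_repr({n: n * n for n in range(20)}, limit=3))
# -> {0: 0, 1: 1, 2: 4, ...}
```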
This also improves test coverage in subtract() and
NumDict.item_with_largest_value().
refs ticket:2891
Squashed all of meejah's commits between
30d68fb499f300a393fa0ced5980229f4bb6efda and
33c268ed3a8c63a809f4403e307ecc13d848b1ab on the branch
meejah:1382.markberger-rewrite-rebase.6, as per review.
unit-test for happiness calculation
unused function
put old servers_of_happiness() calculation back for now
test for calculate_happiness
remove some redundant functions
Add comments 10 and 8 from the servers of happiness spec
Fix bug in _filter_g3 for servers of happiness
Remove usage of HappinessUpload class
Here we modify the PeerSelector class. We make sure to correctly calculate
the happiness value by ignoring keys whose values are None...
Remove HappinessUpload and tests
Replace helper servers_of_happiness
we replace its previous implementation with a new
wrapper function that uses share_placement
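A rough sketch of the happiness calculation being described, assuming a
share_placement-style map of share number to server (names are illustrative):

```python
def calculate_happiness(placement):
    # placement: {share_number: server_id or None}
    # Shares that could not be placed appear with a value of None and must
    # not count toward happiness.
    return len(set(s for s in placement.values() if s is not None))

assert calculate_happiness({0: "A", 1: "B", 2: None, 3: "A"}) == 2
```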
If no session management is performed, txi2p starts a process-wide session the
first time a connection (client or server) is opened; all subsequent connections
use that session and its configuration properties.
This commit ensures that the same properties are passed to both client and
server endpoints, so that the correct I2P Destination is started regardless of
whether the first connection made by Tahoe-LAFS is for a client or server.
Closes #2858.
* replace "last_details" with "non_connected_statuses" dict
* rename "last_connection_summary" to just "summary"
* for connected servers, show other hints in a tooltip
* for not-yet-connected servers, show all hints in a list
* build the list (in STAN) on the server side, not using IContainer
This uses a unix-domain control port, and includes test coverage.
create_onion() displays pacifier messages, since the allocate-onion step
takes around 35 seconds
which uses SHA1 to combine the file's storage index (known as "peer
selection index" in this context) and each server's "server permutation
seed". This is the only thing in tahoe that uses SHA1.
With this change, we stop importing sha1 from anywhere else.
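A hedged sketch of that permuted-ordering step (identifiers are illustrative;
the real code lives in the peer-selection logic):

```python
import hashlib

def permute_servers(peer_selection_index, servers):
    # servers: {server_id: permutation_seed}, all bytes.  Each server sorts
    # by SHA1(peer_selection_index + its seed), so every file gets its own
    # stable ordering of servers.
    def sort_key(server_id):
        return hashlib.sha1(peer_selection_index + servers[server_id]).digest()
    return sorted(servers, key=sort_key)

order = permute_servers(b"example-storage-index",
                        {b"srv-a": b"seed-a", b"srv-b": b"seed-b", b"srv-c": b"seed-c"})
```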
(instead of using a copy). Foolscap-0.12.3 fixes a problem with
allocate_tcp_port() that was causing intermittent test failures. I think
it makes more sense to use Foolscap's copy (and fixes) than to keep
re-copying it into Tahoe each time it changes.
If/when we manage to stop depending upon foolscap for server RPC, we can
re-copy this back into tahoe's source tree.
refs ticket:2795
* uses @inlineCallbacks to turn the _lazy_tail recursion into
  a "real"-looking loop (see the sketch at the end of this note);
* remove the need for "immediate" vs delayed iteration of said loop;
* make it easier for the unit-tests to control the behavior of the
uploader/downloader;
* consolidates (some) setup/teardown code into the setUp and tearDown
hooks provided by unittest so unit-tests aren't doing that themselves
* re-factors some of the unit-tests to use an @inlineCallbacks style
so they're easier to follow and debug
This doesn't tackle the "how to know when our inotify events have arrived"
problem the unit-tests still have, nor does it eliminate the myriad bits
of state that get added to tests via all the MixIns.
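An illustrative before/after of the recursion-to-loop rewrite (a generic
sketch, not the magic-folder code itself):

```python
from twisted.internet import defer, reactor, task

class Scanner(object):
    def __init__(self):
        self._stopped = False

    def _process_once(self):
        return defer.succeed(None)   # placeholder for real uploader/downloader work

    # old style: every pass re-arms itself by chaining another Deferred,
    # which is hard to read and hard for tests to control
    def _lazy_tail_style(self):
        d = self._process_once()
        d.addCallback(lambda ign: task.deferLater(reactor, 1.0, self._lazy_tail_style))
        return d

    # new style: @inlineCallbacks turns the same logic into a plain loop
    @defer.inlineCallbacks
    def _scan_loop(self):
        while not self._stopped:
            yield self._process_once()
            yield task.deferLater(reactor, 1.0, lambda: None)
```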
The yaml.SafeLoader.add_constructor() should probably only be done once,
and moving this all into a module gives us an opportunity to test it
directly.
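A hedged sketch of the module-level arrangement (the YAML tag and constructor
here are illustrative, not necessarily the ones tahoe registers):

```python
import yaml

def _str_constructor(loader, node):
    return loader.construct_scalar(node)

# Registration happens exactly once, when this module is first imported,
# and the module can be unit-tested directly.
yaml.SafeLoader.add_constructor("tag:yaml.org,2002:str", _str_constructor)

def safe_load(f):
    return yaml.safe_load(f)
```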
The old copy had a bug which occasionally returned a port that was
actually in use, causing intermittent test failures (when large numbers
of ports were allocated). I finally figured out how to fix it in
Foolscap, so this is just a copy of the updated function.
closes ticket:2795
The udpprot.transport.connect() call fails if we don't have a network
connection, but the port is still listening, so trial gives us a
DirtyReactorError. The fix is a "finally:" which does
port.stopListening() even in this case.
Closes ticket:2769
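A hedged, simplified sketch of the pattern (the real code is in iputil's
local-address discovery; the target address here is a placeholder):

```python
from twisted.internet import reactor
from twisted.internet.protocol import DatagramProtocol

def get_local_ip_for(target="203.0.113.1"):
    udpprot = DatagramProtocol()
    port = reactor.listenUDP(0, udpprot)
    try:
        udpprot.transport.connect(target, 7)
        return udpprot.transport.getHost().host
    except Exception:
        return None
    finally:
        # runs even when connect() fails (e.g. no network connection), so the
        # reactor is left clean and trial does not see a DirtyReactorError
        port.stopListening()
```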
As discussed at https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1973 and in
previous pull request #129.
- replace lengthy timestamps with human-readable deltas (e.g. 1h 2m 3s)
- replace "announced" column with "Last RX" column
- remove service column (it always said the same thing, "storage")
- fix colspan on 'You are not presently connected' message
Previous versions, some with github comments: 3fe9053134, 486dbfc7bd, c89ea62580, 9fabb92486, and bbd8b42a25.
Unlike previous attempts, the tests on this one should pass in any timezone.
(But like current master, they will fail with Nevow >= 0.12...)
Thanks to an anonymous contributor who wrote some of the tests.
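A minimal sketch of the delta formatting (not the actual web/status helper):

```python
def abbreviate_delta(seconds):
    seconds = int(seconds)
    hours, seconds = divmod(seconds, 3600)
    minutes, seconds = divmod(seconds, 60)
    parts = []
    if hours:
        parts.append("%dh" % hours)
    if minutes or hours:
        parts.append("%dm" % minutes)
    parts.append("%ds" % seconds)
    return " ".join(parts)

assert abbreviate_delta(3723) == "1h 2m 3s"
```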
A long time ago, the introducer's status web page would show the
advertised IP addresses for all published services, by parsing their
FURLs' connection hints. This hasn't worked since about 12-Aug-2014 when
foolscap-0.6.5 changed the internal format of these hints (the column
has been empty this whole time).
This removes the "Advertised IPs" column from the Service Announcements
table. Instead, the service's full connection hints (not just the IP
address) are displayed in a tooltip/popup on the "Announced" timestamp
column.
The code that pulls these connection hints is now tolerant of all three
foolscap styles:
* foolscap<=0.6.4 : tuples of ("ipv4",host,port)
* 0.6.5 .. 0.8.0 : tuples of ("tcp",host,port)
* foolscap>=0.9.0 : strings
fixes ticket:2510
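A hedged sketch of what "tolerant of all three styles" amounts to (the
function name and output format are illustrative):

```python
def hint_to_string(hint):
    if isinstance(hint, str):
        # foolscap >= 0.9.0: hints are already strings like "tcp:example.org:1234"
        return hint
    if isinstance(hint, tuple) and len(hint) == 3:
        kind, host, port = hint
        if kind in ("ipv4", "tcp"):   # <=0.6.4 used "ipv4", 0.6.5..0.8.0 used "tcp"
            return "tcp:%s:%d" % (host, port)
    return None   # anything unrecognized is simply not displayed

assert hint_to_string(("ipv4", "example.org", 1234)) == "tcp:example.org:1234"
assert hint_to_string("tcp:example.org:1234") == "tcp:example.org:1234"
```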
A long time ago, the introducer's status web page would show the
advertised IP addresses for all subscribers, by parsing their
RemoteReference's FURL's connection hints. This hasn't worked since
about 12-Aug-2014 when foolscap-0.6.5 changed the internal format of
these hints.
This removes the feature: we no longer attempt to show advertised IP
addresses of subscribed clients. It also removes the code that looked
inside foolscap internals for this information.
The stdlib 'subprocess' module in python-2.7.4 through 2.7.7 suffers
from http://bugs.python.org/issue18851 which causes unrelated file
descriptors to be closed when `subprocess.call()` fails the `exec()`,
such as when the executable being invoked does not actually exist. There
appears to be some randomness involved. This was fixed in python-2.7.8.
Tahoe's iputil.py uses subprocess.call on many different "ifconfig"-type
executables, most of which don't exist on any given platform (added in
git commit 8e31d66cd0). This results in a lot of file-descriptor
closing, which (at least during unit tests) tends to clobber important
things like Tub TCP sockets. This seems to be the root cause behind
ticket:2121, in which normal code tries to close already-closed sockets,
crashing the unit tests. Since different platforms have different
ifconfigs, some platforms will experience more failed execs than others,
so this bug could easily behave differently on linux vs freebsd, as well
as working normally on python-2.7.8 or 2.7.4.
This patch inserts a guard to make sure that os.path.isfile() is true
before allowing subprocess.call() to try executing the target. This ought to
be enough to avoid the bug. It changes both iputil.py and
allmydata.__init__ (which uses Popen for calling "lsb_release"), which
are all the places where 'subprocess' is used outside of unit tests.
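A hedged, simplified sketch of the guard (not the exact iputil change):

```python
import os
import subprocess

def run_if_present(executable, args):
    # Only hand the path to subprocess if it exists, so a failed exec() can
    # never trigger the stdlib file-descriptor bug described above.
    if not os.path.isfile(executable):
        return None   # behave as if the tool simply were not installed
    return subprocess.call([executable] + list(args))

run_if_present("/sbin/ifconfig", ["-a"])
```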
Other potential fixes: use the 'subprocess32' module from PyPI (which is
a bug-free backport of the Python3 stdlib subprocess module, but would
introduce a new dependency), or require python >= 2.7.8 (but this would
rule out development/deployment on the current OS-X 10.9 release, which
ships with 2.7.5, as well as other distributions like Ubuntu 14.04 LTS).
I believe this closes ticket:2121, and given the apparent relationship
between 2121 and 2023, I think it also closes ticket:2023 (although
since 2023 doesn't have copies of the failing log files, it's hard to
tell). I'm hoping that this will tide us over until 1.11 is released, at
which point we can execute on the plan to remove iputil.py entirely by
changing the way that nodes learn their externally-facing IP address.
I want mode="w" (i.e. text, with newline conversion) for code that
writes newline-terminated strings (which should also be human readable)
to files. I like to use things like "cat .tahoe/permutation-seed"
without seeing the seed jammed together with the next command prompt.
This also simplifies how case-insensitivity is handled, and fixes a corner case
where the wrong exception was raised when the size ends in "BB".
fixes #1812
Signed-off-by: David-Sarah Hopwood <davidsarah@mint>
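A hedged sketch of the parsing behavior being described (the suffix table and
function name are illustrative, not the actual code):

```python
def parse_abbreviated_size(s):
    if not s:
        return None
    multipliers = {"b": 1, "kb": 1000, "mb": 1000**2,
                   "gb": 1000**3, "tb": 1000**4}
    s = s.lower()   # case-insensitivity handled by lowercasing once
    number, multiplier = s, 1
    for suffix in sorted(multipliers, key=len, reverse=True):
        if s.endswith(suffix):
            number, multiplier = s[:-len(suffix)], multipliers[suffix]
            break
    try:
        return int(number) * multiplier
    except ValueError:
        # e.g. "10BB" leaves a non-numeric "10b" behind and is rejected
        # with a sensible exception rather than some unrelated one
        raise ValueError("unparseable size %r" % (s,))

assert parse_abbreviated_size("10MB") == 10 * 1000**2
```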
Unlike set.union(), which returns a new set, DictOfSets.union() modified
the DictOfSets in-place. The name collision bit me when I changed some
code from using DictOfSets to a normal set, and expected that
set.union() would modify the set in-place. Since there was only one user
of DictOfSets.union, I figured it was safer to just get rid of it.
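For reference, the stdlib semantics that made the in-place union() surprising:

```python
s = {1, 2}
t = s.union({3})    # set.union() returns a new set; s is unchanged
assert s == {1, 2} and t == {1, 2, 3}

s.update({3})       # the in-place equivalent is update()
assert s == {1, 2, 3}
```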
Previously, test_runner sometimes failed because the _node_has_started()
poller fires after the portnum file has been opened, but before it has
actually been filled, allowing the test process to observe an empty file,
which flunks the test.
This adds a new fileutil.write_atomically() function (using the usual
write-to-.tmp-then-rename approach), and uses it for both node.url and
client.port . These files are written a bit before the node is really up and
running, but they're late enough for test_runner's purposes, which is to know
when it's safe to read client.port and use 'tahoe restart' (and therefore
SIGINT) to restart the node.
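A hedged sketch of the write-to-.tmp-then-rename shape (the real helper is in
allmydata.util.fileutil):

```python
import os

def write_atomically(target, data):
    tmp = target + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
    # os.rename() is atomic on POSIX: readers see either the old contents or
    # the complete new contents, never an empty or half-written file.
    os.rename(tmp, target)

# e.g. write_atomically("client.port", b"34551\n")
```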
The current node/client code doesn't offer any better "are you really done
with startup" indicator.. the ideal approach would be to either watch the
logfile, or connect to its flogport, but both are a hassle. Changing the node
to write out a new "all done" file would be intrusive for regular
operations.
This significantly cleans up the IntroducerServer web-status renderers.
Instead of poking around in the introducer's internals, now the web-status
renderers get clean AnnouncementDescriptor and SubscriberDescriptor
objects. They are still somewhat foolscap-centric, but will provide a clean
abstraction boundary for future improvements.
The specific #1721 bug was that old (V1) subscribers were handled by
wrapping their RemoteReference in a special WrapV1SubscriberInV2Interface
object, but the web-status display was trying to peek inside the object to
learn what host+port it was associated with, and the wrapper did not proxy
those extra attributes.
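For illustration only (class and method names are made up, and the actual fix
introduces descriptor objects rather than patching the wrapper): the kind of
attribute proxying the wrapper was missing looks like this:

```python
class Wrapper(object):
    def __init__(self, original):
        self.original = original

    def new_style_call(self, *args):
        # adapt the new-style interface onto the wrapped old-style object
        return self.original.old_style_call(*args)

    def __getattr__(self, name):
        # without a fallback like this, code that pokes at attributes of the
        # wrapped object (as the old web-status display did) gets AttributeError
        return getattr(self.original, name)
```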
A test was added to test_introducer to make sure the introweb page renders
properly and at least contains the nicknames of both the V1 and V2 clients.
This replaces the setup.cfg aliases that run "darcsver" before each major
command with the new "update_version". update_version is defined in setup.py,
and tries to get a version string from either darcs or git (or leaves the
existing _version.py alone if neither VC metadata is available).
Also clean up a tiny typo in verlib.py that messed up syntax highlighting.
No behavioral changes, just updating variable/method names and log messages.
The effects outside these three files should be minimal: some exception
messages changed (to say "server" instead of "peer"), and some internal class
names were changed. A few things still use "peer" to minimize external
changes, like UploadResults.timings["peer_selection"] and
happinessutil.merge_peers, which can be changed later.
The Range header causes n.read() to be called with an offset= of type 'long',
which eventually gets used in a Spans/DataSpans object's __len__ method.
Apparently python doesn't permit __len__() to return longs, only ints.
Rewrote Spans/DataSpans to use s.len() instead of len(s) aka s.__len__() .
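A small illustration of the resulting shape: on Python 2, __len__ must return
an int, so the container exposes an explicit .len() method instead (a sketch,
not the real Spans class):

```python
class Spans(object):
    def __init__(self):
        self._spans = []          # list of (start, length) pairs

    def add(self, start, length):
        self._spans.append((start, length))

    def len(self):
        # total size; may exceed what Python 2's len() could return
        return sum(length for (_start, length) in self._spans)

s = Spans()
s.add(0, 2**40)
assert s.len() == 2**40   # len(s) could raise OverflowError on Python 2
                          # once this exceeds sys.maxint
```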
Added a test in test_download. Note that test_web didn't catch this because
it uses mock FileNodes for speed: it's probably time to rewrite that.
There is still an unresolved error-recovery problem in #1154, so I'm not
closing the ticket quite yet.
This bug had the effect of making uploads sometimes (rarely) appear to succeed when they had actually not distributed the shares well enough to achieve the desired servers-of-happiness level.
- Make some important utility functions clearer and more thoroughly
documented.
- Assert in upload.servers_of_happiness that the buckets attributes
of PeerTrackers passed to it are mutually disjoint.
- Get rid of some silly non-Pythonisms that I didn't see when I first
wrote these patches.
- Make sure that should_add_server returns true when queried about a
shnum that it doesn't know about yet.
- Change Tahoe2PeerSelector.preexisting_shares to map a shareid to a set
of peerids, alter dependencies to deal with that.
- Remove upload.should_add_servers, because it is no longer necessary.
- Move upload.shares_of_happiness and upload.shares_by_server to a utility
file.
- Change some points in Tahoe2PeerSelector.
- Compute servers_of_happiness using a bipartite matching algorithm that
  we know is optimal instead of an ad-hoc greedy algorithm that isn't
  (a minimal sketch follows this list).
- Change servers_of_happiness to just take a sharemap as an argument,
change its callers to merge existing_shares and used_peers before
calling it.
- Change an error message in the encoder to be more appropriate for
servers of happiness.
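A minimal sketch of the bipartite-matching definition (not the code added
here, just the idea): shares on one side, servers on the other, an edge
wherever a server holds or will hold a share, and happiness is the size of a
maximum matching:

```python
def servers_of_happiness(sharemap):
    # sharemap: {share_number: set of server_ids that hold (or will hold) it}
    match = {}   # server_id -> share currently matched to it

    def place(share, seen):
        for server in sharemap.get(share, ()):
            if server in seen:
                continue
            seen.add(server)
            # take a free server, or try to re-route the share using it
            if server not in match or place(match[server], seen):
                match[server] = share
                return True
        return False

    return sum(1 for share in sharemap if place(share, set()))

# two shares forced onto one server, a third share on another: happiness is 2
assert servers_of_happiness({0: {"A"}, 1: {"A"}, 2: {"B"}}) == 2
```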
- Clarify the wording of an error message in immutable/upload.py.
- Refactor a happiness failure message to happinessutil.py, and make
immutable/upload.py and immutable/encode.py use it.
- Move the word "only" as far to the right as possible in failure
messages.
- Use a better definition of progress during peer selection.
- Do read-only peer share detection queries in parallel, not sequentially.
- Clean up logging semantics; print the query statistics whenever an
upload is unsuccessful, not just in one case.
allmydata.util.log.err() either takes a Failure as the first positional
argument, or takes no positional arguments and must be invoked in an
exception handler. Fixed its signature to match both foolscap.logging.log.err
and twisted.python.log.err . Included a brief unit test.
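A hedged sketch of the resulting call shapes (simplified; the real function
forwards to foolscap's logger):

```python
from twisted.python.failure import Failure

def err(failure=None, _why=None, **kwargs):
    if failure is None:
        # no positional argument: we must be inside an exception handler,
        # where Failure() captures the active exception
        failure = Failure()
    # ... hand `failure`, `_why`, and kwargs to the underlying logger ...
    return failure

try:
    1 / 0
except ZeroDivisionError:
    err(_why="example of the no-positional-argument form")
```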
Stop checking separately for ConnectionDone/ConnectionLost, since those have
been folded into DeadReferenceError since foolscap-0.3.1 . Write
rrefutil.trap_deadref() in terms of rrefutil.trap_and_discard() to improve
code coverage.
* remove Downloader.download_to_data/download_to_filename/download_to_filehandle
* remove download.Data/FileName/FileHandle targets
* remove filenode.download/download_to_data/download_to_filename methods
* leave Downloader.download (the whole Downloader will go away eventually)
* add util.consumer.MemoryConsumer/download_to_data, for convenience (sketched below)
(this is mostly used by unit tests, but it gets used by enough non-test
code to warrant putting it in allmydata.util)
* update tests
* removes about 180 lines of code. Yay negative code days!
Overall plan is to rewrite immutable/download.py and leave filenode.read() as
the sole read-side API.
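A hedged, simplified sketch of the new convenience helpers (the real versions
live in allmydata.util.consumer; the pull-producer handling is elided):

```python
from zope.interface import implementer
from twisted.internet.interfaces import IConsumer

@implementer(IConsumer)
class MemoryConsumer(object):
    def __init__(self):
        self.chunks = []
        self.producer = None

    def registerProducer(self, producer, streaming):
        self.producer = producer
        if streaming:
            producer.resumeProducing()   # push producer: just start it
        # (the real helper also drives non-streaming pull producers here)

    def write(self, data):
        self.chunks.append(data)

    def unregisterProducer(self):
        self.producer = None

def download_to_data(filenode):
    # filenode.read(consumer) fires with the consumer when the read is done
    d = filenode.read(MemoryConsumer())
    d.addCallback(lambda mc: b"".join(mc.chunks))
    return d
```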
* backups now share dirnodes with any previous backup, in any location,
so renames and moves are handled very efficiently
* "tahoe backup" no longer bothers reading the previous snapshot
* if you switch grids, you should delete ~/.tahoe/private/backupdb.sqlite,
to force new uploads of all files and directories
We need to carefully document the licence of figleaf in order to get
Tahoe-LAFS into Ubuntu Karmic Koala. However, figleaf isn't really a part of
Tahoe-LAFS per se -- this is just a "convenience copy" of a development tool.
The quickest way to make Tahoe-LAFS acceptable for Karmic, then, is to remove
figleaf from the Tahoe-LAFS tarball itself. People who want to run figleaf on
Tahoe-LAFS (as everyone should want) can install figleaf themselves. I
haven't tested this -- there may be incompatibilities between upstream
figleaf and the copy that we had here...