Once we implement #249 (move bundled dependencies out of revision-control history and make them optional), we can stop doing this. Until then, we try not to do it unless we really need to...
The 0.9.18 version of Nevow doesn't declare its dependency on Twisted in a machine-readable way ( http://divmod.org/trac/ticket/2629 ). Neither does the current release of Nevow (0.9.31), but hopefully a future release of Nevow will fix this.
Also, we're going to be managing external dependencies like this in a separate darcs repository in the future instead of checking them into our Tahoe source tree.
pycryptopp-0.3.0 performs AES encryption incorrectly (depending on the compiler and the version of Crypto++ used)
Very soon now we're going to set up an "ext" repository to hold all tarballs and no longer check them into our trunk source tree.
this adds munin graphs to present data already published by nodes to
the stats_gatherer, namely mutable files published/retrieved,
immutable files uploaded, and the bytes thereof
I'd implemented stats-gathering hooks in the helper a while back.
Brian did the same without reference to my changes. This reconciles
those two changes, encompassing all the stats from both, implemented
through the stats_provider interface.
This also provides templates for all 10 helper graphs in the
tahoe-stats munin plugin.
having changed tahoe-stats to not report a data series if there was no recent
data recorded for a node, I wound up hiding the data series entirely. this
change causes it to report all data series for which stats exist during the
'config' phase, so that they show up, but to report actual data only if the
stats are recent, so that a node which is not currently reporting stats shows
up as missing.
if a node fails to report stats, the natural thing to do with respect to munin
is to suppress the data for that data series. the previous tahoe-stats would
output whatever data was present in the stats_gatherer's stats.pickle,
regardless of how old it was.
this change means that if the gatherer hasn't received data within the last
5 min, then no data is reported to munin for that node.
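For illustration, a minimal sketch of that staleness check (the pickle field
names and the example stat name are hypothetical; the real tahoe-stats may
organize its data differently):

    import time

    STALE_AFTER = 5 * 60  # seconds; matches the five-minute window above

    def fresh_value(node_stats):
        # node_stats is assumed to look like {'timestamp': ..., 'stats': {...}}
        if time.time() - node_stats.get('timestamp', 0) > STALE_AFTER:
            return None  # emit nothing, so munin shows the series as missing
        return node_stats['stats'].get('storage_server.bytes_added')  # hypothetical stat name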
We have a desire to collect runtime statistics from multiple nodes, primarily
for server-monitoring purposes. This implements a simple version of such a
system, as a skeleton to build more sophistication upon.
Each client now looks for a 'stats_gatherer.furl' config file. If it has
been configured to use a stats gatherer, then it internally instantiates
a StatsProvider. This is a central place to which code that wishes to offer
stats up for monitoring can report them, either by calling
stats_provider.count('stat.name', value) to increment a counter, or by
registering a class as a stats producer with sp.register_producer(obj).
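A rough sketch of those two reporting styles (the class and stat names below
are made up; the real interface may differ in detail):

    # 1) bump a counter directly:
    #    stats_provider.count('uploader.files_uploaded', 1)

    # 2) register an object as a stats producer; the provider queries it
    #    (via something like get_stats()) whenever the gatherer polls:
    class HelperStats:
        def __init__(self):
            self.files_fetched = 0

        def get_stats(self):
            # return a dict of 'stat.name' -> value
            return {'helper.files_fetched': self.files_fetched}

    # stats_provider.register_producer(HelperStats())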
The StatsProvider connects to the StatsGatherer server and offers its
provider reference upon startup. The StatsGatherer is then responsible for
polling the attached providers periodically to retrieve the data they provide.
The provider queries each registered producer when the gatherer queries
the provider. Both the internal 'counters' and the queried 'stats' are
then reported to the gatherer.
This provides a simple gatherer app (cf. "make stats-gatherer-run")
which prints its furl and listens for incoming connections. Once a
minute, the gatherer polls all connected providers and writes the
retrieved data into a pickle file.
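The poll-and-dump loop might look roughly like this (a sketch only, assuming
a hypothetical poll_all_providers() that returns a dict of per-node stats; the
real gatherer uses foolscap and has its own persistence details):

    import pickle, time

    def poll_and_record(statsfile, poll_all_providers):
        # poll_all_providers() is a hypothetical helper returning {nodeid: stats_dict}
        data = {'timestamp': time.time(), 'nodes': poll_all_providers()}
        f = open(statsfile, 'wb')
        pickle.dump(data, f)
        f.close()

    # from twisted.internet import task
    # loop = task.LoopingCall(poll_and_record, 'stats.pickle', poll_all_providers)
    # loop.start(60)  # poll once a minute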
Also included is a munin plugin which knows how to read the gatherer's
stats.pickle and output data munin can interpret. This plugin,
tahoe-stats.py, can be symlinked under multiple different names within
munin's 'plugins' directory; it inspects argv to determine which
data to display, doing a lookup in a table within that file.
It looks in the environment for 'statsfile' to determine the path to
the gatherer's stats.pickle. An example plugins-conf.d file is
provided.
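In outline, the plugin's dispatch-by-symlink-name pattern works something like
this (a sketch; the table keys, stat names, and pickle layout here are
hypothetical):

    import os, pickle, sys

    # map each symlink name to the stat it should graph (names here are made up)
    PLUGINS = {
        'tahoe_files_uploaded': 'uploader.files_uploaded',
        'tahoe_files_retrieved': 'downloader.files_downloaded',
    }

    def main():
        statname = PLUGINS[os.path.basename(sys.argv[0])]
        if len(sys.argv) > 1 and sys.argv[1] == 'config':
            sys.stdout.write('graph_title %s\n' % statname)
            return
        data = pickle.load(open(os.environ['statsfile'], 'rb'))
        for nodeid, stats in data.items():
            sys.stdout.write('%s.value %s\n' % (nodeid, stats.get(statname)))

    if __name__ == '__main__':
        main()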
Now we use "./setup.py develop" to ensure that changes to our source code are immediately used without requiring a "make" step. This simplification will hopefully pave the way for easier py2exe and py2app, solving the "Unit tests test the installed version" bug (#145), and perhaps also #164 and #176.
This patch also conditionalizes the use of setuptools_darcs on the absence of a PKG-INFO file, which is part of fixing #263.
this resolves problems with py2exe's modulefinder collecting sources from
zipped egg files, not by using easy_install to reach the --always-unzip
option, but rather with a small tool which unpacks any zipped egg files found
in misc/dependencies. this fixes the py2exe build given the rollback of the
easy_install changes which had broken the unix builds. misc/hatch-eggs.py
performs the honours.
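The unpacking step can be sketched as follows (not the actual
misc/hatch-eggs.py, which may differ in its details):

    import os, zipfile

    DEPS = os.path.join('misc', 'dependencies')

    for name in os.listdir(DEPS):
        path = os.path.join(DEPS, name)
        if name.endswith('.egg') and zipfile.is_zipfile(path):
            tmp = path + '.unpacked'
            zf = zipfile.ZipFile(path)
            zf.extractall(tmp)  # note: extractall() requires Python >= 2.6
            zf.close()
            os.remove(path)     # replace the zip with a directory of the same name
            os.rename(tmp, path)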
this also includes a misc/sub-ver.py tool which substitutes elements of the
version number for the current code base into template files (by importing
allmydata.__version__, hence make-version should be run first and the python
path carefully managed). It uses python's string interpolation of named args
from a dict as the templating syntax, i.e. %(major)d, %(minor)d, %(point)d,
and %(nano)d each expand to the individual components of the version number
as codified by the pyutil.version_class.Version class, and there is also a
%(build)s tag which expands to the string form of the whole version number.
This tool is used to interpolate the automatically generated version
information into the innosetup source file in a form consistent with
innosetup/windows' restrictions.
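A rough sketch of that substitution (the attribute names on the Version object
are assumptions; the real misc/sub-ver.py may read its input differently):

    import sys
    from allmydata import __version__ as version  # so run make-version first

    d = {
        'major': version.major,
        'minor': version.minor,
        'point': version.micro,   # assumption: 'point' maps to the micro field
        'nano': version.nano,
        'build': str(version),    # the whole version number as a string
    }

    template = open(sys.argv[1]).read()
    sys.stdout.write(template % d)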
py2exe is unable to handle .eggs which are packaged as zip files;
in preference it will pull in other versions of libraries if they
can be found in the environment.
this change causes .eggs to be built as .egg directories, which
py2exe can handle.
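One way this effect can be expressed (a sketch only; the actual change may
live in setup.cfg or the build scripts instead) is to declare the packages
not zip-safe, so that easy_install unpacks them into .egg directories:

    from setuptools import setup, find_packages

    setup(
        name='some-bundled-dependency',  # hypothetical package name
        version='1.0',
        packages=find_packages(),
        zip_safe=False,  # install as an .egg directory rather than a zip file
    )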
This is necessary because we can't prevent setuptools from respecting any such eggs, so we need to respect them as well in order to maintain consistency. However, we don't normally install any "install_requires" eggs into the source tree root dir.
It isn't actually needed for Tahoe, only for the command-line tools from pyutil. Later I will make an "extras" category within pyutil to specify in a machine-readable way that argparse is not required for pyutil unless you want the command-line tools.
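The planned declaration would look roughly like this in pyutil's setup.py (the
extra's name here is made up):

    from setuptools import setup

    setup(
        name='pyutil',
        # ...
        install_requires=[],  # argparse no longer unconditionally required
        extras_require={
            'cli': ['argparse'],  # only needed for the command-line tools
        },
    )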
misc/make-version.py has a limitation which prevents it from generating version
stamps from our current darcs history. This limitation has been fixed in
pyutil's "darcsver". Rather than copy the fix from there to
misc/make-version.py, I'm making it so that you have to install pyutil if you
want to automatically generate _version.py files from the current darcs
history.
* use new decentralized directories everywhere instead of old centralized directories
* provide UI to them through the web server
* provide UI to them through the CLI
* update unit tests to simulate decentralized mutable directories in order to test other components that rely on them
* remove the notion of a "vdrive server" and a client thereof
* remove the notion of a "public vdrive", which was a directory that was centrally published/subscribed automatically by the tahoe node (you can accomplish this manually by making a directory and posting the URL to it on your web site, for example)
* add a notion of "wait_for_numpeers" when you need to publish data to peers, which is how many peers should be attached before you start. The default is 1.
* add __repr__ for filesystem nodes (note: these reprs contain a few bits of the secret key!)
* fix a few bugs where we used to equate "mutable" with "not read-only". Nowadays all directories are mutable, but some might be read-only (to you).
* fix a few bugs where code wasn't aware of the new general-purpose metadata dict that comes with each filesystem edge
* sundry fixes to unit tests to adjust to the new directories, e.g. don't assume that every share on disk belongs to a chk file.
because the setuptools "entry points" form asserts that there are
setuptools-visible packages like nevow/zope.interface (i.e. they have .egg-info
metadata). Until very recently, most debian systems did not install this
metadata. Instead, we rely upon the usual debian dependency checking as
expressed in debian/control.