If a node fails to report stats, the natural thing to do as far as munin is
concerned is to suppress the data for that data series. The previous
tahoe-stats would output whatever data was present in the stats_gatherer's
stats.pickle, regardless of how old it was.
This change means that if the gatherer hasn't received data within the last
5 minutes, then no data is reported to munin for that node.
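A minimal sketch of the staleness check this implies, assuming each node's entry in the pickle carries a 'timestamp' field (that field name and the helper are illustrative):

    import time

    STALE_SECONDS = 5 * 60

    def fresh_stats(nodestats, now=None):
        # skip nodes whose data the gatherer hasn't refreshed recently;
        # munin then shows a gap for that node instead of stale values
        now = now if now is not None else time.time()
        if now - nodestats.get("timestamp", 0) > STALE_SECONDS:
            return None
        return nodestats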
We want to collect runtime statistics from multiple nodes, primarily for
server-monitoring purposes. This is a simple implementation of such a system,
intended as a skeleton to build more sophistication upon.
Each client now looks for a 'stats_gatherer.furl' config file. If it has
been configured to use a stats gatherer, it internally instantiates a
StatsProvider. This is a central place to which any code that wishes to
offer stats for monitoring can report them, either by calling
stats_provider.count('stat.name', value) to increment a counter, or by
registering a class as a stats producer with sp.register_producer(obj).
The StatsProvider connects to the StatsGatherer server and registers itself
upon startup. The StatsGatherer is then responsible for periodically polling
the attached providers to retrieve the data they provide.
The provider queries each registered producer when the gatherer queries
the provider. Both the internal 'counters' and the queried 'stats' are
then reported to the gatherer.
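A rough sketch of the two ways code can feed stats to the provider, per the description above (the get_stats()-style producer interface shown is an assumption):

    class UploadStats(object):
        def __init__(self):
            self.bytes_uploaded = 0

        def get_stats(self):
            # producers are queried when the gatherer polls the provider
            return {"uploader.bytes_uploaded": self.bytes_uploaded}

    stats_provider.count("uploader.files_uploaded", 1)  # bump a counter
    stats_provider.register_producer(UploadStats())     # register a producer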
This provides a simple gatherer app (cf. "make stats-gatherer-run")
which prints its furl and listens for incoming connections. Once a
minute, the gatherer polls all connected providers and writes the
retrieved data into a pickle file.
Also included is a munin plugin which knows how to read the gatherer's
stats.pickle and output data munin can interpret. This plugin,
tahoe-stats.py, can be symlinked under multiple different names within
munin's 'plugins' directory; it inspects argv to determine which data to
display, doing a lookup in a table within the file. It looks in the
environment for 'statsfile' to determine the path to the gatherer's
stats.pickle. An example plugins-conf.d file is provided.
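A rough sketch of that symlink-dispatch pattern (the table entries and stat names here are illustrative, not the plugin's actual contents):

    #!/usr/bin/env python
    import os, pickle, sys

    PLUGINS = {
        # symlink name -> (munin graph title, stat key to report)
        "tahoe_files": ("Tahoe File Count", "storage_server.files"),
        "tahoe_space": ("Tahoe Space Used", "storage_server.bytes_used"),
    }

    title, statkey = PLUGINS[os.path.basename(sys.argv[0])]

    if len(sys.argv) > 1 and sys.argv[1] == "config":
        print("graph_title %s" % title)
    else:
        data = pickle.load(open(os.environ["statsfile"], "rb"))
        for nodeid, nodestats in data.items():
            print("%s.value %s" % (nodeid, nodestats.get(statkey, "U")))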
Now we use "./setup.py develop" to ensure that changes to our source code are immediately used without requiring a "make" step. This simplification will hopefully pave the way for easier py2exe and py2app, solving the "Unit tests test the installed version" bug (#145), and perhaps also #164 and #176.
This patch also conditionalizes the use of setuptools_darcs on the absence of a PKG-INFO file, which is part of fixing #263.
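Roughly, the conditional amounts to something like this in setup.py (a sketch, not the exact patch):

    import os
    from setuptools import setup

    setup_requires = []
    if not os.path.exists("PKG-INFO"):
        # only pull in setuptools_darcs when building from a darcs checkout;
        # an unpacked sdist ships PKG-INFO and already carries its version
        setup_requires.append("setuptools_darcs")

    setup(
        name="allmydata-tahoe",
        setup_requires=setup_requires,
        # ... other arguments elided ...
    )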
This resolves problems with py2exe's modulefinder collecting sources from
.zipped egg files, not by using easy_install to reach the --always-unzip
option, but rather with a small tool which unpacks any zipped egg files found
in misc/dependencies. This fixes the py2exe build given the rollback of the
easy_install changes which had broken the unix builds. misc/hatch-eggs.py
performs the honours.
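A rough sketch of what a tool like misc/hatch-eggs.py does, assuming it simply unzips each zipped egg in place:

    import os, zipfile

    DEP_DIR = os.path.join("misc", "dependencies")

    for name in os.listdir(DEP_DIR):
        path = os.path.join(DEP_DIR, name)
        if not name.endswith(".egg") or os.path.isdir(path):
            continue  # not a zipped egg, or already an unpacked directory
        tmp = path + ".tmp"
        zf = zipfile.ZipFile(path)
        zf.extractall(tmp)
        zf.close()
        os.remove(path)
        os.rename(tmp, path)  # leave an .egg directory under the same name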
This also includes a misc/sub-ver.py tool which substitutes elements of the
version number for the current code base (obtained by importing
allmydata.__version__, hence make-version should be run first and the python
path carefully managed) into template files, using python's string
interpolation of named args from a dict as the templating syntax:
%(major)d, %(minor)d, %(point)d, and %(nano)d each expand to the individual
components of the version number as codified by the
pyutil.version_class.Version class, and there is also a %(build)s tag which
expands to the string form of the whole version number. This tool is used to
interpolate the automatically generated version information into the
innosetup source file, in a form consistent with innosetup/windows'
restrictions.
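For illustration, the interpolation amounts to something like this (the template line and surrounding plumbing are assumptions; the real tool reads its templates from disk):

    from allmydata import __version__  # make-version must have run first

    # pyutil.version_class.Version exposes the numeric components; splitting
    # the string form here is just for illustration
    parts = str(__version__).split(".")
    subs = {
        "major": int(parts[0]),
        "minor": int(parts[1]),
        "point": int(parts[2]),
        "nano": int(parts[3]) if len(parts) > 3 else 0,
        "build": str(__version__),
    }

    template = "AppVersion=%(major)d.%(minor)d.%(point)d.%(nano)d (%(build)s)"
    print(template % subs)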
py2exe is unable to handle .eggs which are packaged as zip files; it will
instead pull in other versions of the libraries if they can be found in the
environment. This change causes .eggs to be built as .egg directories, which
py2exe can handle.
This is necessary because we can't prevent setuptools from respecting any such eggs, so we need to respect them as well in order to maintain consistency. However, we don't normally install any "install_requires" eggs into the source tree root dir.
It isn't actually needed for Tahoe, only for the command-line tools from pyutil. Later I will make an "extras" category within pyutil to specify in a machine-readable way that argparse is not required for pyutil unless you want the command-line tools.
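The setuptools "extras" mechanism alluded to would look roughly like this in pyutil's setup.py (a sketch; the extra's name is illustrative):

    from setuptools import setup

    setup(
        name="pyutil",
        # the library itself has no hard dependency on argparse; only the
        # command-line tools need it
        extras_require={
            "cli": ["argparse"],
        },
        # ... other arguments elided ...
    )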
misc/make-version.py has a limitation which prevents it from generating version
stamps from our current darcs history. This limitation has been fixed in
pyutil's "darcsver". Rather than copy the fix from there to
misc/make-version.py, I'm making it so that you have to install pyutil if you
want to automatically generate _version.py files from the current darcs
history.
* use new decentralized directories everywhere instead of old centralized directories
* provide UI to them through the web server
* provide UI to them through the CLI
* update unit tests to simulate decentralized mutable directories in order to test other components that rely on them
* remove the notion of a "vdrive server" and a client thereof
* remove the notion of a "public vdrive", which was a directory that was centrally published/subscribed automatically by the tahoe node (you can accomplish this manually by making a directory and posting the URL to it on your web site, for example)
* add a notion of "wait_for_numpeers" when you need to publish data to peers, which is how many peers should be attached before you start. The default is 1.
* add __repr__ for filesystem nodes (note: these reprs contain a few bits of the secret key!)
* fix a few bugs where we used to equate "mutable" with "not read-only". Nowadays all directories are mutable, but some might be read-only (to you).
* fix a few bugs where code wasn't aware of the new general-purpose metadata dict that comes with each filesystem edge
* sundry fixes to unit tests to adjust to the new directories, e.g. don't assume that every share on disk belongs to a chk file.
This is because the setuptools "entry points" form asserts that there are
setuptools-visible packages like nevow/zope.interface (i.e. they have .egg-info
metadata). Until very recently, most Debian systems did not install this
metadata. Instead, we rely upon the usual Debian dependency checking as
expressed in debian/control.
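For reference, the setuptools "entry points" form being avoided here is the console_scripts declaration, roughly (the module path shown is illustrative):

    from setuptools import setup

    setup(
        name="allmydata-tahoe",
        entry_points={
            "console_scripts": [
                # the generated wrapper script checks the declared
                # dependencies via their .egg-info metadata at startup
                "tahoe = allmydata.scripts.runner:run",
            ],
        },
        # ... other arguments elided ...
    )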
Some other people might use the official Windows build of darcs, and people who use my cygwin wrapper for darcs will be compatible with this patch as long as they use the latest version of the wrapper.
This is a potentially disruptive and potentially ugly change to the code base,
because I renamed the object that serves in both roles from "Queen" to
"IntroducerAndVdrive", which is a bit of an ugly name.
However, I think that clarity is important enough in this release to make this
change. All unit tests pass. I'm now darcs recording this patch in order to
pull it to other machines for more testing.
Note that using "whatever version of python the name 'python' maps to in the current shell environment" is more error-prone than specifying which python you mean, such as by executing "/usr/bin/python setup.py" instead of "./setup.py". When you build tahoe (by running "make"), it will make a copy of bin/allmydata-tahoe in instdir/bin/allmydata-tahoe with the shebang line rewritten to execute the specific version of python that was used when building, instead of executing "/usr/bin/env python".
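A sketch of that shebang rewrite (the helper is illustrative; paths follow the description above):

    import sys

    def copy_with_specific_python(src, dst):
        # replace "#!/usr/bin/env python" with the interpreter running the build
        lines = open(src).readlines()
        if lines and lines[0].startswith("#!"):
            lines[0] = "#!%s\n" % sys.executable
        open(dst, "w").writelines(lines)

    copy_with_specific_python("bin/allmydata-tahoe",
                              "instdir/bin/allmydata-tahoe")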
However, it seems better that the default for lazy people be "whatever 'python' means currently" instead of "whatever 'python' meant to the manufacturer of your operating system".