.. -*- coding: utf-8-with-signature -*-

=============
Tahoe Logging
=============

1. `Overview`_
2. `Realtime Logging`_
3. `Incidents`_
4. `Working with flogfiles`_
5. `Gatherers`_

   1. `Incident Gatherer`_
   2. `Log Gatherer`_

6. `Adding log messages`_
7. `Log Messages During Unit Tests`_

Overview
========

Tahoe uses the Foolscap logging mechanism (known as the "flog" subsystem) to
record information about what is happening inside the Tahoe node. This is
primarily for use by programmers and grid operators who want to find out what
went wrong.

The Foolscap logging system is documented at
`<https://github.com/warner/foolscap/blob/latest-release/doc/logging.rst>`__.

The Foolscap distribution includes a utility named "``flogtool``" that is
used to get access to many Foolscap logging features. ``flogtool`` should get
installed into the same virtualenv as the ``tahoe`` command.

Realtime Logging
================

When you are working on Tahoe code, and want to see what the node is doing,
the easiest tool to use is "``flogtool tail``". This connects to the Tahoe
node and subscribes to hear about all log events. These events are then
displayed to stdout, and optionally saved to a file.

"``flogtool tail``" connects to the "logport", for which the FURL is stored
in ``BASEDIR/private/logport.furl`` . The following command will connect to
this port and start emitting log information::

  flogtool tail BASEDIR/private/logport.furl

The ``--save-to FILENAME`` option will save all received events to a file,
where they can be examined later with "``flogtool dump``" or "``flogtool
web-viewer``". The ``--catch-up`` option will ask the node to dump all stored
events before subscribing to new ones (without ``--catch-up``, you will only
hear about events that occur after the tool has connected and subscribed).
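
For example, the following invocation (the output filename is arbitrary) asks
the node for its stored backlog and then saves everything, old and new, to a
local file::

  flogtool tail --catch-up --save-to tahoe-events.flog BASEDIR/private/logport.furl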

Incidents
=========

Foolscap keeps a short list of recent events in memory. When something goes
wrong, it writes all the history it has (and everything that gets logged in
the next few seconds) into a file called an "incident". These files go into
``BASEDIR/logs/incidents/`` , in a file named
"``incident-TIMESTAMP-UNIQUE.flog.bz2``". The default definition of
"something goes wrong" is the generation of a log event at the ``log.WEIRD``
level or higher, but other criteria could be implemented.

The typical "incident report" we've seen in a large Tahoe grid is about 40kB
compressed, representing about 1800 recent events.

These "flogfiles" have a similar format to the files saved by "``flogtool
tail --save-to``". They are simply lists of log events, with a small header
to indicate which event triggered the incident.

The "``flogtool dump FLOGFILE``" command will take one of these ``.flog.bz2``
files and print their contents to stdout, one line per event. The raw event
dictionaries can be dumped by using "``flogtool dump --verbose FLOGFILE``".
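
For example, to print one of a node's incidents (the timestamp and unique
suffix below are placeholders for a real incident filename)::

  flogtool dump BASEDIR/logs/incidents/incident-TIMESTAMP-UNIQUE.flog.bz2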

The "``flogtool web-viewer``" command can be used to examine the flogfile in
a web browser. It runs a small HTTP server and emits the URL on stdout. This
view provides more structure than the output of "``flogtool dump``": the
parent/child relationships of log events are displayed in a nested format.
"``flogtool web-viewer``" is still fairly immature.
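
A typical invocation (a sketch; the filename is the one saved earlier, though
an incident file works the same way) points the viewer at a flogfile and then
opens the printed URL in a browser::

  flogtool web-viewer tahoe-events.flog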

Working with flogfiles
======================

The "``flogtool filter``" command can be used to take a large flogfile
(perhaps one created by the log-gatherer, see below) and copy a subset of
events into a second file. This smaller flogfile may be easier to work with
than the original. The arguments to "``flogtool filter``" specify filtering
criteria: a predicate that each event must match to be copied into the target
file. ``--before`` and ``--after`` are used to exclude events outside a given
window of time. ``--above`` will retain events above a certain severity
level. ``--from`` retains events sent by a specific tubid.
``--strip-facility`` removes events that were emitted with a given facility
(like ``foolscap.negotiation`` or ``tahoe.upload``).
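
For example, a sketch that strips Foolscap negotiation chatter out of a large
gathered flogfile (assuming the usual ``OLDFILE NEWFILE`` argument order; both
filenames are placeholders)::

  flogtool filter --strip-facility foolscap.negotiation big.flog.bz2 filtered.flog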

Gatherers
=========

In a deployed Tahoe grid, it is useful to get log information automatically
transferred to a central log-gatherer host. This offloads the (admittedly
modest) storage requirements to a different host and provides access to
logfiles from multiple nodes (web-API, storage, or helper) in a single place.

There are two kinds of gatherers: "log gatherer" and "stats gatherer". Each
produces a FURL which needs to be placed in the ``NODEDIR/tahoe.cfg`` file of
each node that is to publish to the gatherer, under the keys
"log_gatherer.furl" and "stats_gatherer.furl" respectively. When the Tahoe
node starts, it will connect to the configured gatherers and offer its
logport: the gatherer will then use the logport to subscribe to hear about
events.

The gatherer will write to files in its working directory, which can then be
examined with tools like "``flogtool dump``" as described above.

Incident Gatherer
-----------------

The "incident gatherer" only collects Incidents: records of the log events
that occurred just before and slightly after some high-level "trigger event"
was recorded. Each incident is classified into a "category": a short string
that summarizes what sort of problem took place. These classification
functions are written after examining a new/unknown incident. The idea is to
recognize when the same problem is happening multiple times.

A collection of classification functions that are useful for Tahoe nodes is
provided in ``misc/incident-gatherer/support_classifiers.py`` . There is
roughly one category for each ``log.WEIRD``-or-higher level event in the
Tahoe source code.

The incident gatherer is created with the "``flogtool create-incident-gatherer
WORKDIR``" command, and started with "``tahoe run``". The generated
"``gatherer.tac``" file should be modified to add classifier functions.
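
A hypothetical sketch of such a function, assuming each classifier receives
the incident's trigger-event dictionary and returns a category string (or
``None`` to decline); the function name, inspected keys, and match string are
illustrative only::

  def classify_mutable_ucw(trigger):
      # 'trigger' is the log-event dictionary that caused the incident.
      text = trigger.get("message", "") or trigger.get("format", "")
      if "UncoordinatedWriteError" in text:
          return "mutable-retrieve-uncoordinated-write-error"
      return None  # unrecognized: leave it for other classifiers or 'unknown'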

The incident gatherer writes incident names (which are simply the relative
pathname of the ``incident-\*.flog.bz2`` file) into ``classified/CATEGORY``.
For example, the ``classified/mutable-retrieve-uncoordinated-write-error``
file contains a list of all incidents which were triggered by an
uncoordinated write that was detected during mutable file retrieval (caused
when somebody changed the contents of the mutable file in between the node's
mapupdate step and the retrieve step). The ``classified/unknown`` file
contains a list of all incidents that did not match any of the classification
functions.

At startup, the incident gatherer will automatically reclassify any incident
report which is not mentioned in any of the ``classified/\*`` files. So the
usual workflow is to examine the incidents in ``classified/unknown``, add a
new classification function, delete ``classified/unknown``, then restart the
gatherer with "``tahoe restart WORKDIR``". The incidents which can be
classified with the new functions will be added to their own
``classified/FOO`` lists, and the remaining ones will be put in
``classified/unknown``, where the process can be repeated until all events
are classifiable.
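
A sketch of one pass through that loop (the incident filename is a
placeholder, and the exact location of the copied incident files inside
``WORKDIR`` may differ)::

  flogtool dump WORKDIR/incidents/incident-TIMESTAMP-UNIQUE.flog.bz2
  # add or extend a classifier function in gatherer.tac
  rm WORKDIR/classified/unknown
  tahoe restart WORKDIR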

The incident gatherer is still fairly immature: future versions will have a
web interface and an RSS feed, so operations personnel can track problems in
the storage grid.

In our experience, each incident takes about two seconds to transfer from the
node that generated it to the gatherer. The gatherer will automatically catch
up on any incidents which occurred while it was offline.

Log Gatherer
------------

The "Log Gatherer" subscribes to hear about every single event published by
the connected nodes, regardless of severity. This server writes these log
events into a large flogfile that is rotated (closed, compressed, and
replaced with a new one) on a periodic basis. Each flogfile is named
according to the range of time it represents, with names like
"``from-2008-08-26-132256--to-2008-08-26-162256.flog.bz2``". The flogfiles
contain events from many different sources, making it easier to correlate
things that happened on multiple machines (such as comparing a client node
making a request with the storage servers that respond to that request).

Create the Log Gatherer with the "``flogtool create-gatherer WORKDIR``"
command, and start it with "``twistd -ny gatherer.tac``". Then copy the
contents of the ``log_gatherer.furl`` file it creates into the
``BASEDIR/tahoe.cfg`` file (under the key ``log_gatherer.furl`` of the
section ``[node]``) of all nodes that should be sending it log events. (See
:doc:`configuration`)
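
The resulting stanza in each node's ``tahoe.cfg`` looks something like this
(the FURL shown is only a placeholder; use the one from the gatherer's
``log_gatherer.furl`` file)::

  [node]
  log_gatherer.furl = pb://TUBID@HOSTNAME:PORT/SWISSNUM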

The "``flogtool filter``" command, described above, is useful to cut down the
potentially large flogfiles into a more focussed form.

Busy nodes, particularly web-API nodes which are performing recursive
deep-size/deep-stats/deep-check operations, can produce a lot of log events.
To avoid overwhelming the node (and using an unbounded amount of memory for
the outbound TCP queue), publishing nodes will start dropping log events when
the outbound queue grows too large. When this occurs, there will be gaps
(non-sequential event numbers) in the log-gatherer's flogfiles.

Adding log messages
===================

When adding new code, the Tahoe developer should add a reasonable number of
new log events. For details, please see the Foolscap logging documentation,
but a few notes are worth stating here (a combined sketch follows the list):

* use a facility prefix of "``tahoe.``", like "``tahoe.mutable.publish``"

* assign each severe (``log.WEIRD`` or higher) event a unique message
  identifier, as the ``umid=`` argument to the ``log.msg()`` call. The
  ``misc/coding_tools/make_umid`` script may be useful for this purpose.
  This will make it easier to write a classification function for these
  messages.

* use the ``parent=`` argument whenever the event is causally/temporally
  clustered with its parent. For example, a download process that involves
  three sequential hash fetches could announce the send and receipt of those
  hash-fetch messages with a ``parent=`` argument that ties them to the
  overall download process. However, each new web-API download request should
  be unparented.

* use the ``format=`` argument in preference to the ``message=`` argument.
  E.g. use ``log.msg(format="got %(n)d shares, need %(k)d", n=n, k=k)``
  instead of ``log.msg("got %d shares, need %d" % (n,k))``. This will allow
  later tools to analyze the event without needing to scrape/reconstruct the
  structured data out of the formatted string.

* Pass extra information as extra keyword arguments, even if they aren't
  included in the ``format=`` string. This information will be displayed in
  the "``flogtool dump --verbose``" output, as well as being available to
  other tools. The ``umid=`` argument should be passed this way.

* use ``log.err`` for the catch-all ``addErrback`` that gets attached to the
  end of any given Deferred chain. When used in conjunction with
  ``FLOGTOTWISTED=1``, ``log.err()`` will tell Twisted about the error-nature
  of the log message, causing Trial to flunk the test (with an "ERROR"
  indication that prints a copy of the Failure, including a traceback).
  Don't use ``log.err`` for events that are ``BAD`` but handled (like hash
  failures: since these are often deliberately provoked by test code, they
  should not cause test failures): use ``log.msg(level=BAD)`` for those
  instead.
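
A sketch combining these conventions, assuming the usual
``allmydata.util.log`` wrapper around Foolscap's logger and that
``log.msg()`` returns an event identifier usable as ``parent=`` (the
facility, ``umid=`` values, and variables are illustrative)::

  from allmydata.util import log

  n, k = 7, 3  # illustrative values

  # An unparented event that starts a new operation; its return value lets
  # later, causally-related events point back at it via parent=.
  op = log.msg(format="starting retrieve, need %(k)d shares", k=k,
               facility="tahoe.mutable.retrieve", umid="xFp3mQ")

  # A follow-up event tied to the operation above. Extra keyword arguments
  # (n=, k=) are carried as structured data for "flogtool dump --verbose".
  log.msg(format="got %(n)d shares, need %(k)d", n=n, k=k,
          parent=op, facility="tahoe.mutable.retrieve",
          level=log.OPERATIONAL, umid="c8vS5w")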

Log Messages During Unit Tests
==============================

If a test is failing and you aren't sure why, start by enabling
``FLOGTOTWISTED=1`` like this::

  make test FLOGTOTWISTED=1

With ``FLOGTOTWISTED=1``, sufficiently-important log events will be written
into ``_trial_temp/test.log``, which may give you more ideas about why the
test is failing.

By default, ``_trial_temp/test.log`` will not receive messages below the
``level=OPERATIONAL`` threshold. You can change the threshold via the
``FLOGLEVEL`` variable, e.g.::

  make test FLOGLEVEL=10 FLOGTOTWISTED=1

(The level numbers are listed in ``src/allmydata/util/log.py``.)

To look at the detailed foolscap logging messages, run the tests like this::

  make test FLOGFILE=flog.out.bz2 FLOGLEVEL=1 FLOGTOTWISTED=1

The first environment variable will cause foolscap log events to be written
to ``./flog.out.bz2`` (instead of merely being recorded in the circular
buffers for the use of remote subscribers or incident reports). The second
will cause all log events to be written out, not just the higher-severity
ones. The third will cause twisted log events (like the markers that indicate
when each unit test is starting and stopping) to be copied into the flogfile,
making it easier to correlate log events with unit tests.

Enabling this form of logging appears to roughly double the runtime of the
unit tests. The ``flog.out.bz2`` file is approximately 2MB.

You can then use "``flogtool dump``" or "``flogtool web-viewer``" on the
resulting ``flog.out.bz2`` file.

("``flogtool tail``" and the log-gatherer are not useful during unit tests,
since there is no single Tub to which all the log messages are published).

Setting these environment variables can cause spurious failures in tests with
race condition bugs. All known instances of this have been fixed as of
Tahoe-LAFS v1.7.1.