stats: add a simple stats gathering system
2008-01-31 03:11:07 +00:00

We want to collect runtime statistics from multiple nodes, primarily for
server monitoring purposes. This is a simple implementation of such a
system, intended as a skeleton on which to build more sophistication.

Each client now looks for a 'stats_gatherer.furl' config file. If it has
been configured to use a stats gatherer, it internally instantiates a
StatsProvider. This is the central place where code that wishes to offer
stats up for monitoring reports them: either by calling
stats_provider.count('stat.name', value) to increment a counter, or by
registering an object as a stats producer with sp.register_producer(obj).
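
To make the API concrete, here is a minimal sketch of both reporting
styles. It is illustrative only, not part of this change: the CacheStats
class and the stat names are invented.

```python
# Illustrative only: a hypothetical component reporting through the
# StatsProvider API described above.

class CacheStats(object):
    """A stats producer: any object with a get_stats() method returning a
    dict of name->value, queried each time the gatherer polls."""
    def __init__(self):
        self.hits = 0
        self.misses = 0

    def get_stats(self):
        return {'cache.hits': self.hits,
                'cache.misses': self.misses}

def wire_up(stats_provider):
    # style 1: bump a named counter directly
    stats_provider.count('cache.lookups', 1)
    # style 2: register a producer to be queried on every poll
    stats_provider.register_producer(CacheStats())
```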

On startup, the StatsProvider connects to the StatsGatherer server and
offers it a reference to itself. The StatsGatherer is then responsible for
periodically polling the attached providers to retrieve their data. When
the gatherer queries the provider, the provider in turn queries each
registered producer; both the internal 'counters' and the queried 'stats'
are then reported to the gatherer.

This change also provides a simple gatherer app (cf. 'make
stats-gatherer-run') which prints its furl and listens for incoming
connections. Once a minute, the gatherer polls all connected providers and
writes the retrieved data into a pickle file.

Also included is a munin plugin which knows how to read the gatherer's
stats.pickle and output data munin can interpret. This plugin,
tahoe-stats.py, can be symlinked under several different names within
munin's 'plugins' directory; it inspects argv to determine which data to
display, doing a lookup in a table within that file. It looks in the
environment for 'statsfile' to determine the path to the gatherer's
stats.pickle; a sketch of such a reader follows. An example plugins-conf.d
file is provided.
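
A minimal reconstruction of that reading side (mine, not the plugin
itself; the real tahoe-stats.py additionally maps its argv name to a
particular stat through an internal table):

```python
# Sketch: read the gatherer's pickle the way the munin plugin does, using
# the 'statsfile' environment variable described above.
import os
import pickle
import pprint

statsfile = os.environ.get('statsfile', 'stats.pickle')
with open(statsfile, 'rb') as f:
    data = pickle.load(f)
pprint.pprint(data)
```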

from __future__ import print_function

import json
import os
import pprint
import time
from collections import deque

# Python 2 compatibility
from future.utils import PY2
if PY2:
    from future.builtins import str  # noqa: F401

feat(py3): Convert unicode-only modules to str
2020-09-27 20:00:19 +00:00

Modules that reference `unicode` but do *not* reference `str` can safely be
converted to use `str` in the way that is closest to correct under Python 3
while remaining Python 2 compatible, [per
`python-future`](https://python-future.org/compatible_idioms.html?highlight=unicode#unicode).
This change results in 4 additional tests passing under Python 3 that weren't
passing before, one previous test error becoming a failure, and more coverage
in a few modules. Here's the diff of the output from running all tests under
Python 3 before and after these changes; irrelevant differences (time stamps,
object ids, etc.) have been elided:
```diff
--- .tox/make-test-py3-all-old.log 2020-09-27 20:56:55.761691130 -0700
+++ .tox/make-test-py3-all-new.log 2020-09-27 20:58:16.242075678 -0700
@@ -1,6 +1,6 @@
...
@@ -4218,7 +4218,7 @@
[ERROR]
(#.### secs)
allmydata.test.mutable.test_version.Version.test_download_version ... Traceback (most recent call last):
- File "/home/rpatterson/src/work/sfu/tahoe-lafs/src/allmydata/test/mutable/test_version.py", line 274, in test_download_version
+ File "/home/rpatterson/src/work/sfu/tahoe-lafs/src/allmydata/test/mutable/test_version.py", line 279, in test_download_version
d = self.publish_multiple()
File "/home/rpatterson/src/work/sfu/tahoe-lafs/src/allmydata/test/mutable/util.py", line 372, in publish_multiple
self._nodemaker = make_nodemaker(self._storage)
@@ -4438,40 +4438,26 @@
allmydata.test.test_abbreviate.Abbreviate.test_time ... [OK]
(#.### secs)
allmydata.test.test_auth.AccountFileCheckerKeyTests.test_authenticated ... Traceback (most recent call last):
- File "/home/rpatterson/src/work/sfu/tahoe-lafs/src/allmydata/test/test_auth.py", line 42, in setUp
- abspath = abspath_expanduser_unicode(unicode(self.account_file.path))
-builtins.NameError: name 'unicode' is not defined
+Failure: twisted.cred.error.UnauthorizedLogin:
[ERROR]
(#.### secs)
allmydata.test.test_auth.AccountFileCheckerKeyTests.test_missing_signature ... Traceback (most recent call last):
- File "/home/rpatterson/src/work/sfu/tahoe-lafs/src/allmydata/test/test_auth.py", line 42, in setUp
- abspath = abspath_expanduser_unicode(unicode(self.account_file.path))
-builtins.NameError: name 'unicode' is not defined
-[ERROR]
+ File "/home/rpatterson/src/work/sfu/tahoe-lafs/.tox/py36-coverage/lib/python3.6/site-packages/twisted/trial/_asynctest.py", line 75, in _eb
+ raise self.failureException(output)
+twisted.trial.unittest.FailTest:
+Expected: (<class 'twisted.conch.error.ValidPublicKey'>,)
+Got:
+[Failure instance: Traceback (failure with no frames): <class 'twisted.cred.error.UnauthorizedLogin'>:
+]
+[FAILURE]
(#.### secs)
-allmydata.test.test_auth.AccountFileCheckerKeyTests.test_password_auth_user ... Traceback (most recent call last):
- File "/home/rpatterson/src/work/sfu/tahoe-lafs/src/allmydata/test/test_auth.py", line 42, in setUp
- abspath = abspath_expanduser_unicode(unicode(self.account_file.path))
-builtins.NameError: name 'unicode' is not defined
-[ERROR]
+allmydata.test.test_auth.AccountFileCheckerKeyTests.test_password_auth_user ... [OK]
(#.### secs)
-allmydata.test.test_auth.AccountFileCheckerKeyTests.test_unknown_user ... Traceback (most recent call last):
- File "/home/rpatterson/src/work/sfu/tahoe-lafs/src/allmydata/test/test_auth.py", line 42, in setUp
- abspath = abspath_expanduser_unicode(unicode(self.account_file.path))
-builtins.NameError: name 'unicode' is not defined
-[ERROR]
+allmydata.test.test_auth.AccountFileCheckerKeyTests.test_unknown_user ... [OK]
(#.### secs)
-allmydata.test.test_auth.AccountFileCheckerKeyTests.test_unrecognized_key ... Traceback (most recent call last):
- File "/home/rpatterson/src/work/sfu/tahoe-lafs/src/allmydata/test/test_auth.py", line 42, in setUp
- abspath = abspath_expanduser_unicode(unicode(self.account_file.path))
-builtins.NameError: name 'unicode' is not defined
-[ERROR]
+allmydata.test.test_auth.AccountFileCheckerKeyTests.test_unrecognized_key ... [OK]
(#.### secs)
-allmydata.test.test_auth.AccountFileCheckerKeyTests.test_wrong_signature ... Traceback (most recent call last):
- File "/home/rpatterson/src/work/sfu/tahoe-lafs/src/allmydata/test/test_auth.py", line 42, in setUp
- abspath = abspath_expanduser_unicode(unicode(self.account_file.path))
-builtins.NameError: name 'unicode' is not defined
-[ERROR]
+allmydata.test.test_auth.AccountFileCheckerKeyTests.test_wrong_signature ... [OK]
(#.### secs)
allmydata.test.test_backupdb.BackupDB.test_basic ... [OK]
(#.### secs)
@@ -4615,7 +4601,7 @@
src/allmydata/crypto/util.py 12 2 4 2 75% 13, 32, 12->13, 30->32
src/allmydata/deep_stats.py 83 63 26 0 18% 27-52, 56-58, 62-82, 86-91, 94, 97, 103-114, 117-121, 125-131, 135
src/allmydata/dirnode.py 525 420 178 0 15% 70-103, 112-116, 119-135, 140-143, 146-160, 165-173, 176-177, 180-205, 208-217, 223-229, 248-286, 293-299, 302, 310, 315, 318-324, 327-332, 336-340, 344-346, 355-406, 410, 413, 416, 419, 422, 425, 428, 431-433, 436, 439, 442, 445, 448-450, 453, 457, 459, 464, 469-472, 475-478, 481-484, 489-492, 498-501, 504-507, 510-518, 530-532, 539-555, 558-566, 570-589, 600-610, 613-620, 628-641, 646-652, 657-678, 693-714, 752-761, 765-770, 774-812, 819-820, 825, 828, 831, 836-839, 842-849, 852-853, 862-877, 880-881, 884-891, 894, 897-899
-src/allmydata/frontends/auth.py 100 71 28 0 26% 21-22, 30-48, 51, 54-56, 59-70, 80-87, 100-110, 117-118, 121, 124-142, 147-150, 156-159
+src/allmydata/frontends/auth.py 100 52 28 4 47% 21-22, 38, 41-44, 51, 54-56, 65-70, 80-87, 106-108, 117-118, 121, 124-142, 147-150, 156-159, 37->38, 40->41, 59->65, 101->106
src/allmydata/frontends/ftpd.py 255 254 84 0 1% 4-337
src/allmydata/frontends/sftpd.py 1211 1208 488 0 1% 4-2014
src/allmydata/hashtree.py 174 135 72 1 16% 59, 75-78, 106-108, 114-117, 123-126, 132-136, 142-149, 152-162, 165-169, 172, 175, 180, 183, 186, 218-232, 259-262, 295-306, 320-323, 326-331, 384-484, 58->59
@@ -4653,7 +4639,7 @@
src/allmydata/scripts/admin.py 51 31 2 0 38% 9-14, 17-21, 25, 28, 31-37, 40-46, 56-57, 59, 61-66, 74-78
src/allmydata/scripts/backupdb.py 146 91 14 1 36% 84-91, 94-96, 99, 103, 106, 111-114, 117-119, 122, 125, 128, 176-221, 231-242, 245-263, 266-272, 308-324, 327-333, 336-341, 306->308
src/allmydata/scripts/cli.py 259 124 46 6 46% 25-49, 69-72, 79-81, 103, 142-146, 175, 221-222, 258, 265-266, 284-285, 330-331, 338-341, 346-355, 361-362, 366-373, 388, 405, 417, 432, 449, 479-481, 484-486, 489-491, 494-496, 499-501, 504-515, 518-520, 523-525, 528-530, 533, 536-538, 541-543, 546-548, 551-553, 556-558, 561-563, 566-568, 571-573, 576-577, 60->exit, 61->exit, 174->175, 180->exit, 181->exit, 219->221
-src/allmydata/scripts/common.py 153 74 60 4 48% 64, 82, 88, 100, 114-126, 130-152, 159-163, 168-169, 172, 177, 191-236, 240-241, 47->49, 63->64, 79->82, 87->88
+src/allmydata/scripts/common.py 154 74 60 4 49% 69, 87, 93, 105, 119-131, 135-157, 164-168, 173-174, 177, 182, 196-241, 245-246, 52->54, 68->69, 84->87, 92->93
src/allmydata/scripts/common_http.py 77 58 20 0 20% 15-30, 34-36, 38, 42-83, 87, 90, 94-96, 101
src/allmydata/scripts/create_node.py 302 185 114 8 30% 24, 61-96, 99-111, 114-128, 136-139, 169-174, 191-194, 205-208, 224-229, 235, 242, 256-278, 289-292, 295-298, 329, 339, 347-380, 385-445, 448-450, 455-477, 223->224, 234->235, 241->242, 252->256, 288->289, 294->295, 328->329, 338->339
src/allmydata/scripts/debug.py 719 638 202 0 9% 14, 31-32, 35-49, 52-60, 63-142, 146-154, 157-164, 168-217, 220-304, 307-401, 407, 417, 437-465, 468-485, 488-602, 606, 609-611, 637-648, 653-656, 659, 683-689, 692-810, 813-842, 845-848, 851-865, 869, 888, 891-940, 946, 949-950, 957, 960-961, 967-972, 984-985, 999-1000, 1003-1004, 1020-1021, 1025-1031, 1046-1050
@@ -4661,10 +4647,10 @@
src/allmydata/scripts/run_common.py 135 18 24 6 85% 37, 41-46, 59-60, 149, 158, 192-193, 216-220, 226-227, 55->62, 135->exit, 135->exit, 148->149, 191->192, 225->226
src/allmydata/scripts/runner.py 138 53 42 11 56% 84-85, 91, 97-99, 104, 114, 123-132, 140, 146, 149-160, 174-181, 186, 189-190, 204-232, 248, 255, 31->36, 103->104, 113->114, 139->140, 145->146, 147->149, 185->186, 188->189, 202->204, 247->248, 254->255
src/allmydata/scripts/slow_operation.py 69 56 22 0 14% 15-44, 47-52, 55-61, 64-83
-src/allmydata/scripts/stats_gatherer.py 41 25 10 0 31% 20-25, 62-86
+src/allmydata/scripts/stats_gatherer.py 42 25 10 0 33% 25-30, 67-91
src/allmydata/scripts/tahoe_add_alias.py 106 91 30 0 11% 20-32, 35-59, 63-98, 102-111, 115-144
src/allmydata/scripts/tahoe_backup.py 331 267 85 0 15% 20-35, 38-51, 54-58, 71-73, 76-152, 155-157, 160-161, 164-174, 178-209, 212-242, 246-274, 278-279, 287-311, 322-331, 336, 339, 342-351, 356, 359, 362-367, 372-374, 379, 384, 389, 398, 417-425, 428, 431-461, 469-480, 483-486, 500-504, 511-512, 525, 538-542, 545-549, 552-555, 558-561, 564, 571, 578, 586-594
-src/allmydata/scripts/tahoe_check.py 263 235 121 0 7% 15, 20-100, 103-112, 120-129, 132-167, 170-173, 179-192, 195-256, 259-270, 277-323, 327-336, 339
+src/allmydata/scripts/tahoe_check.py 264 235 121 0 8% 20, 25-105, 108-117, 125-134, 137-172, 175-178, 184-197, 200-261, 264-275, 282-328, 332-341, 344
src/allmydata/scripts/tahoe_cp.py 602 503 226 0 12% 22, 26, 30-31, 34-37, 40-41, 44-47, 50-53, 56-60, 63-70, 75-77, 80, 83, 86, 90-91, 94, 98-99, 102, 106-111, 114, 117-134, 138-142, 145-159, 162-172, 175-177, 180, 185-189, 192, 195-197, 200-203, 206, 210-214, 218-223, 230-233, 236, 239-253, 256-263, 266-297, 303, 307-309, 316, 320-323, 326-333, 336-350, 354-358, 361-397, 403-413, 416-433, 436-437, 440-454, 465-496, 504-580, 583, 589-630, 636-689, 693-698, 701-703, 706-719, 723-762, 765-775, 778-806, 810-818, 821-838, 842, 845-857, 862-863, 867
src/allmydata/scripts/tahoe_get.py 37 32 12 0 10% 9-45
src/allmydata/scripts/tahoe_invite.py 59 41 8 0 27% 27-31, 36-71, 76-101
@@ -4679,7 +4665,7 @@
src/allmydata/scripts/tahoe_stop.py 60 47 10 0 19% 16, 24-84
src/allmydata/scripts/tahoe_unlink.py 28 23 6 0 15% 12-40
src/allmydata/scripts/tahoe_webopen.py 27 24 12 0 8% 7-31
-src/allmydata/stats.py 242 156 54 3 33% 28-34, 37-40, 43-47, 50-64, 67-72, 101, 104-110, 113-125, 144-146, 154-155, 160-163, 169-174, 178-187, 191, 200-207, 210, 213-219, 222-228, 232-234, 237, 241, 246-250, 253, 256-257, 263-278, 281-285, 288-293, 299-325, 100->101, 143->144, 153->154
+src/allmydata/stats.py 242 156 54 3 33% 29-35, 38-41, 44-48, 51-65, 68-73, 102, 105-111, 114-126, 145-147, 155-156, 161-164, 170-175, 179-188, 192, 201-208, 211, 214-220, 223-229, 233-235, 238, 242, 247-251, 254, 257-258, 264-279, 282-286, 289-294, 300-326, 101->102, 144->145, 154->155
src/allmydata/storage/common.py 24 2 4 2 86% 11, 28, 10->11, 36->39
src/allmydata/storage/crawler.py 222 125 64 6 37% 16, 90, 111-113, 148-178, 192-193, 231, 244, 251, 275-312, 315-363, 377-384, 393, 416, 428, 445, 453, 488-492, 495-508, 13->16, 89->90, 96->99, 228->231, 248->251, 268->271
src/allmydata/storage/expirer.py 240 183 81 2 21% 9, 74-79, 119, 122, 125-167, 171-233, 236-253, 256-261, 264-266, 269-274, 280-284, 288-322, 388-435, 7->9, 71->74
@@ -4748,7 +4734,7 @@
src/allmydata/windows/fixups.py 133 133 54 0 0% 1-237
src/allmydata/windows/registry.py 42 42 12 0 0% 1-77
------------------------------------------------------------------------------------------------
-TOTAL 27427 20411 8234 294 22%
+TOTAL 27430 20392 8234 298 22%
18 files skipped due to complete coverage.
+ '[' '!' -z 1 ']'
```
Trac: refs #3448, https://tahoe-lafs.org/trac/tahoe-lafs/ticket/3448
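
For reference, the conversion idiom this commit applies is small; the
sketch below is assembled from the description and the test diff above,
not copied from the patch:

```python
# Python 2 compatibility: rebind `str` to the text type under Python 2 so
# that code written against Python 3 semantics runs the same on both.
from future.utils import PY2
if PY2:
    from future.builtins import str  # noqa: F401

# before (Python 2 only):   abspath_expanduser_unicode(unicode(path))
# after  (both versions):   abspath_expanduser_unicode(str(path))
```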

from twisted.internet import reactor
from twisted.application import service
from twisted.application.internet import TimerService
from zope.interface import implementer
from foolscap.api import eventually, DeadReferenceError, Referenceable, Tub

from allmydata.util import log
from allmydata.util.encodingutil import quote_local_unicode_path
from allmydata.interfaces import RIStatsProvider, RIStatsGatherer, IStatsProducer


@implementer(IStatsProducer)
class LoadMonitor(service.MultiService):
    loop_interval = 1
    num_samples = 60

    def __init__(self, provider, warn_if_delay_exceeds=1):
        service.MultiService.__init__(self)
        self.provider = provider
        self.warn_if_delay_exceeds = warn_if_delay_exceeds
        self.started = False
        self.last = None
        self.stats = deque()
        self.timer = None

    def startService(self):
        if not self.started:
            self.started = True
            self.timer = reactor.callLater(self.loop_interval, self.loop)
        service.MultiService.startService(self)

    def stopService(self):
        self.started = False
        if self.timer:
            self.timer.cancel()
            self.timer = None
        return service.MultiService.stopService(self)

    def loop(self):
        self.timer = None
        if not self.started:
            return
        now = time.time()
        if self.last is not None:
            delay = now - self.last - self.loop_interval
            if delay > self.warn_if_delay_exceeds:
                log.msg(format='excessive reactor delay (%ss)', args=(delay,),
                        level=log.UNUSUAL)
            self.stats.append(delay)
            while len(self.stats) > self.num_samples:
                self.stats.popleft()

        self.last = now
        self.timer = reactor.callLater(self.loop_interval, self.loop)

    def get_stats(self):
        if self.stats:
            avg = sum(self.stats) / len(self.stats)
            m_x = max(self.stats)
        else:
            avg = m_x = 0
        return { 'load_monitor.avg_load': avg,
                 'load_monitor.max_load': m_x, }
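
LoadMonitor measures how late each one-second reactor.callLater() wakeup
fires (scheduling drift under load) and reports just the average and
maximum of the sampled delays. A hypothetical poke at it (values invented,
service never started):

```python
# Hypothetical: feed a LoadMonitor three observed delay samples and read
# its stats back without starting the timer loop.
monitor = LoadMonitor(provider=None)
monitor.stats.extend([0.25, 0.75, 0.5])   # seconds of callLater lateness
print(monitor.get_stats())
# -> {'load_monitor.avg_load': 0.5, 'load_monitor.max_load': 0.75}
```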


@implementer(IStatsProducer)
class CPUUsageMonitor(service.MultiService):
    HISTORY_LENGTH = 15
    POLL_INTERVAL = 60

    def __init__(self):
        service.MultiService.__init__(self)
        # we don't use time.clock() here, because the constructor is run by
        # the twistd parent process (as it loads the .tac file), whereas the
        # rest of the program will be run by the child process, after twistd
        # forks. Instead, set self.initial_cpu as soon as the reactor starts
        # up.
        self.initial_cpu = 0.0 # just in case
        eventually(self._set_initial_cpu)
        self.samples = []
        # we provide 1min, 5min, and 15min moving averages
        TimerService(self.POLL_INTERVAL, self.check).setServiceParent(self)

    def _set_initial_cpu(self):
        self.initial_cpu = time.clock()

    def check(self):
        now_wall = time.time()
        now_cpu = time.clock()
        self.samples.append( (now_wall, now_cpu) )
        while len(self.samples) > self.HISTORY_LENGTH+1:
            self.samples.pop(0)

    def _average_N_minutes(self, size):
        if len(self.samples) < size+1:
            return None
        first = -size-1
        elapsed_wall = self.samples[-1][0] - self.samples[first][0]
        elapsed_cpu = self.samples[-1][1] - self.samples[first][1]
        fraction = elapsed_cpu / elapsed_wall
        return fraction

    def get_stats(self):
        s = {}
        avg = self._average_N_minutes(1)
        if avg is not None:
            s["cpu_monitor.1min_avg"] = avg
        avg = self._average_N_minutes(5)
        if avg is not None:
            s["cpu_monitor.5min_avg"] = avg
        avg = self._average_N_minutes(15)
        if avg is not None:
            s["cpu_monitor.15min_avg"] = avg
        now_cpu = time.clock()
        s["cpu_monitor.total"] = now_cpu - self.initial_cpu
        return s
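
A caveat for modern readers: time.clock() was removed in Python 3.8. The
sampling arithmetic itself is simple; below is a sketch using
time.process_time(), my substitution for time.clock() (they match on
POSIX), not code from this file:

```python
import time

def cpu_fraction(samples):
    """Fraction of one CPU consumed across a window of (wall, cpu) pairs;
    the same arithmetic as CPUUsageMonitor._average_N_minutes()."""
    elapsed_wall = samples[-1][0] - samples[0][0]
    elapsed_cpu = samples[-1][1] - samples[0][1]
    return elapsed_cpu / elapsed_wall

samples = [(time.time(), time.process_time())]
sum(range(10**6))    # burn a little CPU so the window is non-trivial
samples.append((time.time(), time.process_time()))
print('cpu fraction: %.2f' % cpu_fraction(samples))
```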


@implementer(RIStatsProvider)
class StatsProvider(Referenceable, service.MultiService):

    def __init__(self, node, gatherer_furl):
        service.MultiService.__init__(self)
        self.node = node
        self.gatherer_furl = gatherer_furl # might be None

        self.counters = {}
        self.stats_producers = []

        # only run the LoadMonitor (which submits a timer every second) if
        # there is a gatherer who is going to be paying attention. Our stats
        # are visible through HTTP even without a gatherer, so run the rest
        # of the stats (including the once-per-minute CPUUsageMonitor)
        if gatherer_furl:
            self.load_monitor = LoadMonitor(self)
            self.load_monitor.setServiceParent(self)
            self.register_producer(self.load_monitor)

        self.cpu_monitor = CPUUsageMonitor()
        self.cpu_monitor.setServiceParent(self)
        self.register_producer(self.cpu_monitor)

    def startService(self):
        if self.node and self.gatherer_furl:
            nickname_utf8 = self.node.nickname.encode("utf-8")
            self.node.tub.connectTo(self.gatherer_furl,
                                    self._connected, nickname_utf8)
        service.MultiService.startService(self)

    def count(self, name, delta=1):
src/allmydata/scripts/tahoe_get.py 37 32 12 0 10% 9-45
src/allmydata/scripts/tahoe_invite.py 59 41 8 0 27% 27-31, 36-71, 76-101
@@ -4679,7 +4665,7 @@
src/allmydata/scripts/tahoe_stop.py 60 47 10 0 19% 16, 24-84
src/allmydata/scripts/tahoe_unlink.py 28 23 6 0 15% 12-40
src/allmydata/scripts/tahoe_webopen.py 27 24 12 0 8% 7-31
-src/allmydata/stats.py 242 156 54 3 33% 28-34, 37-40, 43-47, 50-64, 67-72, 101, 104-110, 113-125, 144-146, 154-155, 160-163, 169-174, 178-187, 191, 200-207, 210, 213-219, 222-228, 232-234, 237, 241, 246-250, 253, 256-257, 263-278, 281-285, 288-293, 299-325, 100->101, 143->144, 153->154
+src/allmydata/stats.py 242 156 54 3 33% 29-35, 38-41, 44-48, 51-65, 68-73, 102, 105-111, 114-126, 145-147, 155-156, 161-164, 170-175, 179-188, 192, 201-208, 211, 214-220, 223-229, 233-235, 238, 242, 247-251, 254, 257-258, 264-279, 282-286, 289-294, 300-326, 101->102, 144->145, 154->155
src/allmydata/storage/common.py 24 2 4 2 86% 11, 28, 10->11, 36->39
src/allmydata/storage/crawler.py 222 125 64 6 37% 16, 90, 111-113, 148-178, 192-193, 231, 244, 251, 275-312, 315-363, 377-384, 393, 416, 428, 445, 453, 488-492, 495-508, 13->16, 89->90, 96->99, 228->231, 248->251, 268->271
src/allmydata/storage/expirer.py 240 183 81 2 21% 9, 74-79, 119, 122, 125-167, 171-233, 236-253, 256-261, 264-266, 269-274, 280-284, 288-322, 388-435, 7->9, 71->74
@@ -4748,7 +4734,7 @@
src/allmydata/windows/fixups.py 133 133 54 0 0% 1-237
src/allmydata/windows/registry.py 42 42 12 0 0% 1-77
------------------------------------------------------------------------------------------------
-TOTAL 27427 20411 8234 294 22%
+TOTAL 27430 20392 8234 298 22%
18 files skipped due to complete coverage.
+ '[' '!' -z 1 ']'
```
Trac: refs #3448, https://tahoe-lafs.org/trac/tahoe-lafs/ticket/3448
        if isinstance(name, str):
            name = name.encode("utf-8")
        val = self.counters.setdefault(name, 0)
        self.counters[name] = val + delta

    def register_producer(self, stats_producer):
        self.stats_producers.append(IStatsProducer(stats_producer))

    def get_stats(self):
        stats = {}
        for sp in self.stats_producers:
            stats.update(sp.get_stats())
        ret = { 'counters': self.counters, 'stats': stats }
        log.msg(format='get_stats() -> %(stats)s', stats=ret, level=log.NOISY)
        return ret

    def remote_get_stats(self):
        # The remote API expects keys to be bytes:
        def to_bytes(d):
            result = {}
            for (k, v) in d.items():
                if isinstance(k, str):
                    k = k.encode("utf-8")
                result[k] = v
            return result

        stats = self.get_stats()
        return {b"counters": to_bytes(stats["counters"]),
                b"stats": to_bytes(stats["stats"])}

    def _connected(self, gatherer, nickname):
        gatherer.callRemoteOnly('provide', self, nickname or '')
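The provider half is small: `count()` bumps a named counter (encoding the name to UTF-8 bytes first), `register_producer()` adds an object whose `get_stats()` is merged in on demand, and `remote_get_stats()` re-encodes the remaining keys for the wire. A minimal usage sketch, not from the module itself: `sp` stands in for the node's StatsProvider instance, and `UploadCounter` is a hypothetical producer.

```python
from zope.interface import implementer
from allmydata.interfaces import IStatsProducer

@implementer(IStatsProducer)
class UploadCounter(object):
    # a producer returns a flat dict of stat-name -> value
    def get_stats(self):
        return {"uploader.bytes_uploaded": 12345}

sp.count("uploader.files_uploaded")   # bump a counter (delta defaults to 1)
sp.register_producer(UploadCounter())

# sp.get_stats() now merges every producer's dict with the counters, e.g.:
# {'counters': {b'uploader.files_uploaded': 1},
#  'stats': {'uploader.bytes_uploaded': 12345}}
```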

@implementer(RIStatsGatherer)
class StatsGatherer(Referenceable, service.MultiService):

    poll_interval = 60

    def __init__(self, basedir):
        service.MultiService.__init__(self)
        self.basedir = basedir
        self.clients = {}
        self.nicknames = {}

        self.timer = TimerService(self.poll_interval, self.poll)
        self.timer.setServiceParent(self)

    def get_tubid(self, rref):
        return rref.getRemoteTubID()

    def remote_provide(self, provider, nickname):
        tubid = self.get_tubid(provider)
        if tubid == '<unauth>':
            print("WARNING: failed to get tubid for %s (%s)" % (provider, nickname))
            # don't add this client to the set we poll (that would pollute
            # the data), and there's no point tracking its disconnect
            return
        self.clients[tubid] = provider
        self.nicknames[tubid] = nickname

    def poll(self):
        # ask every connected provider for its current stats; each request
        # gets its own Deferred, so one slow or dead client cannot block
        # the others
        for tubid, client in self.clients.items():
            nickname = self.nicknames.get(tubid)
            d = client.callRemote('get_stats')
            d.addCallbacks(self.got_stats, self.lost_client,
                           callbackArgs=(tubid, nickname),
                           errbackArgs=(tubid,))
            d.addErrback(self.log_client_error, tubid)
    def lost_client(self, f, tubid):
        # this is called lazily, when a get_stats request fails
        del self.clients[tubid]
        del self.nicknames[tubid]
        f.trap(DeadReferenceError)

    def log_client_error(self, f, tubid):
        log.msg("StatsGatherer: error in get_stats(), peerid=%s" % tubid,
                level=log.UNUSUAL, failure=f)

    def got_stats(self, stats, tubid, nickname):
        raise NotImplementedError()
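The polling loop is driven entirely by twisted's `TimerService`, registered as a child service in `__init__`; nothing polls until the service hierarchy is started. A minimal, self-contained sketch of the same pattern (the `Poller` name and body are illustrative, not from the module):

```python
from twisted.application import service
from twisted.application.internet import TimerService

class Poller(service.MultiService):
    poll_interval = 60  # seconds between polls, as in StatsGatherer

    def __init__(self):
        service.MultiService.__init__(self)
        # TimerService fires once at startup, then every poll_interval seconds
        self.timer = TimerService(self.poll_interval, self.poll)
        self.timer.setServiceParent(self)

    def poll(self):
        print("polling...")

# the timer only runs while the service is started:
#   p = Poller(); p.startService()
```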

class StdOutStatsGatherer(StatsGatherer):
    verbose = True

    def remote_provide(self, provider, nickname):
        tubid = self.get_tubid(provider)
        if self.verbose:
            print('connect "%s" [%s]' % (nickname, tubid))
            provider.notifyOnDisconnect(self.announce_lost_client, tubid)
        StatsGatherer.remote_provide(self, provider, nickname)

    def announce_lost_client(self, tubid):
        print('disconnect "%s" [%s]' % (self.nicknames[tubid], tubid))

    def got_stats(self, stats, tubid, nickname):
        print('"%s" [%s]:' % (nickname, tubid))
        pprint.pprint(stats)


class JSONStatsGatherer(StdOutStatsGatherer):
    # inherit from StdOutStatsGatherer for connect/disconnect notifications

    def __init__(self, basedir=u".", verbose=True):
        self.verbose = verbose
        StatsGatherer.__init__(self, basedir)
        self.jsonfile = os.path.join(basedir, "stats.json")

        if os.path.exists(self.jsonfile):
            try:
                with open(self.jsonfile, 'rb') as f:
                    self.gathered_stats = json.load(f)
            except Exception:
                print("Error while attempting to load stats file %s.\n"
                      "You may need to restore this file from a backup,"
                      " or delete it if no backup is available.\n" %
                      quote_local_unicode_path(self.jsonfile))
                raise
        else:
            self.gathered_stats = {}

    def got_stats(self, stats, tubid, nickname):
        s = self.gathered_stats.setdefault(tubid, {})
        s['timestamp'] = time.time()
        s['nickname'] = nickname
        s['stats'] = stats
        self.dump_json()

    def dump_json(self):
        # write to a sibling tempfile, then rename it into place; the
        # unlink is for Windows, where rename won't replace an existing file
        tmp = "%s.tmp" % (self.jsonfile,)
        with open(tmp, 'wb') as f:
            json.dump(self.gathered_stats, f)
        if os.path.exists(self.jsonfile):
            os.unlink(self.jsonfile)
        os.rename(tmp, self.jsonfile)
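Because `dump_json()` only renames fully written tempfiles into place, a consumer never sees a half-written `stats.json` (though there is a brief window between the unlink and the rename where the file may be missing entirely). A sketch of such a consumer, reading the fields that `got_stats()` writes above; the path is an assumption:

```python
import json

# each top-level entry is keyed by the provider's tubid
with open("stats.json") as f:
    gathered = json.load(f)

for tubid, record in gathered.items():
    print(tubid, record["nickname"], record["timestamp"])
    for name, value in record["stats"].get("counters", {}).items():
        print("   ", name, value)
```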

class StatsGathererService(service.MultiService):
    furl_file = "stats_gatherer.furl"

    def __init__(self, basedir=".", verbose=False):
        service.MultiService.__init__(self)
        self.basedir = basedir
        self.tub = Tub(certFile=os.path.join(self.basedir,
                                             "stats_gatherer.pem"))
        self.tub.setServiceParent(self)
        self.tub.setOption("logLocalFailures", True)
        self.tub.setOption("logRemoteFailures", True)
        self.tub.setOption("expose-remote-exception-types", False)

        self.stats_gatherer = JSONStatsGatherer(self.basedir, verbose)
        self.stats_gatherer.setServiceParent(self)
        try:
            with open(os.path.join(self.basedir, "location")) as f:
                location = f.read().strip()
        except EnvironmentError:
            raise ValueError("Unable to find 'location' in BASEDIR, please rebuild your stats-gatherer")
        try:
            with open(os.path.join(self.basedir, "port")) as f:
                port = f.read().strip()
        except EnvironmentError:
            raise ValueError("Unable to find 'port' in BASEDIR, please rebuild your stats-gatherer")

        self.tub.listenOn(port)
        self.tub.setLocation(location)
        ff = os.path.join(self.basedir, self.furl_file)
        self.gatherer_furl = self.tub.registerReference(self.stats_gatherer,
                                                        furlFile=ff)
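`StatsGathererService` deliberately refuses to guess its endpoint: the `port` and `location` files must already exist in the basedir (normally written by `tahoe create-stats-gatherer`). A hypothetical hand-rolled setup, with made-up endpoint values:

```python
import os

basedir = "stats-gatherer"
os.mkdir(basedir)
# foolscap strports-style listening endpoint (hypothetical value)
with open(os.path.join(basedir, "port"), "w") as f:
    f.write("tcp:3456")
# the connection hint advertised inside the gatherer's furl (hypothetical)
with open(os.path.join(basedir, "location"), "w") as f:
    f.write("tcp:stats.example.com:3456")

# StatsGathererService(basedir) will then listen on the port, register the
# JSONStatsGatherer with its Tub, and write the resulting furl to
# stats_gatherer.furl for providers to use.
```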