Personally, I can't stand working without a VCS hook anymore. It saves me so much time
otherwise lost to context switches: revisiting and revising a PR after I see that its CI
run is red. Much better to fail before I push, so I can revise while the relevant changes
are still fresh in my head.
In order to start using this, one has to run `$ make build` first. Should I add that
and make other documentation changes in the wiki related to my changes in this PR?
In almost all cases there's no need to run the tests both under the coverage collector
and without it. This change fixes the default set of tests to avoid that. Specifically,
it stops running tests under the coverage collector by default for all environments,
since we don't capture any error or failure conditions when reporting coverage anyway.
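As a rough sketch of what that separation can look like in `tox.ini` (the environment
names and trial invocations here are illustrative, not necessarily the exact ones in
this PR):

    # Illustrative only: the default test environment runs trial directly...
    [testenv]
    commands =
        trial {posargs:allmydata}

    # ...and the coverage collector only runs when explicitly requested.
    [testenv:coverage]
    commands =
        coverage run {envbindir}/trial {posargs:allmydata}
        coverage xml

With a layout like that, a bare `$ tox` stays fast for local iteration, while something
like `$ tox -e coverage` is still available when coverage data is actually wanted.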
While the XML coverage report is useful for consumption by other tools, such as
currently by codecov.io in CI, it's not very useful for humans reviewing the immediate
impact of changes on coverage during local development or while monitoring CI output. I
don't think generating the text report takes much more time, so I don't see a downside
here.
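For illustration, assuming a coverage environment along the lines sketched above,
emitting both reports is a matter of one extra command (exact flags may differ from the
real configuration):

    [testenv:coverage]
    commands =
        coverage run {envbindir}/trial {posargs:allmydata}
        # Human-readable summary printed to the terminal / CI log:
        coverage report
        # Machine-readable report consumed by codecov.io and similar tools:
        coverage xml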
The [Patches Trac Wiki page](https://tahoe-lafs.org/trac/tahoe-lafs/wiki/Patches) says
that users should run the `codechecks` tox environment, so this change runs it by
default as part of the full tox test suite, eliminating the extra step.
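A sketch of the idea, assuming the default environments are listed in `envlist` (the
actual list in tox.ini will differ):

    [tox]
    # Illustrative: listing codechecks here makes a plain "tox" run it
    # alongside the regular test environments.
    envlist = codechecks, py27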
On systems where the default Python is Python 3 (such as recent Debian/Ubuntu versions),
`$ tox -e codechecks` produces a ton of failures related to Python 3 compatibility. This
change explicitly forces it to use Python 2.7 until we have Python 3 compatibility.
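In tox terms this presumably comes down to pinning `basepython` for that environment,
roughly (a sketch, not the exact diff):

    [testenv:codechecks]
    # Force the Python 2.7 interpreter even when "python" on the host is Python 3.
    basepython = python2.7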
Without this, git fails underneath towncrier with an "error: Could not
expand include path '~/.gitcinclude'".
See: https://stackoverflow.com/q/36908041
I had added a similar fix for `tox -e py36` in d25c8b1a.
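My understanding (an assumption on my part, not spelled out in this PR) is that the
underlying problem is git being unable to expand `~` because `HOME` isn't visible inside
the tox environment, so the fix amounts to passing it through to whichever environment
invokes towncrier, roughly:

    [testenv:codechecks]
    # Assumption: let git (invoked by towncrier) see $HOME so it can expand
    # "~" in include paths from the user's global git config.
    passenv = HOME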
Running "tox -e integration" takes a while. It would be helpful to
run tests from just one file. With this change, we can do that, like
so:
$ tox -e integration -- integration/test_web.py
Or even just one test, like so:
$ tox -e integration -- integration/test_web.py::test_index
With this, investigating failing integration tests will hopefully be a
little easier.
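Mechanically, this relies on tox's `{posargs}` substitution: everything after `--` on
the tox command line is forwarded to py.test, with the whole `integration/` directory as
the default. A sketch (the real command line has more options):

    [testenv:integration]
    commands =
        # Anything after "tox -e integration --" replaces {posargs};
        # with no arguments it falls back to running all of integration/.
        py.test -v {posargs:integration}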
Fixes: ticket:3285