This section explains how to apply random load to nodes in order to stress test them. The framework also allows the specification of disruptions that strain different resources, allowing us to inspect the nodes' behaviour under extreme conditions.
The load-testing framework is incomplete and is not currently part of CI, but the basic pieces are in place.
Configuration of the load testing cluster
-----------------------------------------
The load-testing framework currently assumes the following about the node cluster:
* The nodes are managed as a systemd service
* The node directories are the same across the cluster
* The messaging ports are the same across the cluster
* The executing identity of the load-test has SSH access to all machines
* There is a single network map service node
* There is a single notary node
* Some disruptions assume that additional tools (such as ``openssl``) are present on the hosts
Note that these assumptions could and should be relaxed as needed.
The load test ``Main`` expects a single command line argument that points to a configuration file specifying the cluster hosts and optional overrides for the default configuration:
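The configuration file itself is not reproduced here; a hypothetical example might look like the following (the key names and the disruption specification are illustrative assumptions, not the framework's actual schema):

```
# Hypothetical load-test configuration; key names are illustrative only.
nodeHosts = ["node0.example.com", "node1.example.com", "node2.example.com"]
sshUser = "loadtest"
remoteNodeDirectory = "/opt/corda"
remoteMessagingPort = 10003

# Example disruption: spin a node's cores at 100% for 10 seconds,
# triggered at a random interval of 5-10 seconds.
disruptions = [
    { type = "cpu-spin", intervalSeconds = { min = 5, max = 10 }, durationSeconds = 10 }
]
```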
For example, a CPU-spin disruption specified with a trigger interval of 5-10 seconds and a duration of 10 seconds means that every 5-10 seconds at least one randomly chosen node's cores will be spinning at 100% for 10 seconds.
How to write a load test
------------------------
A load test is essentially defined by three pieces: a generator of random data structures, each specifying a unit of work a node should perform; a function that performs this work; and a function that predicts what state the node should end up in by doing so:
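The shape described above can be sketched as a parameterised data class. The following is a minimal illustration assuming simplified signatures; the real framework's ``LoadTest`` and ``Generator`` types carry more detail (node handles, batching, and so on):

```kotlin
import java.util.Random

// A minimal random-value generator; stands in for the framework's Generator type.
class Generator<out A>(val generate: (Random) -> A)

// Sketch of a load test: names follow the text, signatures are assumptions.
data class LoadTest<T, S>(
    val testName: String,
    // Produces the next batch of random work units.
    val generate: (parallelism: Int) -> Generator<List<T>>,
    // Updates the predicted state to reflect a work unit about to run.
    val interpret: (state: S, command: T) -> S,
    // Performs the work unit against the cluster, e.g. via RPC.
    val execute: (command: T) -> Unit,
    // Inspects the real nodes and checks them against the prediction,
    // throwing on mismatch; receives null on the initial gathering.
    val gatherRemoteState: (previous: S?) -> S
)
```

Splitting the test into pure ``interpret`` and effectful ``execute``/``gatherRemoteState`` parts is what lets the driver interleave work units and disruptions while still checking invariants.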
``LoadTest`` is parameterised over ``T``, the unit of work, and ``S``, the state type that aims to track remote node states. As an example, let's look at the Self Issue test. This test simply creates Cash Issues from nodes to themselves, then checks the vault to see whether the numbers add up:
The unit of work ``SelfIssueCommand`` simply holds an Issue and a handle to a node where the issue should be submitted. The ``generate`` method should provide a generator for these.
The state ``SelfIssueState`` then holds a map from node identities to a Long that describes the sum quantity of the generated issues (we fixed the currency to be USD).
The invariant we want to hold is then simply: the sum of submitted Issue quantities should equal the sum of the quantities in the vaults.
The ``interpret`` function should take a ``SelfIssueCommand`` and update ``SelfIssueState`` to reflect the change we're expecting in the remote nodes. In our case this will simply be adding the issued amount to the corresponding node's Long.
The ``execute`` function should perform the action on the cluster. In our case it will simply take the node handle and submit an RPC request for the Issue.
The ``gatherRemoteState`` function should check the actual remote nodes' states and see whether they conflict with our local predictions (and should throw if they do). This function deserves its own paragraph.
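Putting the pieces together, a simplified Self Issue-style test might look like the following sketch. Node handles are reduced to plain strings, RPC submission and vault queries are stubbed out as function parameters, and all names beyond those in the text are assumptions:

```kotlin
// Simplified sketch; real framework types (node handles, RPC proxies) are stubbed.
data class SelfIssueCommand(val node: String, val quantity: Long)

data class SelfIssueState(val vaultsSelfIssued: Map<String, Long>) {
    companion object { val empty = SelfIssueState(emptyMap()) }
}

// interpret: add the issued amount to the corresponding node's running total.
fun interpret(state: SelfIssueState, command: SelfIssueCommand): SelfIssueState {
    val previous = state.vaultsSelfIssued[command.node] ?: 0L
    return SelfIssueState(state.vaultsSelfIssued + (command.node to previous + command.quantity))
}

// execute: submit the Issue to the node; stubbed here in place of a real RPC call.
fun execute(command: SelfIssueCommand, submitRpc: (SelfIssueCommand) -> Unit) = submitRpc(command)

// gatherRemoteState: query each vault and compare against the prediction,
// throwing on mismatch; predicted is null on the initial gathering.
fun gatherRemoteState(predicted: SelfIssueState?, queryVault: (String) -> Long): SelfIssueState {
    val observed = predicted?.vaultsSelfIssued?.keys?.associateWith(queryVault) ?: emptyMap()
    if (predicted != null && observed != predicted.vaultsSelfIssued) {
        throw IllegalStateException(
            "Vault state $observed does not match prediction ${predicted.vaultsSelfIssued}"
        )
    }
    return SelfIssueState(observed)
}
```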
``gatherRemoteState`` gets as input handles to all the nodes, and the current predicted state, or null if this is the initial gathering.
The reason it gets the previous state boils down to allowing non-deterministic predictions about the nodes' remote states. Say some piece of work triggers an asynchronous notification of a node. We need to account both for the case when the node hasn't received the notification and for the case when it has. In these cases ``S`` should somehow represent a collection of possible states, and ``gatherRemoteState`` should "collapse" the collection based on the observations it makes. Of course we don't need this for the simple case of the Self Issue test.
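One way to represent "a collection of possible states" is literally as a set of candidate states that ``gatherRemoteState`` prunes against the observation. This is an illustrative sketch of that idea, not the framework's API:

```kotlin
// S is a set of candidate predicted states; an observation "collapses" it.
data class PossibleStates<A>(val candidates: Set<A>)

fun <A> collapse(predicted: PossibleStates<A>, observed: A): PossibleStates<A> {
    // Keep only the candidate consistent with what we actually observed;
    // if none matches, the prediction was wrong and we fail the test.
    if (observed !in predicted.candidates) {
        throw IllegalStateException("Observed state $observed matches none of the predictions")
    }
    return PossibleStates(setOf(observed))
}
```

For the asynchronous-notification example above, the prediction would hold both the "notified" and "not yet notified" states until the next gathering observes which one actually occurred.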