block_tester: limit batching in sequential test

So far, the decision whether to spawn a new job depended solely on
the amount of data already processed. When batching is used, this
could lead to spawning more jobs than necessary and, in turn, to
creating invalid requests in case the tested block session is not
large enough.

In addition to checking the amount of processed data, the test now
stores the number of the last block and checks whether the current
request is in range. This properly limits the total number of
requests.

Issue #3781.
Josef Söntgen 2020-05-24 15:14:15 +02:00 committed by Norman Feske
parent b5f0c07eb3
commit 103ae9df4a

@@ -29,6 +29,8 @@ struct Test::Sequential : Test_base
 	size_t const _size   = _node.attribute_value("size",   Number_of_bytes());
 	size_t const _length = _node.attribute_value("length", Number_of_bytes());
+	block_number_t _end = 0;
+
 	Block::Operation::Type const _op_type = _node.attribute_value("write", false)
 		? Block::Operation::Type::WRITE
 		: Block::Operation::Type::READ;
@@ -49,11 +51,12 @@ struct Test::Sequential : Test_base
 		_size_in_blocks   = _size   / _info.block_size;
 		_length_in_blocks = _length / _info.block_size;
+		_end = _start + _length_in_blocks;
 	}

 	void _spawn_job() override
 	{
-		if (_bytes >= _length)
+		if (_bytes >= _length || _start >= _end)
 			return;

 		_job_cnt++;