The values of `classified`, `bits_new`, and `cksum` were not always
correctly maintained.
1. In the past, `afl->queue_top->exec_cksum` was always assigned in
`add_to_queue()`; however, it became conditional since cd5764170595.
This doesn't affect correctness, because `calibrate_case()` will
calculate the checksum, but it means one calibration run is wasted
(see the sketch after this list).
2. Sometimes `classified` was set incorrectly.
For example, this code snippet:
```
new_bits = has_new_bits_unclassified(afl, afl->virgin_bits);
classified = 1;
```
should be changed to:
```
new_bits = has_new_bits_unclassified(afl, afl->virgin_bits);
if (new_bits) classified = 1;
```
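To make issue 1 concrete, here is a minimal standalone sketch of the
wasted run, assuming simplified, hypothetical types and names (this is
not AFL++'s actual struct or function layout):
```
#include <stdint.h>
#include <stdio.h>

/* Hypothetical, simplified stand-in for AFL++'s queue entry. */
typedef struct queue_entry {
  uint64_t exec_cksum;                /* 0 means "not computed yet" */
} queue_entry_t;

static int executions = 0;

/* Stand-in for one target run followed by hashing the coverage map. */
static uint64_t run_and_hash(void) {
  ++executions;
  return 0x9e3779b97f4a7c15ULL;       /* pretend hash of trace_bits */
}

/* Since cd5764170595 the assignment is conditional; have_cksum = 0
 * models the path where add_to_queue() skips it. */
static void add_to_queue(queue_entry_t *q, int have_cksum, uint64_t cksum) {
  q->exec_cksum = have_cksum ? cksum : 0;
}

static void calibrate_case(queue_entry_t *q) {
  if (!q->exec_cksum) {
    /* This execution exists only to recover the missing checksum. */
    q->exec_cksum = run_and_hash();
  }
  /* ... the actual calibration runs would follow here ... */
}

int main(void) {
  queue_entry_t q;
  add_to_queue(&q, /*have_cksum=*/0, 0);
  calibrate_case(&q);
  printf("calibration runs wasted: %d\n", executions);  /* prints 1 */
  return 0;
}
```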
This commit fixes the above issues and uses macros to make the code
easier to understand. This should prevent forgetting to set
`classified` in the future (like the bug fixed by 30c93d132166).
The macros also defer the calculations until the values are actually
needed. This can save CPU time if the code returns early. For example,
if a case times out on the first run but not on the second, the old
code still performed classify_counts(), which is not always needed.
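Here is a minimal sketch of that deferral pattern, assuming
hypothetical macro and helper names (the real macros in the commit may
be named differently):
```
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAP_SIZE 64

static uint8_t trace_bits[MAP_SIZE];

/* Toy stand-in for AFL++'s hit-count bucketing. */
static void classify_counts(uint8_t *mem) {
  for (int i = 0; i < MAP_SIZE; ++i)
    if (mem[i]) mem[i] = 1;
}

/* Toy stand-in for hashing the coverage map (FNV-1a). */
static uint64_t hash_trace(const uint8_t *mem) {
  uint64_t h = 1469598103934665603ULL;
  for (int i = 0; i < MAP_SIZE; ++i) { h ^= mem[i]; h *= 1099511628211ULL; }
  return h;
}

/* Classify at most once per execution; the flag is flipped in the one
 * place the work is done, so callers cannot forget to set it. */
#define ENSURE_CLASSIFIED(mem, flag) \
  do { if (!(flag)) { classify_counts(mem); (flag) = 1; } } while (0)

/* Compute the checksum at most once; 0 doubles as "not computed". */
#define LAZY_CKSUM(mem, cksum) \
  ((cksum) ? (cksum) : ((cksum) = hash_trace(mem)))

int main(void) {
  uint8_t classified = 0;
  uint64_t cksum = 0;
  memset(trace_bits, 3, sizeof trace_bits);     /* pretend execution */

  int timed_out = 0;
  /* A path that returns early (e.g. a timeout) pays for neither the
   * classification nor the checksum. */
  if (timed_out) return 0;

  ENSURE_CLASSIFIED(trace_bits, classified);
  printf("cksum = %016llx\n",
         (unsigned long long)LAZY_CKSUM(trace_bits, cksum));
  return 0;
}
```
Keeping the flag update inside the macro, next to the work itself, is
what should prevent a repeat of the 30c93d132166 class of bug.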
Assume we have one main node and N secondary nodes in a parallel
fuzzing campaign. Every time the main node finds a new case, the case
is synced to all secondary nodes. Later, when the main node syncs, it
has to run the file again to see whether it is interesting, because it
appears as a "new" case in each secondary node's queue.
In other words, for one new case, the main node has to run the redundant
test N times. This is wasteful and slows down the main node's progress.
The waste on the secondary nodes is acceptable because we can run more
secondary nodes to mitigate the inefficiency. OTOH, increasing the
number of secondary nodes slows down the main node even further.
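To make the cost concrete, here is an illustrative sketch, hypothetical
and heavily simplified: the file names and the run_target() stand-in
are assumptions, not the real sync code.
```
#include <stdio.h>

#define N_SECONDARIES 8   /* the "N" above, chosen arbitrarily */

static int executions = 0;

/* Stand-in for re-running the target on a synced queue file. */
static int run_target(const char *fname) {
  (void)fname;
  ++executions;
  return 0;               /* nothing new: the main node already has it */
}

int main(void) {
  char fname[64];
  /* The main node found one case and synced it out; on the next sync
   * pass every secondary queue presents it back as a "new" file. */
  for (int i = 0; i < N_SECONDARIES; ++i) {
    snprintf(fname, sizeof fname, "sync/secondary%02d/queue/id:000123", i);
    if (run_target(fname)) { /* saving the case would go here */ }
  }
  printf("redundant executions of one case: %d\n", executions);
  return 0;
}
```
Since these re-runs are usually uninteresting, deferring
classify_counts() and the checksum as described above trims exactly the
work that this redundancy multiplies.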