Added Coverage Estimation

Signed-off-by: jonas <jonas.eppard@de.bosch.com>
This commit is contained in:
jonas
2023-07-27 15:34:21 +02:00
parent 55d696fbae
commit b54d3ed414
13 changed files with 673 additions and 38 deletions

View File

@ -0,0 +1,39 @@
# Coverage Estimation in AFL++
This file describes the Coverage Estimation of AFL++. For general information about AFL++, see
[README.md](../README.md).
## Table of Contents
* [Introduction](#1-introduction)
* [Setup](#2-setup)
* [Status Screen extension](#3-status-screen-extension)
## 1 Introduction
The coverage estimation inside AFL++ is based on path coverage. It uses STADS (Security Testing As Discovery of Species) to apply species richness estimators to coverage estimation.
The estimated coverage is meant to help developers decide when to stop a fuzzing campaign.
Coverage can only be estimated over fuzzable/reachable paths.
Coverage estimation is not tested with multiple fuzzing instances (-M/-S options). It is also not tested with resuming a fuzz run (AFL_AUTORESUME, -i -).
## 2 Setup
To use coverage estimation you don't have to change your workflow, just set the following environment variables:
* Set `AFL_CODE_COVERAGE` to enable coverage estimation.
* Consider setting `AFL_N_FUZZ_SIZE` to something larger than (1 << 21) (the default) to mitigate (re-)hash collisions.
* Consider using `AFL_CRASH_ON_HASH_COLLISION` if a (slightly) incorrect coverage estimation is worse for you than an abort.
* If the coverage estimation should update more often, change `COVERAGE_INTERVAL` in [config.h](../config.h) (this requires rebuilding AFL++).
More information about these environment variables can be found in [env_variables.md](./env_variables.md).
## 3 Status Screen extension
The status screen will be extended with the following box:
```
+- code coverage information ------------------------+
| coverage : 57.12% - 63.21% |
| collision probability : 1.02% |
+----------------------------------------------------+
```
* coverage - This is the estimated path coverage. The first number is a lower bound estimate,
the second number an upper bound estimate. It is only possible to estimate over the fuzzable/reachable paths.
If the coverage climbs very high very fast, you are either fuzzing a simple target or don't have a good corpus.
* collision probability - This is an estimate of the probability of hash collisions. If this number gets high, you should increase `AFL_N_FUZZ_SIZE`. Hash collisions cause errors in the coverage estimation.
If `AFL_CRASH_ON_HASH_COLLISION` is set, afl-fuzz will abort on a detected hash collision.

View File

@ -314,6 +314,10 @@ mode.
The main fuzzer binary accepts several options that disable a couple of sanity
checks or alter some of the more exotic semantics of the tool:
- Coverage estimation only: `AFL_ABUNDANT_CUT_OFF` describes the cut-off for
  species richness estimators. Default is 10. The value should only be changed
  for research purposes.
- Setting `AFL_AUTORESUME` will resume a fuzz run (same as providing `-i -`)
  for an existing out folder, even if a different `-i` was provided. Without
  this setting, afl-fuzz will refuse execution for a long-fuzzed out dir.
@ -327,6 +331,11 @@ checks or alter some of the more exotic semantics of the tool:
  (`-i in`). This is an important feature to set when resuming a fuzzing
  session.
- `AFL_CODE_COVERAGE` will enable code coverage estimation. See also
  [coverage_estimation.md](./coverage_estimation.md). Related environment
  variables: `AFL_ABUNDANT_CUT_OFF`, `AFL_CRASH_ON_HASH_COLLISION` and
  `AFL_N_FUZZ_SIZE`.
- Setting `AFL_CRASH_EXITCODE` sets the exit code AFL++ treats as crash. For
  example, if `AFL_CRASH_EXITCODE='-1'` is set, each input resulting in a `-1`
  return code (i.e. `exit(-1)` got called), will be treated as if a crash had
@ -407,6 +416,11 @@ checks or alter some of the more exotic semantics of the tool:
  Others need not apply, unless they also want to disable the
  `/proc/sys/kernel/core_pattern` check.
- Coverage estimation only: `AFL_CRASH_ON_HASH_COLLISION` will abort on
  detected (re-)hash collisions. If this is not set, only a warning is
  displayed instead of aborting the fuzzing campaign. You can also increase
  `AFL_N_FUZZ_SIZE` to reduce the chance of a (re-)hash collision.
- If afl-fuzz encounters an incorrect fuzzing setup during a fuzzing session
  (not at startup), it will terminate. If you do not want this, then you can
  set `AFL_IGNORE_PROBLEMS`. If you additionally want to also ignore coverage
@ -450,6 +464,10 @@ checks or alter some of the more exotic semantics of the tool:
  there is a 1 in 201 chance, that one of the dictionary entries will not be
  used directly.
- Coverage estimation only: `AFL_N_FUZZ_SIZE` sets the number of path hashes
  that can be stored. Default is (1 << 21). Consider increasing this value to
  mitigate (re-)hash collisions. The upper bound is (1 << 64) - 1.
- Setting `AFL_NO_AFFINITY` disables attempts to bind to a specific CPU core
  on Linux systems. This slows things down, but lets you run more instances of
  afl-fuzz than would be prudent (if you really want to).

View File

@ -130,6 +130,9 @@
// Little helper to access the ptr to afl->##name_buf - for use in afl_realloc.
#define AFL_BUF_PARAM(name) ((void **)&afl->name##_buf)
#define LARGE_INDEX(array, index, size, item_size) \
  array[(u64)((index) / (size / item_size))][(index) % (size / item_size)]
#ifdef WORD_SIZE_64
  #define AFL_RAND_RETURN u64
#else
@ -540,7 +543,9 @@ typedef struct afl_state {
      expand_havoc,            /* perform expensive havoc after no find */
      cycle_schedules,         /* cycle power schedules? */
      old_seed_selection,      /* use vanilla afl seed selection */
      reinit_table,            /* reinit the queue weight table */
      coverage_estimation,     /* code coverage estimation mode? */
      crash_on_hash_collision; /* abort on detected hash collisions */
  u8 *virgin_bits,   /* Regions yet untouched by fuzzing */
      *virgin_tmout, /* Bits we haven't seen in tmouts */
@ -552,8 +557,28 @@ typedef struct afl_state {
  u8 *var_bytes; /* Bytes that appear to be variable */
  u64 n_fuzz_size;
  u32 **n_fuzz;
  double n_fuzz_fill;  /* Fill level of n_fuzz (0-1) */
  u32 *path_frequenzy; /* Frequencies of paths for coverage estimation */
  u8 abundant_cut_off; /* Cut-off value for estimators */
  u64 max_path_number, /* Number of times most viewed path(s) are viewed */
      max_path_count,  /* Count of paths which are most viewed */
      second_max_path_number, /* Times 2nd most viewed paths are viewed */
      second_max_path_count,  /* Count of paths which are 2nd most viewed */
      abundant_paths;         /* Number of abundant code paths */
  u32 coverage_counter,        /* Counter when to calculate coverage */
      num_detected_collisions; /* Number of detected collisions */
  double upper_coverage_estimate, /* Coverage estimation upper bound in % */
      lower_coverage_estimate;    /* Coverage estimation lower bound in % */
#if defined COVERAGE_ESTIMATION_LOGGING && COVERAGE_ESTIMATION_LOGGING
FILE *coverage_log_file; /* File to log coverage */
u64 total_paths, /* Total Paths saved for logging */
next_save_time; /* Time to save Paths again */
u32 **n_fuzz_logged;
#endif
  volatile u8 stop_soon, /* Ctrl-C pressed? */
      clear_screen;      /* Window resized? */
@ -1141,6 +1166,9 @@ void destroy_extras(afl_state_t *);
void load_stats_file(afl_state_t *);
void write_setup_file(afl_state_t *, u32, char **);
void write_stats_file(afl_state_t *, u32, double, double, double);
#if defined COVERAGE_ESTIMATION_LOGGING && COVERAGE_ESTIMATION_LOGGING
void write_coverage_file(afl_state_t *);
#endif
void maybe_update_plot_file(afl_state_t *, u32, double, double);
void write_queue_stats(afl_state_t *);
void show_stats(afl_state_t *);

View File

@ -298,6 +298,15 @@
#define PLOT_UPDATE_SEC 5
#define QUEUE_UPDATE_SEC 1800
/* Interval between coverage estimations, counted in UI updates - the UI
 * update rate is set by UI_TARGET_HZ */
#define COVERAGE_INTERVAL 10 /* Roughly every 2 seconds */
/* Write a file logging data for coverage estimation */
#define COVERAGE_ESTIMATION_LOGGING 1
/* Smoothing divisor for CPU load and exec speed stats (1 - no smoothing). */
#define AVG_SMOOTHING 16

View File

@ -16,6 +16,7 @@ static char *afl_environment_deprecated[] = {
static char *afl_environment_variables[] = {
"AFL_ABUNDANT_CUT_OFF",
    "AFL_ALIGNED_ALLOC",
    "AFL_ALLOW_TMP",
    "AFL_ANALYZE_HEX",
@ -30,11 +31,13 @@ static char *afl_environment_variables[] = {
    "AFL_CMIN_ALLOW_ANY",
    "AFL_CMIN_CRASHES_ONLY",
    "AFL_CMPLOG_ONLY_NEW",
    "AFL_CODE_COVERAGE",
    "AFL_CODE_END",
    "AFL_CODE_START",
    "AFL_COMPCOV_BINNAME",
    "AFL_COMPCOV_LEVEL",
    "AFL_CRASH_EXITCODE",
    "AFL_CRASH_ON_HASH_COLLISION",
    "AFL_CRASHING_SEEDS_AS_NEW_CRASH",
    "AFL_CUSTOM_MUTATOR_LIBRARY",
    "AFL_CUSTOM_MUTATOR_ONLY",
@ -169,6 +172,7 @@ static char *afl_environment_variables[] = {
    "AFL_LLVM_LTO_DONTWRITEID",
    "AFL_LLVM_LTO_SKIPINIT"
    "AFL_LLVM_LTO_STARTID",
    "AFL_N_FUZZ_SIZE",
    "AFL_NO_ARITH",
    "AFL_NO_AUTODICT",
    "AFL_NO_BUILTIN",

View File

@ -1,3 +1,4 @@
#include <features.h>
#ifndef __GLIBC__

View File

@ -474,7 +474,8 @@ save_if_interesting(afl_state_t *afl, void *mem, u32 len, u8 fault) {
  /* Generating a hash on every input is super expensive. Bad idea and should
     only be used for special schedules */
  if (unlikely((afl->schedule >= FAST && afl->schedule <= RARE) ||
               afl->coverage_estimation)) {
    classify_counts(&afl->fsrv);
    classified = 1;
@ -483,8 +484,91 @@ save_if_interesting(afl_state_t *afl, void *mem, u32 len, u8 fault) {
    cksum = hash64(afl->fsrv.trace_bits, afl->fsrv.map_size, HASH_CONST);
    /* Saturated increment */
    if (LARGE_INDEX(afl->n_fuzz, cksum % afl->n_fuzz_size, MAX_ALLOC,
                    sizeof(u32)) < 0xFFFFFFFF)
      ++LARGE_INDEX(afl->n_fuzz, cksum % afl->n_fuzz_size, MAX_ALLOC,
                    sizeof(u32));
if (afl->coverage_estimation) {
if (LARGE_INDEX(afl->n_fuzz, cksum % afl->n_fuzz_size, MAX_ALLOC,
sizeof(u32)) <= afl->abundant_cut_off + 1U) {
if ((LARGE_INDEX(afl->n_fuzz, cksum % afl->n_fuzz_size, MAX_ALLOC,
sizeof(u32)) != 1) &&
likely(afl->path_frequenzy[LARGE_INDEX(afl->n_fuzz,
cksum % afl->n_fuzz_size,
MAX_ALLOC, sizeof(u32)) -
2] > 0))
--afl->path_frequenzy[LARGE_INDEX(afl->n_fuzz,
cksum % afl->n_fuzz_size, MAX_ALLOC,
sizeof(u32)) -
2];
if (LARGE_INDEX(afl->n_fuzz, cksum % afl->n_fuzz_size, MAX_ALLOC,
sizeof(u32)) <= afl->abundant_cut_off) {
if (likely(afl->path_frequenzy[LARGE_INDEX(afl->n_fuzz,
cksum % afl->n_fuzz_size,
MAX_ALLOC, sizeof(u32)) -
1] < UINT32_MAX)) {
++afl->path_frequenzy[LARGE_INDEX(afl->n_fuzz,
cksum % afl->n_fuzz_size,
MAX_ALLOC, sizeof(u32)) -
1];
}
} else {
++afl->abundant_paths;
}
}
if (unlikely(afl->max_path_number == 0)) {
afl->max_path_number = 1;
afl->max_path_count = 1;
} else if (unlikely(LARGE_INDEX(afl->n_fuzz, cksum % afl->n_fuzz_size,
MAX_ALLOC, sizeof(u32)) >=
afl->second_max_path_number)) {
if (LARGE_INDEX(afl->n_fuzz, cksum % afl->n_fuzz_size, MAX_ALLOC,
sizeof(u32)) == afl->second_max_path_number)
++afl->second_max_path_count;
else if (LARGE_INDEX(afl->n_fuzz, cksum % afl->n_fuzz_size, MAX_ALLOC,
sizeof(u32)) == afl->max_path_number)
++afl->max_path_count;
else if (LARGE_INDEX(afl->n_fuzz, cksum % afl->n_fuzz_size, MAX_ALLOC,
sizeof(u32)) > afl->max_path_number) {
if (afl->max_path_count > 1) {
afl->second_max_path_count = afl->max_path_count - 1;
afl->second_max_path_number = afl->max_path_number;
}
afl->max_path_number = LARGE_INDEX(
afl->n_fuzz, cksum % afl->n_fuzz_size, MAX_ALLOC, sizeof(u32));
afl->max_path_count = 1;
} else /* second max < n_fuzz < max*/ {
afl->second_max_path_count = 1;
afl->second_max_path_number = LARGE_INDEX(
afl->n_fuzz, cksum % afl->n_fuzz_size, MAX_ALLOC, sizeof(u32));
}
}
}
  }
@ -593,12 +677,43 @@ save_if_interesting(afl_state_t *afl, void *mem, u32 len, u8 fault) {
  /* For AFLFast schedules we update the new queue entry */
  if (likely(cksum)) {
    afl->queue_top->n_fuzz_entry = cksum % afl->n_fuzz_size;
    if (afl->coverage_estimation) {
      if (unlikely(LARGE_INDEX(afl->n_fuzz, afl->queue_top->n_fuzz_entry,
                               MAX_ALLOC, sizeof(u32)) > 1)) {
        if (afl->crash_on_hash_collision)
          FATAL(
              "Hash collision on n_fuzz, increase AFL_N_FUZZ_SIZE or ignore "
              "by removing AFL_CRASH_ON_HASH_COLLISION");
        else
          WARNF("Hash collision on n_fuzz, increase AFL_N_FUZZ_SIZE! (%lu)",
                (unsigned long)++afl->num_detected_collisions);
        if (LARGE_INDEX(afl->n_fuzz, afl->queue_top->n_fuzz_entry, MAX_ALLOC,
                        sizeof(u32)) == 0) {
          if (likely(afl->path_frequenzy[0] < UINT32_MAX)) {
            ++afl->path_frequenzy[0];
          }
        }
      }
    }
    LARGE_INDEX(afl->n_fuzz, afl->queue_top->n_fuzz_entry, MAX_ALLOC,
                sizeof(u32)) = 1;
  }
  /* due to classify counts we have to recalculate the checksum */
  afl->queue_top->exec_cksum =
      hash64(afl->fsrv.trace_bits, afl->fsrv.map_size, HASH_CONST);
  /* Try to calibrate inline; this also calls update_bitmap_score() when
     successful. */
  res = calibrate_case(afl, afl->queue_top, mem, afl->queue_cycle - 1, 0);

View File

@ -1841,6 +1841,12 @@ static void handle_existing_out_dir(afl_state_t *afl) {
  if (delete_files(fn, CASE_PREFIX)) { goto dir_cleanup_failed; }
  ck_free(fn);
#if defined COVERAGE_ESTIMATION_LOGGING && COVERAGE_ESTIMATION_LOGGING
fn = alloc_printf("%s/path_data", afl->out_dir);
if (delete_files(fn, NULL)) { goto dir_cleanup_failed; }
ck_free(fn);
#endif
  /* All right, let's do <afl->out_dir>/crashes/id:* and
   * <afl->out_dir>/hangs/id:*. */
@ -1974,6 +1980,10 @@ static void handle_existing_out_dir(afl_state_t *afl) {
  if (unlink(fn) && errno != ENOENT) { goto dir_cleanup_failed; }
  ck_free(fn);
fn = alloc_printf("%s/coverage_estimation", afl->out_dir);
if (unlink(fn) && errno != ENOENT) { goto dir_cleanup_failed; }
ck_free(fn);
  fn = alloc_printf("%s/cmdline", afl->out_dir);
  if (unlink(fn) && errno != ENOENT) { goto dir_cleanup_failed; }
  ck_free(fn);
@ -2182,6 +2192,35 @@ void setup_dirs_fds(afl_state_t *afl) {
            "pending_total, pending_favs, map_size, saved_crashes, "
            "saved_hangs, max_depth, execs_per_sec, total_execs, edges_found\n");
#if defined COVERAGE_ESTIMATION_LOGGING && COVERAGE_ESTIMATION_LOGGING
tmp = alloc_printf("%s/path_data", afl->out_dir);
  if (mkdir(tmp, 0700)) {
    if (errno != EEXIST) { PFATAL("Unable to create '%s'", tmp); }
  }
ck_free(tmp);
tmp = alloc_printf("%s/coverage_estimation", afl->out_dir);
fd = open(tmp, O_WRONLY | O_CREAT | O_EXCL, DEFAULT_PERMISSION);
if (fd < 0) { PFATAL("Unable to create '%s'", tmp); }
ck_free(tmp);
afl->coverage_log_file = fdopen(fd, "w");
if (!afl->coverage_log_file) { PFATAL("fdopen() failed"); }
fprintf(afl->coverage_log_file,
"# relative_time, total_paths, abundant_paths, lower_estimate, "
"higher_estimate, max_path_number, max_path_count, "
"second_max_path_number, "
"second_max_path_count, path_frequenzies...\n");
fflush(afl->coverage_log_file);
#endif
  } else {
    int fd = open(tmp, O_WRONLY | O_CREAT, DEFAULT_PERMISSION);

View File

@ -415,7 +415,10 @@ u8 fuzz_one_original(afl_state_t *afl) {
        afl->queue_cur->perf_score, afl->queue_cur->weight,
        afl->queue_cur->favored, afl->queue_cur->was_fuzzed,
        afl->queue_cur->exec_us,
        likely(afl->n_fuzz)
            ? LARGE_INDEX(afl->n_fuzz, afl->queue_cur->n_fuzz_entry, MAX_ALLOC,
                          sizeof(u32))
            : 0,
        afl->queue_cur->bitmap_size, afl->queue_cur->is_ascii, time_tmp);
    fflush(stdout);

View File

@ -68,7 +68,8 @@ double compute_weight(afl_state_t *afl, struct queue_entry *q,
  if (likely(afl->schedule >= FAST && afl->schedule <= RARE)) {
    u32 hits =
        LARGE_INDEX(afl->n_fuzz, q->n_fuzz_entry, MAX_ALLOC, sizeof(u32));
    if (likely(hits)) { weight /= (log10(hits) + 1); }
  }
@ -704,7 +705,8 @@ void update_bitmap_score(afl_state_t *afl, struct queue_entry *q) {
  if (unlikely(afl->schedule >= FAST && afl->schedule < RARE))
    fuzz_p2 = 0;  // Skip the fuzz_p2 comparison
  else if (unlikely(afl->schedule == RARE))
    fuzz_p2 = next_pow2(
        LARGE_INDEX(afl->n_fuzz, q->n_fuzz_entry, MAX_ALLOC, sizeof(u32)));
  else
    fuzz_p2 = q->fuzz_level;
@ -730,8 +732,9 @@ void update_bitmap_score(afl_state_t *afl, struct queue_entry *q) {
      u64 top_rated_fav_factor;
      u64 top_rated_fuzz_p2;
      if (unlikely(afl->schedule >= FAST && afl->schedule <= RARE))
        top_rated_fuzz_p2 = next_pow2(
            LARGE_INDEX(afl->n_fuzz, afl->top_rated[i]->n_fuzz_entry,
                        MAX_ALLOC, sizeof(u32)));
      else
        top_rated_fuzz_p2 = afl->top_rated[i]->fuzz_level;
@ -1032,7 +1035,9 @@ u32 calculate_score(afl_state_t *afl, struct queue_entry *q) {
        if (likely(!afl->queue_buf[i]->disabled)) {
          fuzz_mu +=
              log2(LARGE_INDEX(afl->n_fuzz, afl->queue_buf[i]->n_fuzz_entry,
                               MAX_ALLOC, sizeof(u32)));
          n_items++;
        }
@ -1043,7 +1048,8 @@ u32 calculate_score(afl_state_t *afl, struct queue_entry *q) {
        fuzz_mu = fuzz_mu / n_items;
        if (log2(LARGE_INDEX(afl->n_fuzz, q->n_fuzz_entry, MAX_ALLOC,
                             sizeof(u32))) > fuzz_mu) {
          /* Never skip favourites */
          if (!q->favored) factor = 0;
@ -1058,7 +1064,8 @@ u32 calculate_score(afl_state_t *afl, struct queue_entry *q) {
      // Don't modify unfuzzed seeds
      if (!q->fuzz_level) break;
      switch ((u32)log2(
          LARGE_INDEX(afl->n_fuzz, q->n_fuzz_entry, MAX_ALLOC, sizeof(u32)))) {
        case 0 ... 1:
          factor = 4;
@ -1097,7 +1104,9 @@ u32 calculate_score(afl_state_t *afl, struct queue_entry *q) {
      // Don't modify perf_score for unfuzzed seeds
      if (!q->fuzz_level) break;
      factor = q->fuzz_level / (LARGE_INDEX(afl->n_fuzz, q->n_fuzz_entry,
                                            MAX_ALLOC, sizeof(u32)) +
                                1);
      break;
    case QUAD:
@ -1105,7 +1114,9 @@ u32 calculate_score(afl_state_t *afl, struct queue_entry *q) {
      if (!q->fuzz_level) break;
      factor =
          q->fuzz_level * q->fuzz_level /
          (LARGE_INDEX(afl->n_fuzz, q->n_fuzz_entry, MAX_ALLOC, sizeof(u32)) +
           1);
      break;
    case MMOPT:
@ -1130,7 +1141,9 @@ u32 calculate_score(afl_state_t *afl, struct queue_entry *q) {
    perf_score += (q->tc_ref * 10);
    // the more often fuzz result paths are equal to this queue entry,
    // reduce its value
    perf_score *=
        (1 - (double)((double)LARGE_INDEX(afl->n_fuzz, q->n_fuzz_entry,
                                          MAX_ALLOC, sizeof(u32)) /
                      (double)afl->fsrv.total_execs));
    break;

View File

@ -993,7 +993,6 @@ u8 trim_case(afl_state_t *afl, struct queue_entry *q, u8 *in_buf) {
  }
  /* Since this can be slow, update the screen every now and then. */
  if (!(trim_exec++ % afl->stats_update_freq)) { show_stats(afl); }
++afl->stage_cur; ++afl->stage_cur;

View File

@ -286,7 +286,6 @@ void write_stats_file(afl_state_t *afl, u32 t_bytes, double bitmap_cvg,
#ifndef __HAIKU__
  if (getrusage(RUSAGE_CHILDREN, &rus)) { rus.ru_maxrss = 0; }
#endif
  fprintf(
      f,
      "start_time : %llu\n"
@ -332,7 +331,8 @@ void write_stats_file(afl_state_t *afl, u32 t_bytes, double bitmap_cvg,
      "afl_version : " VERSION
      "\n"
      "target_mode : %s%s%s%s%s%s%s%s%s%s\n"
      "command_line : %s\n"
      "hash_collisions : %lu",
      (afl->start_time - afl->prev_run_time) / 1000, cur_time / 1000,
      (afl->prev_run_time + cur_time - afl->start_time) / 1000, (u32)getpid(),
      afl->queue_cycle ? (afl->queue_cycle - 1) : 0, afl->cycles_wo_finds,
@ -381,7 +381,7 @@ void write_stats_file(afl_state_t *afl, u32 t_bytes, double bitmap_cvg,
       afl->persistent_mode || afl->deferred_mode)
          ? ""
          : "default",
      afl->orig_cmdline, (unsigned long)afl->num_detected_collisions);
  /* ignore errors */
@ -448,6 +448,62 @@ void write_queue_stats(afl_state_t *afl) {
#endif
/* Write coverage file */
#if defined COVERAGE_ESTIMATION_LOGGING && COVERAGE_ESTIMATION_LOGGING
void write_coverage_file(afl_state_t *afl) {
char *tmp = alloc_printf("%s/path_data/time:%llu", afl->out_dir,
(unsigned long long)afl->next_save_time / 1000);
s32 fd = open(tmp, O_WRONLY | O_CREAT | O_EXCL, DEFAULT_PERMISSION);
if (unlikely(fd < 0)) { PFATAL("Unable to create '%s'", tmp); }
FILE *current_file = fdopen(fd, "w");
// Write file header
fprintf(current_file, "# path hash, number of times path is fuzzed\n");
for (u64 i = 0; i < afl->n_fuzz_size; i++) {
if (LARGE_INDEX(afl->n_fuzz, i, MAX_ALLOC, sizeof(u32)) !=
LARGE_INDEX(afl->n_fuzz_logged, i, MAX_ALLOC, sizeof(u32))) {
fprintf(
current_file, "%llu,%lu\n", (unsigned long long)i,
(unsigned long)LARGE_INDEX(afl->n_fuzz, i, MAX_ALLOC, sizeof(u32)));
LARGE_INDEX(afl->n_fuzz_logged, i, MAX_ALLOC, sizeof(u32)) =
LARGE_INDEX(afl->n_fuzz, i, MAX_ALLOC, sizeof(u32));
}
}
fflush(current_file);
fclose(current_file);
if (afl->next_save_time < 1000 * 60 * 15) {
// Save every 1 min
afl->next_save_time += 1000 * 60;
} else if (afl->next_save_time < 1000 * 60 * 60 * 6 /* 6h */) {
// Save every 15 min
afl->next_save_time += 1000 * 60 * 15;
} else if (afl->next_save_time < 1000 * 60 * 60 * 24 * 2 /* 2d */) {
// Save every 6h
afl->next_save_time += 1000 * 60 * 60 * 6;
} else {
// Save every 12h
afl->next_save_time += 1000 * 60 * 60 * 12;
}
return;
}
#endif
/* Update the plot file if there is a reason to. */
void maybe_update_plot_file(afl_state_t *afl, u32 t_bytes, double bitmap_cvg,
@ -483,10 +539,9 @@ void maybe_update_plot_file(afl_state_t *afl, u32 t_bytes, double bitmap_cvg,
  /* Fields in the file:
     relative_time, afl->cycles_done, cur_item, corpus_count,
     corpus_not_fuzzed, favored_not_fuzzed, saved_crashes, saved_hangs,
     max_depth, execs_per_sec, edges_found */
  fprintf(afl->fsrv.plot_file,
          "%llu, %llu, %u, %u, %u, %u, %0.02f%%, %llu, %llu, %u, %0.02f, %llu, "
          "%u\n",
@ -498,6 +553,37 @@ void maybe_update_plot_file(afl_state_t *afl, u32 t_bytes, double bitmap_cvg,
  fflush(afl->fsrv.plot_file);
#if defined COVERAGE_ESTIMATION_LOGGING && COVERAGE_ESTIMATION_LOGGING
if (afl->coverage_estimation) {
/* Update log file for coverage estimation */
    /* Fields in the file:
       relative_time, total_paths, abundant_paths, lower_estimate,
       higher_estimate, max_path_number, max_path_count,
       second_max_path_number, second_max_path_count, path_frequenzies... */
fprintf(afl->coverage_log_file,
"%llu, %llu, %llu, %0.02f%%, %0.02f%%, %llu, %llu, %llu, %llu",
((afl->prev_run_time + get_cur_time() - afl->start_time) / 1000),
afl->total_paths, afl->abundant_paths,
afl->lower_coverage_estimate * 100,
afl->upper_coverage_estimate * 100, afl->max_path_number,
afl->max_path_count, afl->second_max_path_number,
afl->second_max_path_count);
for (u8 i = 0; i < afl->abundant_cut_off; i++) {
fprintf(afl->coverage_log_file, ", %u", afl->path_frequenzy[i]);
}
fprintf(afl->coverage_log_file, "\n");
fflush(afl->coverage_log_file);
}
#endif
}
/* Check terminal dimensions after resize. */ /* Check terminal dimensions after resize. */
@ -511,8 +597,16 @@ static void check_term_size(afl_state_t *afl) {
  if (ioctl(1, TIOCGWINSZ, &ws)) { return; }
  if (ws.ws_row == 0 || ws.ws_col == 0) { return; }
  if (afl->coverage_estimation) {
    if (ws.ws_row < 26 || ws.ws_col < 79) { afl->term_too_small = 1; }
  } else {
    if (ws.ws_row < 24 || ws.ws_col < 79) { afl->term_too_small = 1; }
  }
}
/* A spiffy retro stats screen! This is called every afl->stats_update_freq
@ -520,6 +614,123 @@ static void check_term_size(afl_state_t *afl) {
void show_stats(afl_state_t *afl) {
if (afl->coverage_estimation) {
#if defined COVERAGE_ESTIMATION_LOGGING && COVERAGE_ESTIMATION_LOGGING
u64 cur_time = get_cur_time();
if (unlikely(cur_time - afl->start_time > afl->next_save_time)) {
write_coverage_file(afl);
}
#endif
afl->coverage_counter++;
if (afl->coverage_counter >= COVERAGE_INTERVAL &&
afl->max_path_number >= afl->abundant_cut_off) {
afl->coverage_counter = 0;
u64 n_rare = 0, s_rare = 0, sum_i = 0;
for (u8 i = 0; i < afl->abundant_cut_off; i++) {
s_rare += afl->path_frequenzy[i];
n_rare += afl->path_frequenzy[i] * (i + 1);
sum_i += afl->path_frequenzy[i] * i * (i + 1);
}
u64 s_total = s_rare + afl->abundant_paths;
#if defined COVERAGE_ESTIMATION_LOGGING && COVERAGE_ESTIMATION_LOGGING
afl->total_paths = s_total;
#endif
afl->n_fuzz_fill = (double)s_total / afl->n_fuzz_size;
if (likely(n_rare)) {
u64 n_abundant = afl->fsrv.total_execs - n_rare;
if (unlikely(n_abundant > afl->fsrv.total_execs)) /* Check underflow*/ {
          FATAL(
              "Total number of paths or executions is less than rare "
              "executions");
}
double c_rare = 1 - (double)afl->path_frequenzy[0] / n_rare;
if (likely(n_rare != 10)) {
double s_lower_estimate = 0;
if (c_rare == 0) /* all singleton */ {
s_lower_estimate =
(((double)afl->fsrv.total_execs - 1) / afl->fsrv.total_execs *
afl->path_frequenzy[0] * (afl->path_frequenzy[0] - 1) / 2.0);
} else {
double variation_rare =
(s_rare / c_rare) * ((double)sum_i / (n_rare * (n_rare - 10))) -
1;
if (variation_rare < 0) variation_rare = 0;
s_lower_estimate = afl->abundant_paths + s_rare / c_rare +
afl->path_frequenzy[0] / c_rare * variation_rare;
}
afl->upper_coverage_estimate =
(double)s_total / (s_lower_estimate + s_total);
double pi_zero =
(double)s_lower_estimate / (s_lower_estimate + s_total);
if (pi_zero < 0.5) {
afl->lower_coverage_estimate =
s_total / ((double)2 * s_total - afl->max_path_count);
} else {
double p_max_minus_one =
(double)(s_total - afl->max_path_count) / s_total,
p_max_minus_two = (double)(s_total - afl->max_path_count -
afl->second_max_path_count) /
s_total;
double pi_max_minus_one = pi_zero + (1 - pi_zero) * p_max_minus_one,
pi_max_minus_two = pi_zero + (1 - pi_zero) * p_max_minus_two;
double normalisation_factor = 0;
if (p_max_minus_one == p_max_minus_two) {
normalisation_factor = (1 - p_max_minus_two);
} else {
normalisation_factor =
(1 - p_max_minus_two) *
((p_max_minus_one - p_max_minus_two) /
(p_max_minus_one -
p_max_minus_two * pi_max_minus_two / pi_max_minus_one));
}
double estimated_paths =
s_total /
(1 - normalisation_factor / (1 - normalisation_factor) *
p_max_minus_two / (1 - p_max_minus_two));
afl->lower_coverage_estimate = (double)s_total / estimated_paths;
}
}
} else /*n_rare = 0*/ {
afl->lower_coverage_estimate = 1;
afl->upper_coverage_estimate = 1;
}
}
}
if (afl->pizza_is_served) {
show_stats_pizza(afl);
@@ -766,10 +977,20 @@ void show_stats_normal(afl_state_t *afl) {
if (unlikely(afl->term_too_small)) {
if (afl->coverage_estimation) {
SAYF(cBRI
"Your terminal is too small to display the UI.\n"
"Please resize terminal window to at least 79x26.\n" cRST);
} else {
SAYF(cBRI
"Your terminal is too small to display the UI.\n"
"Please resize terminal window to at least 79x24.\n" cRST);
}
return;
}
@@ -1325,6 +1546,42 @@ void show_stats_normal(afl_state_t *afl) {
}
if (afl->coverage_estimation) {
SAYF(SET_G1 "\n" bSTG bVR bH cCYA bSTOP
" code coverage information " bSTG bH20 bH2 bH2 bVL "\n");
if (afl->upper_coverage_estimate ||
afl->lower_coverage_estimate) /* If both are 0 they are not yet
calculated */
sprintf(tmp, "%6.2f%% - %6.2f%%", afl->lower_coverage_estimate * 100,
afl->upper_coverage_estimate * 100);
else
sprintf(tmp, "not yet calculated!");
SAYF(bV bSTOP " coverage : " cRST "%-27s" bSTG bV "\n", tmp);
sprintf(tmp, "%.2f%%", afl->n_fuzz_fill * 100);
SAYF(bV bSTOP " collision probability : ");
if (afl->n_fuzz_fill < 0.05) {
SAYF(cRST);
} else if (afl->n_fuzz_fill < 0.25) {
SAYF(bSTG);
} else if (afl->n_fuzz_fill < 0.5) {
SAYF(cYEL);
} else {
SAYF(cLRD);
}
SAYF("%-27s" bSTG bV, tmp);
}
/* Last line */
SAYF(SET_G1 "\n" bSTG bLB bH cCYA bSTOP " strategy:" cPIN
@@ -2175,6 +2432,19 @@ void show_stats_pizza(afl_state_t *afl) {
}
if (afl->coverage_estimation) {
SAYF(SET_G1 "\n" bSTG bVR bH cCYA bSTOP
" code coverage information " bSTG bH20 bH20 bH5 bH2 bVL "\n");
if (afl->upper_coverage_estimate ||
afl->lower_coverage_estimate) /* If both are 0 they are not yet
calculated */
sprintf(tmp, "%6.2f%% - %6.2f%%", afl->lower_coverage_estimate * 100,
afl->upper_coverage_estimate * 100);
else
sprintf(tmp, "oven not hot enough!");
SAYF(bV bSTOP " coverage : " cRST "%-63s" bSTG bV, tmp);
}
/* Last line */
SAYF(SET_G1 "\n" bSTG bLB bH30 bH20 bH2 bH20 bH2 bH bRB bSTOP cRST RESET_G1);


@@ -157,8 +157,6 @@ static void usage(u8 *argv0, int more_help) {
" -Q - use binary-only instrumentation (QEMU mode)\n"
" -U - use unicorn-based instrumentation (Unicorn mode)\n"
" -W - use qemu-based instrumentation with Wine (Wine mode)\n"
#endif
#if defined(__linux__)
" -X - use VM fuzzing (NYX mode - standalone mode)\n"
" -Y - use VM fuzzing (NYX mode - multiple instances mode)\n"
#endif
@@ -251,11 +249,13 @@
" (must contain abort_on_error=1 and symbolize=0)\n"
"MSAN_OPTIONS: custom settings for MSAN\n"
" (must contain exitcode="STRINGIFY(MSAN_ERROR)" and symbolize=0)\n"
"AFL_ABUNDANT_CUT_OFF: cut-off for code coverage estimators (default 10)\n"
"AFL_AUTORESUME: resume fuzzing if directory specified by -o already exists\n"
"AFL_BENCH_JUST_ONE: run the target just once\n"
"AFL_BENCH_UNTIL_CRASH: exit soon when the first crashing input has been found\n"
"AFL_CMPLOG_ONLY_NEW: do not run cmplog on initial testcases (good for resumes!)\n"
"AFL_CRASH_EXITCODE: optional child exit code to be interpreted as crash\n"
"AFL_CODE_COVERAGE: enable code coverage estimators\n"
"AFL_CUSTOM_MUTATOR_LIBRARY: lib with afl_custom_fuzz() to mutate inputs\n"
"AFL_CUSTOM_MUTATOR_ONLY: avoid AFL++'s internal mutators\n"
"AFL_CYCLE_SCHEDULES: after completing a cycle, switch to a different -p schedule\n"
@@ -1542,6 +1542,22 @@ int main(int argc, char **argv_orig, char **envp) {
}
ACTF("Getting to work...");
{
char *n_fuzz_size = get_afl_env("AFL_N_FUZZ_SIZE");
char *end = NULL;
if (n_fuzz_size == NULL ||
(afl->n_fuzz_size = strtoull(n_fuzz_size, &end, 0)) == 0) {
ACTF("Using default n_fuzz_size of 1 << 21");
afl->n_fuzz_size = (1 << 21);
}
if (get_afl_env("AFL_CRASH_ON_HASH_COLLISION"))
afl->crash_on_hash_collision = 1;
}
switch (afl->schedule) {
@@ -1583,7 +1599,24 @@ int main(int argc, char **argv_orig, char **envp) {
/* Dynamically allocate memory for AFLFast schedules */
if (afl->schedule >= FAST && afl->schedule <= RARE) {
afl->n_fuzz = ck_alloc((afl->n_fuzz_size * sizeof(u32) /
(MAX_ALLOC / sizeof(u32) * sizeof(u32)) +
1) *
sizeof(u32 *));
if (afl->n_fuzz_size * sizeof(u32) %
(MAX_ALLOC / sizeof(u32) * sizeof(u32)))
afl->n_fuzz[afl->n_fuzz_size * sizeof(u32) /
(MAX_ALLOC / sizeof(u32) * sizeof(u32))] =
ck_alloc(afl->n_fuzz_size * sizeof(u32) %
(MAX_ALLOC / sizeof(u32) * sizeof(u32)));
for (u32 i = 0; i < afl->n_fuzz_size * sizeof(u32) /
(MAX_ALLOC / sizeof(u32) * sizeof(u32));
i++) {
afl->n_fuzz[i] = ck_alloc(MAX_ALLOC);
}
}
@@ -1592,6 +1625,70 @@ int main(int argc, char **argv_orig, char **envp) {
if (get_afl_env("AFL_NO_ARITH")) { afl->no_arith = 1; }
if (get_afl_env("AFL_SHUFFLE_QUEUE")) { afl->shuffle_queue = 1; }
if (get_afl_env("AFL_EXPAND_HAVOC_NOW")) { afl->expand_havoc = 1; }
if (get_afl_env("AFL_CODE_COVERAGE")) {
afl->coverage_estimation = 1;
char *cut_off = get_afl_env("AFL_ABUNDANT_CUT_OFF");
if (cut_off == NULL || (afl->abundant_cut_off = atoi(cut_off)) <= 0) {
WARNF(
"AFL_CODE_COVERAGE is set but AFL_ABUNDANT_CUT_OFF is not valid, "
"using the default of 10");
afl->abundant_cut_off = 10;
}
afl->path_frequenzy = ck_alloc((afl->abundant_cut_off) * sizeof(u32));
if (afl->n_fuzz == NULL) {
afl->n_fuzz = ck_alloc((afl->n_fuzz_size * sizeof(u32) /
(MAX_ALLOC / sizeof(u32) * sizeof(u32)) +
1) *
sizeof(u32 *));
if (afl->n_fuzz_size * sizeof(u32) %
(MAX_ALLOC / sizeof(u32) * sizeof(u32)))
afl->n_fuzz[afl->n_fuzz_size * sizeof(u32) /
(MAX_ALLOC / sizeof(u32) * sizeof(u32))] =
ck_alloc(afl->n_fuzz_size * sizeof(u32) %
(MAX_ALLOC / sizeof(u32) * sizeof(u32)));
for (u32 i = 0; i < afl->n_fuzz_size * sizeof(u32) /
(MAX_ALLOC / sizeof(u32) * sizeof(u32));
i++) {
afl->n_fuzz[i] = ck_alloc(MAX_ALLOC);
}
}
}
#if defined COVERAGE_ESTIMATION_LOGGING && COVERAGE_ESTIMATION_LOGGING
afl->n_fuzz_logged = ck_alloc((afl->n_fuzz_size * sizeof(u32) /
(MAX_ALLOC / sizeof(u32) * sizeof(u32)) +
1) *
sizeof(u32 *));
if (afl->n_fuzz_size * sizeof(u32) % (MAX_ALLOC / sizeof(u32) * sizeof(u32)))
afl->n_fuzz_logged[afl->n_fuzz_size * sizeof(u32) /
(MAX_ALLOC / sizeof(u32) * sizeof(u32))] =
ck_alloc(afl->n_fuzz_size * sizeof(u32) %
(MAX_ALLOC / sizeof(u32) * sizeof(u32)));
for (u32 i = 0; i < afl->n_fuzz_size * sizeof(u32) /
(MAX_ALLOC / sizeof(u32) * sizeof(u32));
i++) {
afl->n_fuzz_logged[i] = ck_alloc(MAX_ALLOC);
}
#endif
if (get_afl_env("AFL_ABUNDANT_CUT_OFF") && !afl->coverage_estimation) {
FATAL("AFL_ABUNDANT_CUT_OFF needs AFL_CODE_COVERAGE set!");
}
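Taken together, the new knobs could be wired up like this before a run (the seed directory, output directory, and target binary are placeholders, so the `afl-fuzz` line is left commented as an illustration):

```shell
# enable the estimator and reduce (re-)hash collisions by growing the
# n_fuzz table from the default 1 << 21 entries to 1 << 24
export AFL_CODE_COVERAGE=1
export AFL_N_FUZZ_SIZE=$((1 << 24))
# abort on a detected hash collision instead of showing skewed numbers
export AFL_CRASH_ON_HASH_COLLISION=1
echo "n_fuzz entries: $AFL_N_FUZZ_SIZE"
# afl-fuzz -i seeds -o out -- ./target @@   # placeholder invocation
```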
if (afl->afl_env.afl_autoresume) {