Compare commits

...

8 Commits

Author SHA1 Message Date
29afb1fba5 fixes 2023-08-13 10:33:50 +02:00
982d424eaa Merge pull request #1830 from Jonas-Eppard/coverage_estimation
Adding Coverage Estimation using Species Richness Estimators
2023-08-10 12:05:09 +00:00
29738d8f9c Updated docs to reflect changes
Signed-off-by: jonas <jonas.eppard@de.bosch.com>
2023-08-10 13:24:55 +02:00
3e79578c26 Changed Display
Signed-off-by: jonas <jonas.eppard@de.bosch.com>
2023-08-10 13:18:12 +02:00
10b2d21c29 Fixed path prediction
Signed-off-by: jonas <jonas.eppard@de.bosch.com>
2023-08-10 13:17:12 +02:00
dfae68b8d7 Implemented requested changes
Signed-off-by: jonas <jonas.eppard@de.bosch.com>
2023-08-10 09:46:05 +02:00
34815d414a formatted
Signed-off-by: jonas <jonas.eppard@de.bosch.com>
2023-08-10 08:16:03 +02:00
b54d3ed414 Added Coverage Estimation
Signed-off-by: jonas <jonas.eppard@de.bosch.com>
2023-08-10 08:15:57 +02:00
13 changed files with 776 additions and 76 deletions

View File

@ -0,0 +1,42 @@
# Coverage estimation in AFL++
This file describes the coverage estimation of AFL++. For general information about AFL++, see
[README.md](../README.md).
## Table of Contents
* [Introduction](#1-introduction)
* [Setup](#2-setup)
* [Status Screen extension](#3-status-screen-extension)
## 1 Introduction
The coverage estimation inside AFL++ is based on path coverage. It uses STADS (Security Testing As Discovery of Species) to apply species richness estimators to coverage estimation.
The estimated coverage should help developers decide when to stop a fuzzing campaign.
Coverage can only be estimated over fuzzable/reachable paths.
Coverage estimation is not tested with multiple fuzzing instances (-M/-S options). It is also not tested when resuming a fuzz run (AFL_AUTORESUME, -i -).
## 2 Setup
To use coverage estimation you don't have to change your workflow, just add the following environment variables:
* Set `AFL_CODE_COVERAGE` during fuzzing (afl-fuzz) to enable coverage estimation.
* Consider setting `AFL_N_FUZZ_SIZE` to something bigger than (1 << 21) (the default) to mitigate (re-)hash collisions.
* Consider using `AFL_CRASH_ON_HASH_COLLISION` if a (slightly) incorrect coverage estimation is worse than an abort.
* If the coverage estimation should update more often, change `COVERAGE_INTERVAL` in [config.h](../config.h) (this requires rebuilding AFL++).
More information about these environment variables can be found in [env_variables.md](./env_variables.md).
## 3 Status Screen extension
The status screen will be extended with the following box:
```
+- code coverage information ------------------------+
| coverage : 57.12% - 63.21% |
| collision probability : 1.02% |
+----------------------------------------------------+
```
* coverage - This is the estimated path coverage. The first number is a lower bound estimate,
the second number an upper bound estimate. It is only possible to estimate over the fuzzable/reachable paths.
If the coverage gets very high very fast, you are either fuzzing a simple target or don't have a good corpus.
Unfortunately there is not enough research to prove the accuracy and precision of these estimators, but they hold asymptotically, and their accuracy and precision go down with more hash collisions (see collision probability).
* collision probability - This is an estimate of the probability of hash collisions. If this number gets higher than 25% you should consider increasing `AFL_N_FUZZ_SIZE`. The coverage estimation loses accuracy (overestimates) when hash collisions occur. You can also choose to ignore this, but then the coverage estimation will suffer.
If `AFL_CRASH_ON_HASH_COLLISION` is set, afl-fuzz will abort on a detected hash collision.
This box takes the place of the fuzzing strategy yields, since those are not used in non-deterministic mode. Therefore the coverage estimation is not compatible with deterministic mode (-D).

View File

@ -327,6 +327,20 @@ checks or alter some of the more exotic semantics of the tool:
(`-i in`). This is an important feature to set when resuming a fuzzing
session.
- `AFL_CODE_COVERAGE` will enable code coverage estimation. See also
  [coverage_estimation.md](./coverage_estimation.md). Related environment
  variables: `AFL_ABUNDANT_CUT_OFF`, `AFL_CRASH_ON_HASH_COLLISION` and
  `AFL_N_FUZZ_SIZE`.
- Coverage estimation only: `AFL_ABUNDANT_CUT_OFF` describes the cut-off for
  species richness estimators. Default is 10. The value should only be changed
  for research purposes.
- Coverage estimation only: `AFL_CRASH_ON_HASH_COLLISION` will abort on detected
  (re-)hash collisions. If this is not set, only a warning is displayed instead
  of aborting the fuzzing campaign. You could also increase `AFL_N_FUZZ_SIZE` to
  mitigate the chance of a (re-)hash collision.
- Setting `AFL_CRASH_EXITCODE` sets the exit code AFL++ treats as crash. For
  example, if `AFL_CRASH_EXITCODE='-1'` is set, each input resulting in a `-1`
  return code (i.e. `exit(-1)` got called), will be treated as if a crash had
@ -450,6 +464,10 @@ checks or alter some of the more exotic semantics of the tool:
  there is a 1 in 201 chance, that one of the dictionary entries will not be
  used directly.
- Coverage estimation only: `AFL_N_FUZZ_SIZE` sets the number of path hashes
  that can be stored. The default is (1 << 21). Consider increasing this value
  to mitigate (re-)hash collisions. The upper bound is (1 << 64) - 1.
- Setting `AFL_NO_AFFINITY` disables attempts to bind to a specific CPU core
  on Linux systems. This slows things down, but lets you run more instances of
  afl-fuzz than would be prudent (if you really want to).

View File

@ -130,6 +130,9 @@
// Little helper to access the ptr to afl->##name_buf - for use in afl_realloc.
#define AFL_BUF_PARAM(name) ((void **)&afl->name##_buf)

#define LARGE_INDEX(array, index, size, item_size) \
  array[(u64)((index) / (size / item_size))][(index) % (size / item_size)]

#ifdef WORD_SIZE_64
  #define AFL_RAND_RETURN u64
#else
@ -540,7 +543,9 @@ typedef struct afl_state {
expand_havoc,            /* perform expensive havoc after no find */
cycle_schedules,         /* cycle power schedules? */
old_seed_selection,      /* use vanilla afl seed selection */
reinit_table,            /* reinit the queue weight table */
coverage_estimation,     /* Code Coverage Estimation Mode? */
crash_on_hash_collision; /* Abort on detected hash collisions? */

u8 *virgin_bits,         /* Regions yet untouched by fuzzing */
    *virgin_tmout,       /* Bits we haven't seen in tmouts */
@ -552,8 +557,28 @@ typedef struct afl_state {
u8 *var_bytes; /* Bytes that appear to be variable */

u64 n_fuzz_size;
u32 **n_fuzz;
double n_fuzz_fill; /* Fill level of n_fuzz (0-1) */
u32 *path_frequenzy;     /* Frequencies of paths for coverage estimation */
u8 abundant_cut_off;     /* Cut-off value for estimators */
u64 max_path_number,     /* Number of times most viewed path(s) are viewed */
    max_path_count,      /* Count of paths which are most viewed */
    second_max_path_number, /* Number of times 2nd most viewed paths are viewed */
    second_max_path_count,  /* Count of paths which are 2nd most viewed */
    abundant_paths;      /* Number of abundant code paths */
u32 coverage_counter,    /* Counter when to calculate coverage */
    num_detected_collisions; /* Number of detected collisions */
double upper_coverage_estimate, /* Upper coverage estimate in % */
    lower_coverage_estimate;    /* Lower coverage estimate in % */
#if defined COVERAGE_ESTIMATION_LOGGING && COVERAGE_ESTIMATION_LOGGING
FILE *coverage_log_file; /* File to log coverage */
u64 total_paths, /* Total Paths saved for logging */
next_save_time; /* Time to save Paths again */
u32 **n_fuzz_logged;
#endif
volatile u8 stop_soon, /* Ctrl-C pressed? */
    clear_screen;      /* Window resized? */
@ -1138,9 +1163,12 @@ void destroy_extras(afl_state_t *);
/* Stats */
void load_stats_file(afl_state_t *);
void write_setup_file(afl_state_t *, u32, char **);
void write_stats_file(afl_state_t *, u32, double, double, double);
#if defined COVERAGE_ESTIMATION_LOGGING && COVERAGE_ESTIMATION_LOGGING
void write_coverage_file(afl_state_t *);
#endif
void maybe_update_plot_file(afl_state_t *, u32, double, double);
void write_queue_stats(afl_state_t *);
void show_stats(afl_state_t *);

View File

@ -298,6 +298,15 @@
#define PLOT_UPDATE_SEC 5
#define QUEUE_UPDATE_SEC 1800
/* Max Interval for Coverage Estimation in UI Updates - UI Updates is set by
* UI_TARGET_HZ */
#define COVERAGE_INTERVAL 10 /* Roughly every 2 seconds */
/* Write File to log Data for coverage estimation */
#define COVERAGE_ESTIMATION_LOGGING 1
/* Smoothing divisor for CPU load and exec speed stats (1 - no smoothing). */
#define AVG_SMOOTHING 16

View File

@ -16,6 +16,7 @@ static char *afl_environment_deprecated[] = {
static char *afl_environment_variables[] = {
"AFL_ABUNDANT_CUT_OFF",
"AFL_ALIGNED_ALLOC",
"AFL_ALLOW_TMP",
"AFL_ANALYZE_HEX",
@ -30,11 +31,13 @@ static char *afl_environment_variables[] = {
"AFL_CMIN_ALLOW_ANY",
"AFL_CMIN_CRASHES_ONLY",
"AFL_CMPLOG_ONLY_NEW",
"AFL_CODE_COVERAGE",
"AFL_CODE_END",
"AFL_CODE_START",
"AFL_COMPCOV_BINNAME",
"AFL_COMPCOV_LEVEL",
"AFL_CRASH_EXITCODE",
"AFL_CRASH_ON_HASH_COLLISION",
"AFL_CRASHING_SEEDS_AS_NEW_CRASH",
"AFL_CUSTOM_MUTATOR_LIBRARY",
"AFL_CUSTOM_MUTATOR_ONLY",
@ -169,6 +172,7 @@ static char *afl_environment_variables[] = {
"AFL_LLVM_LTO_DONTWRITEID",
"AFL_LLVM_LTO_SKIPINIT"
"AFL_LLVM_LTO_STARTID",
"AFL_N_FUZZ_SIZE",
"AFL_NO_ARITH",
"AFL_NO_AUTODICT",
"AFL_NO_BUILTIN",

View File

@ -1,3 +1,4 @@
#include <features.h>
#ifndef __GLIBC__

View File

@ -474,7 +474,8 @@ save_if_interesting(afl_state_t *afl, void *mem, u32 len, u8 fault) {
/* Generating a hash on every input is super expensive. Bad idea and should
   only be used for special schedules */
if (likely((afl->schedule >= FAST && afl->schedule <= RARE)) ||
    unlikely(afl->coverage_estimation)) {

  classify_counts(&afl->fsrv);
  classified = 1;
@ -483,8 +484,91 @@ save_if_interesting(afl_state_t *afl, void *mem, u32 len, u8 fault) {
cksum = hash64(afl->fsrv.trace_bits, afl->fsrv.map_size, HASH_CONST);

/* Saturated increment */
if (LARGE_INDEX(afl->n_fuzz, cksum % afl->n_fuzz_size, MAX_ALLOC,
                sizeof(u32)) < 0xFFFFFFFF)
  ++LARGE_INDEX(afl->n_fuzz, cksum % afl->n_fuzz_size, MAX_ALLOC,
                sizeof(u32));
if (unlikely(afl->coverage_estimation)) {
if (LARGE_INDEX(afl->n_fuzz, cksum % afl->n_fuzz_size, MAX_ALLOC,
sizeof(u32)) <= afl->abundant_cut_off + 1U) {
if ((LARGE_INDEX(afl->n_fuzz, cksum % afl->n_fuzz_size, MAX_ALLOC,
sizeof(u32)) != 1) &&
likely(afl->path_frequenzy[LARGE_INDEX(afl->n_fuzz,
cksum % afl->n_fuzz_size,
MAX_ALLOC, sizeof(u32)) -
2] > 0))
--afl->path_frequenzy[LARGE_INDEX(afl->n_fuzz,
cksum % afl->n_fuzz_size, MAX_ALLOC,
sizeof(u32)) -
2];
if (LARGE_INDEX(afl->n_fuzz, cksum % afl->n_fuzz_size, MAX_ALLOC,
sizeof(u32)) <= afl->abundant_cut_off) {
if (likely(afl->path_frequenzy[LARGE_INDEX(afl->n_fuzz,
cksum % afl->n_fuzz_size,
MAX_ALLOC, sizeof(u32)) -
1] < UINT32_MAX)) {
++afl->path_frequenzy[LARGE_INDEX(afl->n_fuzz,
cksum % afl->n_fuzz_size,
MAX_ALLOC, sizeof(u32)) -
1];
}
} else {
++afl->abundant_paths;
}
}
if (unlikely(afl->max_path_number == 0)) {
afl->max_path_number = 1;
afl->max_path_count = 1;
} else if (unlikely(LARGE_INDEX(afl->n_fuzz, cksum % afl->n_fuzz_size,
MAX_ALLOC, sizeof(u32)) >=
afl->second_max_path_number)) {
if (LARGE_INDEX(afl->n_fuzz, cksum % afl->n_fuzz_size, MAX_ALLOC,
sizeof(u32)) == afl->second_max_path_number)
++afl->second_max_path_count;
else if (LARGE_INDEX(afl->n_fuzz, cksum % afl->n_fuzz_size, MAX_ALLOC,
sizeof(u32)) == afl->max_path_number)
++afl->max_path_count;
else if (LARGE_INDEX(afl->n_fuzz, cksum % afl->n_fuzz_size, MAX_ALLOC,
sizeof(u32)) > afl->max_path_number) {
if (afl->max_path_count > 1) {
afl->second_max_path_count = afl->max_path_count - 1;
afl->second_max_path_number = afl->max_path_number;
}
afl->max_path_number = LARGE_INDEX(
afl->n_fuzz, cksum % afl->n_fuzz_size, MAX_ALLOC, sizeof(u32));
afl->max_path_count = 1;
} else /* second max < n_fuzz < max*/ {
afl->second_max_path_count = 1;
afl->second_max_path_number = LARGE_INDEX(
afl->n_fuzz, cksum % afl->n_fuzz_size, MAX_ALLOC, sizeof(u32));
}
}
}
}
@ -594,11 +678,43 @@ save_if_interesting(afl_state_t *afl, void *mem, u32 len, u8 fault) {
/* For AFLFast schedules we update the new queue entry */
if (likely(cksum)) {

  afl->queue_top->n_fuzz_entry = cksum % afl->n_fuzz_size;
  if (unlikely(afl->coverage_estimation)) {
if (unlikely(LARGE_INDEX(afl->n_fuzz, afl->queue_top->n_fuzz_entry,
MAX_ALLOC, sizeof(u32)) > 1)) {
if (afl->crash_on_hash_collision)
  FATAL(
      "Hash collision on n_fuzz; increase AFL_N_FUZZ_SIZE or ignore "
      "by removing AFL_CRASH_ON_HASH_COLLISION");
else
  WARNF("Hash collision on n_fuzz; increase AFL_N_FUZZ_SIZE! (%lu)",
        (unsigned long)++afl->num_detected_collisions);
if (LARGE_INDEX(afl->n_fuzz, afl->queue_top->n_fuzz_entry, MAX_ALLOC,
sizeof(u32)) == 0) {
if (likely(afl->path_frequenzy[0] < UINT32_MAX)) {
++afl->path_frequenzy[0];
}
}
}
}
LARGE_INDEX(afl->n_fuzz, afl->queue_top->n_fuzz_entry, MAX_ALLOC,
sizeof(u32)) = 1;
}
/* due to classify counts we have to recalculate the checksum */
afl->queue_top->exec_cksum =
hash64(afl->fsrv.trace_bits, afl->fsrv.map_size, HASH_CONST);
/* Try to calibrate inline; this also calls update_bitmap_score() when
   successful. */
res = calibrate_case(afl, afl->queue_top, mem, afl->queue_cycle - 1, 0);

View File

@ -31,6 +31,37 @@
#ifdef HAVE_AFFINITY
/* A helper function for handle_existing_out_dir(), deleting all prefixed
files in a directory. */
static u8 delete_files(u8 *path, u8 *prefix) {
DIR *d;
struct dirent *d_ent;
d = opendir(path);
if (!d) { return 0; }
while ((d_ent = readdir(d))) {
if (d_ent->d_name[0] != '.' &&
(!prefix || !strncmp(d_ent->d_name, prefix, strlen(prefix)))) {
u8 *fname = alloc_printf("%s/%s", path, d_ent->d_name);
if (unlink(fname)) { PFATAL("Unable to delete '%s'", fname); }
ck_free(fname);
}
}
closedir(d);
return !!rmdir(path);
}
/* bind process to a specific cpu. Returns 0 on failure. */
static u8 bind_cpu(afl_state_t *afl, s32 cpuid) {
@ -1471,7 +1502,17 @@ void pivot_inputs(afl_state_t *afl) {
}

if (afl->in_place_resume) {
nuke_resume_dir(afl);
} else {
u8 *fn = alloc_printf("%s/path_data", afl->out_dir);
(void)delete_files(fn, NULL);
ck_free(fn);
}
}
@ -1560,37 +1601,6 @@ void find_timeout(afl_state_t *afl) {
}
/* A helper function for handle_existing_out_dir(), deleting all prefixed
files in a directory. */
static u8 delete_files(u8 *path, u8 *prefix) {
DIR *d;
struct dirent *d_ent;
d = opendir(path);
if (!d) { return 0; }
while ((d_ent = readdir(d))) {
if (d_ent->d_name[0] != '.' &&
(!prefix || !strncmp(d_ent->d_name, prefix, strlen(prefix)))) {
u8 *fname = alloc_printf("%s/%s", path, d_ent->d_name);
if (unlink(fname)) { PFATAL("Unable to delete '%s'", fname); }
ck_free(fname);
}
}
closedir(d);
return !!rmdir(path);
}
/* Get the number of runnable processes, with some simple smoothing. */
double get_runnable_processes(void) {
@ -1678,6 +1688,10 @@ void nuke_resume_dir(afl_state_t *afl) {
if (delete_files(fn, CASE_PREFIX)) { goto dir_cleanup_failed; }
ck_free(fn);
fn = alloc_printf("%s/path_data", afl->out_dir);
(void)delete_files(fn, NULL);
ck_free(fn);
return;

dir_cleanup_failed:
@ -1841,6 +1855,17 @@ static void handle_existing_out_dir(afl_state_t *afl) {
if (delete_files(fn, CASE_PREFIX)) { goto dir_cleanup_failed; }
ck_free(fn);
#if defined COVERAGE_ESTIMATION_LOGGING && COVERAGE_ESTIMATION_LOGGING
if (unlikely(afl->coverage_estimation)) {
fn = alloc_printf("%s/path_data", afl->out_dir);
(void)delete_files(fn, NULL);
ck_free(fn);
}
#endif
/* All right, let's do <afl->out_dir>/crashes/id:* and
 * <afl->out_dir>/hangs/id:*. */
@ -1974,6 +1999,14 @@ static void handle_existing_out_dir(afl_state_t *afl) {
if (unlink(fn) && errno != ENOENT) { goto dir_cleanup_failed; }
ck_free(fn);
if (unlikely(afl->coverage_estimation)) {
fn = alloc_printf("%s/coverage_estimation", afl->out_dir);
if (unlink(fn) && errno != ENOENT) { goto dir_cleanup_failed; }
ck_free(fn);
}
fn = alloc_printf("%s/cmdline", afl->out_dir);
if (unlink(fn) && errno != ENOENT) { goto dir_cleanup_failed; }
ck_free(fn);
@ -2182,6 +2215,38 @@ void setup_dirs_fds(afl_state_t *afl) {
"pending_total, pending_favs, map_size, saved_crashes, "
"saved_hangs, max_depth, execs_per_sec, total_execs, edges_found\n");
#if defined COVERAGE_ESTIMATION_LOGGING && COVERAGE_ESTIMATION_LOGGING
if (unlikely(afl->coverage_estimation)) {
tmp = alloc_printf("%s/path_data", afl->out_dir);
if (mkdir(tmp, 0700)) {

  if (errno != EEXIST) { PFATAL("Unable to create '%s'", tmp); }

}
ck_free(tmp);
tmp = alloc_printf("%s/coverage_estimation", afl->out_dir);
fd = open(tmp, O_WRONLY | O_CREAT | O_EXCL, DEFAULT_PERMISSION);
if (fd < 0) { PFATAL("Unable to create '%s'", tmp); }
ck_free(tmp);
afl->coverage_log_file = fdopen(fd, "w");
if (!afl->coverage_log_file) { PFATAL("fdopen() failed"); }
fprintf(afl->coverage_log_file,
"# relative_time, total_paths, abundant_paths, lower_estimate, "
"higher_estimate, max_path_number, max_path_count, "
"second_max_path_number, "
"second_max_path_count, path_frequenzies...\n");
fflush(afl->coverage_log_file);
}
#endif
} else {

  int fd = open(tmp, O_WRONLY | O_CREAT, DEFAULT_PERMISSION);

View File

@ -415,7 +415,10 @@ u8 fuzz_one_original(afl_state_t *afl) {
afl->queue_cur->perf_score, afl->queue_cur->weight,
afl->queue_cur->favored, afl->queue_cur->was_fuzzed,
afl->queue_cur->exec_us,
likely(afl->n_fuzz)
    ? LARGE_INDEX(afl->n_fuzz, afl->queue_cur->n_fuzz_entry, MAX_ALLOC,
                  sizeof(u32))
    : 0,
afl->queue_cur->bitmap_size, afl->queue_cur->is_ascii, time_tmp);
fflush(stdout);

View File

@ -68,7 +68,8 @@ double compute_weight(afl_state_t *afl, struct queue_entry *q,
if (likely(afl->schedule >= FAST && afl->schedule <= RARE)) {

  u32 hits =
      LARGE_INDEX(afl->n_fuzz, q->n_fuzz_entry, MAX_ALLOC, sizeof(u32));
  if (likely(hits)) { weight /= (log10(hits) + 1); }

}
@ -704,7 +705,8 @@ void update_bitmap_score(afl_state_t *afl, struct queue_entry *q) {
if (unlikely(afl->schedule >= FAST && afl->schedule < RARE))
  fuzz_p2 = 0;  // Skip the fuzz_p2 comparison
else if (unlikely(afl->schedule == RARE))
  fuzz_p2 = next_pow2(
      LARGE_INDEX(afl->n_fuzz, q->n_fuzz_entry, MAX_ALLOC, sizeof(u32)));
else
  fuzz_p2 = q->fuzz_level;
@ -730,8 +732,9 @@ void update_bitmap_score(afl_state_t *afl, struct queue_entry *q) {
u64 top_rated_fav_factor;
u64 top_rated_fuzz_p2;
if (unlikely(afl->schedule >= FAST && afl->schedule <= RARE))
  top_rated_fuzz_p2 = next_pow2(
      LARGE_INDEX(afl->n_fuzz, afl->top_rated[i]->n_fuzz_entry,
                  MAX_ALLOC, sizeof(u32)));
else
  top_rated_fuzz_p2 = afl->top_rated[i]->fuzz_level;
@ -1032,7 +1035,9 @@ u32 calculate_score(afl_state_t *afl, struct queue_entry *q) {
if (likely(!afl->queue_buf[i]->disabled)) {

  fuzz_mu +=
      log2(LARGE_INDEX(afl->n_fuzz, afl->queue_buf[i]->n_fuzz_entry,
                       MAX_ALLOC, sizeof(u32)));
  n_items++;

}
@ -1043,7 +1048,8 @@ u32 calculate_score(afl_state_t *afl, struct queue_entry *q) {
fuzz_mu = fuzz_mu / n_items;

if (log2(LARGE_INDEX(afl->n_fuzz, q->n_fuzz_entry, MAX_ALLOC,
                     sizeof(u32))) > fuzz_mu) {

  /* Never skip favourites */
  if (!q->favored) factor = 0;
@ -1058,7 +1064,8 @@ u32 calculate_score(afl_state_t *afl, struct queue_entry *q) {
// Don't modify unfuzzed seeds
if (!q->fuzz_level) break;

switch ((u32)log2(
    LARGE_INDEX(afl->n_fuzz, q->n_fuzz_entry, MAX_ALLOC, sizeof(u32)))) {

  case 0 ... 1:
    factor = 4;
@ -1097,7 +1104,9 @@ u32 calculate_score(afl_state_t *afl, struct queue_entry *q) {
// Don't modify perf_score for unfuzzed seeds
if (!q->fuzz_level) break;

factor = q->fuzz_level / (LARGE_INDEX(afl->n_fuzz, q->n_fuzz_entry,
                                      MAX_ALLOC, sizeof(u32)) +
                          1);
break;

case QUAD:
@ -1105,7 +1114,9 @@ u32 calculate_score(afl_state_t *afl, struct queue_entry *q) {
if (!q->fuzz_level) break;

factor =
    q->fuzz_level * q->fuzz_level /
    (LARGE_INDEX(afl->n_fuzz, q->n_fuzz_entry, MAX_ALLOC, sizeof(u32)) +
     1);
break;

case MMOPT:
@ -1130,8 +1141,10 @@ u32 calculate_score(afl_state_t *afl, struct queue_entry *q) {
perf_score += (q->tc_ref * 10);
// the more often fuzz result paths are equal to this queue entry,
// reduce its value
perf_score *=
    (1 - (double)((double)LARGE_INDEX(afl->n_fuzz, q->n_fuzz_entry,
                                      MAX_ALLOC, sizeof(u32)) /
                  (double)afl->fsrv.total_execs));
break;

View File

@ -993,7 +993,6 @@ u8 trim_case(afl_state_t *afl, struct queue_entry *q, u8 *in_buf) {
}

/* Since this can be slow, update the screen every now and then. */
if (!(trim_exec++ % afl->stats_update_freq)) { show_stats(afl); }
++afl->stage_cur;

View File

@ -286,7 +286,6 @@ void write_stats_file(afl_state_t *afl, u32 t_bytes, double bitmap_cvg,
#ifndef __HAIKU__
if (getrusage(RUSAGE_CHILDREN, &rus)) { rus.ru_maxrss = 0; }
#endif

fprintf(
    f,
    "start_time : %llu\n"
@ -328,6 +327,7 @@ void write_stats_file(afl_state_t *afl, u32 t_bytes, double bitmap_cvg,
"testcache_size : %llu\n"
"testcache_count : %u\n"
"testcache_evict : %u\n"
"hash_collisions : %lu\n"
"afl_banner : %s\n"
"afl_version : " VERSION
"\n"
@ -368,7 +368,8 @@ void write_stats_file(afl_state_t *afl, u32 t_bytes, double bitmap_cvg,
#endif
t_bytes, afl->fsrv.real_map_size, afl->var_byte_count, afl->expand_havoc,
afl->a_extras_cnt, afl->q_testcase_cache_size,
afl->q_testcase_cache_count, afl->q_testcase_evictions,
(unsigned long)afl->num_detected_collisions, afl->use_banner,
afl->unicorn_mode ? "unicorn" : "", afl->fsrv.qemu_mode ? "qemu " : "",
afl->fsrv.cs_mode ? "coresight" : "",
afl->non_instrumented_mode ? " non_instrumented " : "",
@ -448,6 +449,62 @@ void write_queue_stats(afl_state_t *afl) {
#endif
/* Write coverage file */
#if defined COVERAGE_ESTIMATION_LOGGING && COVERAGE_ESTIMATION_LOGGING
void write_coverage_file(afl_state_t *afl) {
char *tmp = alloc_printf("%s/path_data/time:%llu", afl->out_dir,
(unsigned long long)afl->next_save_time / 1000);
s32 fd = open(tmp, O_WRONLY | O_CREAT | O_EXCL, DEFAULT_PERMISSION);
if (unlikely(fd < 0)) { PFATAL("Unable to create '%s'", tmp); }
FILE *current_file = fdopen(fd, "w");
// Write file header
fprintf(current_file, "# path hash, number of times path is fuzzed\n");
for (u64 i = 0; i < afl->n_fuzz_size; i++) {
if (LARGE_INDEX(afl->n_fuzz, i, MAX_ALLOC, sizeof(u32)) !=
LARGE_INDEX(afl->n_fuzz_logged, i, MAX_ALLOC, sizeof(u32))) {
fprintf(
current_file, "%llu,%lu\n", (unsigned long long)i,
(unsigned long)LARGE_INDEX(afl->n_fuzz, i, MAX_ALLOC, sizeof(u32)));
LARGE_INDEX(afl->n_fuzz_logged, i, MAX_ALLOC, sizeof(u32)) =
LARGE_INDEX(afl->n_fuzz, i, MAX_ALLOC, sizeof(u32));
}
}
fflush(current_file);
fclose(current_file);
if (afl->next_save_time < 1000 * 60 * 15) {
// Save every 1 min
afl->next_save_time += 1000 * 60;
} else if (afl->next_save_time < 1000 * 60 * 60 * 6 /* 6h */) {
// Save every 15 min
afl->next_save_time += 1000 * 60 * 15;
} else if (afl->next_save_time < 1000 * 60 * 60 * 24 * 2 /* 2d */) {
// Save every 6h
afl->next_save_time += 1000 * 60 * 60 * 6;
} else {
// Save every 12h
afl->next_save_time += 1000 * 60 * 60 * 12;
}
return;
}
#endif
/* Update the plot file if there is a reason to. */
void maybe_update_plot_file(afl_state_t *afl, u32 t_bytes, double bitmap_cvg,
@ -483,10 +540,9 @@ void maybe_update_plot_file(afl_state_t *afl, u32 t_bytes, double bitmap_cvg,
/* Fields in the file:
   relative_time, afl->cycles_done, cur_item, corpus_count,
   corpus_not_fuzzed, favored_not_fuzzed, saved_crashes, saved_hangs,
   max_depth, execs_per_sec, edges_found */

fprintf(afl->fsrv.plot_file,
        "%llu, %llu, %u, %u, %u, %u, %0.02f%%, %llu, %llu, %u, %0.02f, %llu, "
        "%u\n",
@ -498,6 +554,37 @@ void maybe_update_plot_file(afl_state_t *afl, u32 t_bytes, double bitmap_cvg,
fflush(afl->fsrv.plot_file);
#if defined COVERAGE_ESTIMATION_LOGGING && COVERAGE_ESTIMATION_LOGGING
if (unlikely(afl->coverage_estimation)) {
/* Update log file for coverage estimation */
/*Fields in the file:
relative_time, total_paths, abundant_paths, lower_estimate,
higher_estimate, max_path_number, max_path_count, second_max_path_number
second_max_path_count, path_frequenzies... */
fprintf(afl->coverage_log_file,
"%llu, %llu, %llu, %0.02f%%, %0.02f%%, %llu, %llu, %llu, %llu",
((afl->prev_run_time + get_cur_time() - afl->start_time) / 1000),
afl->total_paths, afl->abundant_paths,
afl->lower_coverage_estimate * 100,
afl->upper_coverage_estimate * 100, afl->max_path_number,
afl->max_path_count, afl->second_max_path_number,
afl->second_max_path_count);
for (u8 i = 0; i < afl->abundant_cut_off; i++) {
fprintf(afl->coverage_log_file, ", %u", afl->path_frequenzy[i]);
}
fprintf(afl->coverage_log_file, "\n");
fflush(afl->coverage_log_file);
}
#endif
}

/* Check terminal dimensions after resize. */
@ -511,6 +598,7 @@ static void check_term_size(afl_state_t *afl) {
if (ioctl(1, TIOCGWINSZ, &ws)) { return; }
if (ws.ws_row == 0 || ws.ws_col == 0) { return; }
if (ws.ws_row < 24 || ws.ws_col < 79) { afl->term_too_small = 1; }

}
@ -520,6 +608,123 @@ static void check_term_size(afl_state_t *afl) {
void show_stats(afl_state_t *afl) {
if (unlikely(afl->coverage_estimation)) {
#if defined COVERAGE_ESTIMATION_LOGGING && COVERAGE_ESTIMATION_LOGGING
u64 cur_time = get_cur_time();
if (unlikely(cur_time - afl->start_time > afl->next_save_time)) {
write_coverage_file(afl);
}
#endif
afl->coverage_counter++;
if (afl->coverage_counter >= COVERAGE_INTERVAL &&
afl->max_path_number >= afl->abundant_cut_off) {
afl->coverage_counter = 0;
u64 n_rare = 0, s_rare = 0, sum_i = 0;
for (u8 i = 0; i < afl->abundant_cut_off; i++) {
s_rare += afl->path_frequenzy[i];
n_rare += afl->path_frequenzy[i] * (u64)(i + 1);
sum_i += afl->path_frequenzy[i] * (u64)i * (i + 1);
}
u64 s_total = s_rare + afl->abundant_paths;
#if defined COVERAGE_ESTIMATION_LOGGING && COVERAGE_ESTIMATION_LOGGING
afl->total_paths = s_total;
#endif
afl->n_fuzz_fill = (double)s_total / afl->n_fuzz_size;
if (likely(n_rare)) {
u64 n_abundant = afl->fsrv.total_execs - n_rare;
if (unlikely(n_abundant > afl->fsrv.total_execs)) /* Check underflow*/ {
FATAL(
"Total number of paths or executions is less than rare "
"executions");
}
double c_rare = 1 - (double)afl->path_frequenzy[0] / n_rare;
if (likely(n_rare != 1)) { /* guard the (n_rare - 1) division below */
double s_lower_estimate = 0;
if (c_rare == 0) /* all singletons */ {
s_lower_estimate =
(((double)afl->fsrv.total_execs - 1) / afl->fsrv.total_execs *
afl->path_frequenzy[0] * (afl->path_frequenzy[0] - 1) / 2.0);
} else {
double variation_rare =
(s_rare / c_rare) * ((double)sum_i / (n_rare * (n_rare - 1))) -
1;
if (variation_rare < 0) variation_rare = 0;
s_lower_estimate = afl->abundant_paths + s_rare / c_rare +
afl->path_frequenzy[0] / c_rare * variation_rare;
}
afl->upper_coverage_estimate =
(double)s_total / (s_lower_estimate + s_total);
double pi_zero =
(double)s_lower_estimate / (s_lower_estimate + s_total);
if (pi_zero < 0.5) {
afl->lower_coverage_estimate =
s_total / ((double)2 * s_total - afl->max_path_count);
} else {
double p_max_minus_one =
(double)(s_total - afl->max_path_count) / s_total,
p_max_minus_two = (double)(s_total - afl->max_path_count -
afl->second_max_path_count) /
s_total;
double pi_max_minus_one = pi_zero + (1 - pi_zero) * p_max_minus_one,
pi_max_minus_two = pi_zero + (1 - pi_zero) * p_max_minus_two;
double normalisation_factor = 0;
if (p_max_minus_one == p_max_minus_two) {
normalisation_factor = (1 - p_max_minus_two);
} else {
normalisation_factor =
(1 - p_max_minus_two) *
((p_max_minus_one - p_max_minus_two) /
(p_max_minus_one -
p_max_minus_two * pi_max_minus_two / pi_max_minus_one));
}
double estimated_paths =
s_total /
(1 - normalisation_factor / (1 - normalisation_factor) *
p_max_minus_two / (1 - p_max_minus_two));
afl->lower_coverage_estimate = (double)s_total / estimated_paths;
}
}
} else /*n_rare = 0*/ {
afl->lower_coverage_estimate = 1;
afl->upper_coverage_estimate = 1;
}
}
}
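The branch above computes an abundance-based species-richness bound in the spirit of the Chao1 estimator used by STADS, where distinct execution paths play the role of species. A stripped-down sketch of that family of estimators, with hypothetical helper names and none of AFL++'s bookkeeping (this is not the exact computation above):

```c
#include <assert.h>

/* Chao1-style lower bound on the total number of species (paths), given the
   observed species count s_obs, the singletons f1 (species seen exactly once)
   and the doubletons f2 (species seen exactly twice). */
static double chao1_total(double s_obs, double f1, double f2) {

  if (f2 > 0) { return s_obs + (f1 * f1) / (2.0 * f2); }
  return s_obs + f1 * (f1 - 1) / 2.0; /* all-singleton fallback */

}

/* Estimated coverage = observed species / estimated total species. */
static double chao1_coverage(double s_obs, double f1, double f2) {

  return s_obs / chao1_total(s_obs, f1, f2);

}
```

Intuition: many singletons relative to doubletons means many species are still undiscovered, so the estimated total grows and the coverage estimate shrinks.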
if (afl->pizza_is_served) {
show_stats_pizza(afl);
@@ -1044,9 +1249,17 @@ void show_stats_normal(afl_state_t *afl) {
SAYF(bSTG bV bSTOP " total tmouts : " cRST "%-20s" bSTG bV "\n", tmp);
/* Aaaalmost there... hold on! */
if (likely(!afl->coverage_estimation)) {
SAYF(bVR bH cCYA bSTOP " fuzzing strategy yields " bSTG bH10 bH2 bHT bH10
bH2 bH bHB bH bSTOP cCYA " item geometry " bSTG bH5 bH2 bVL "\n");
} else {
SAYF(bVR bH cCYA bSTOP " code coverage information " bSTG bH10 bHT bH10 bH2
bH bHB bH bSTOP cCYA " item geometry " bSTG bH5 bH2 bVL "\n");
}
if (unlikely(afl->custom_only)) {
@@ -1068,9 +1281,26 @@ void show_stats_normal(afl_state_t *afl) {
}
if (likely(!afl->coverage_estimation)) {
SAYF(bV bSTOP " bit flips : " cRST "%-36s " bSTG bV bSTOP
" levels : " cRST "%-10s" bSTG bV "\n",
tmp, u_stringify_int(IB(0), afl->max_depth));
} else {
if (afl->upper_coverage_estimate ||
afl->lower_coverage_estimate) /* If both are 0 they are not yet
calculated */
sprintf(tmp, "%6.2f%% - %6.2f%%", afl->lower_coverage_estimate * 100,
afl->upper_coverage_estimate * 100);
else
sprintf(tmp, "not yet calculated!");
SAYF(bV bSTOP " coverage : " cRST "%-27s" bSTG bV bSTOP
" levels : " cRST "%-10s" bSTG bV "\n",
tmp, u_stringify_int(IB(0), afl->max_depth));
}
if (unlikely(!afl->skip_deterministic)) {
@@ -1084,9 +1314,38 @@ void show_stats_normal(afl_state_t *afl) {
}
if (likely(!afl->coverage_estimation)) {
SAYF(bV bSTOP " byte flips : " cRST "%-36s " bSTG bV bSTOP
" pending : " cRST "%-10s" bSTG bV "\n",
tmp, u_stringify_int(IB(0), afl->pending_not_fuzzed));
} else {
sprintf(tmp, "%.2f%%", afl->n_fuzz_fill * 100);
SAYF(bV bSTOP " collision probability : ");
if (afl->n_fuzz_fill < 0.05) {
SAYF(cRST);
} else if (afl->n_fuzz_fill < 0.25) {
SAYF(bSTG);
} else if (afl->n_fuzz_fill < 0.5) {
SAYF(cYEL);
} else {
SAYF(cLRD);
}
SAYF("%-27s" bSTG bV bSTOP " pending : " cRST "%-10s" bSTG bV "\n", tmp,
u_stringify_int(IB(0), afl->pending_not_fuzzed));
}
if (unlikely(!afl->skip_deterministic)) {
@@ -1100,9 +1359,30 @@ void show_stats_normal(afl_state_t *afl) {
}
if (likely(!afl->coverage_estimation)) {
SAYF(bV bSTOP " arithmetics : " cRST "%-36s " bSTG bV bSTOP
" pend fav : " cRST "%-10s" bSTG bV "\n",
tmp, u_stringify_int(IB(0), afl->pending_favored));
} else {
if (unlikely(afl->custom_only)) {
strcpy(tmp, "disabled (custom-mutator-only mode)");
} else {
strcpy(tmp, "disabled (default, enable with -D)");
}
SAYF(bVR bH cCYA bSTOP
" fuzzing strategy yields " bSTG bH20 bH5 bH bVL bSTOP
" pend fav : " cRST "%-10s" bSTG bV "\n",
u_stringify_int(IB(0), afl->pending_favored));
}
if (unlikely(!afl->skip_deterministic)) {
@@ -2175,6 +2455,19 @@ void show_stats_pizza(afl_state_t *afl) {
}
if (unlikely(afl->coverage_estimation)) {
SAYF(SET_G1 "\n" bSTG bVR bH cCYA bSTOP
" code coverage information " bSTG bH20 bH20 bH5 bH2 bVL "\n");
if (afl->upper_coverage_estimate && afl->lower_coverage_estimate)
sprintf(tmp, "%6.2f%% - %6.2f%%", afl->lower_coverage_estimate * 100,
afl->upper_coverage_estimate * 100);
else
sprintf(tmp, "oven not hot enough!");
SAYF(bV bSTOP " coverage : " cRST "%-63s" bSTG bV, tmp);
}
/* Last line */
SAYF(SET_G1 "\n" bSTG bLB bH30 bH20 bH2 bH20 bH2 bH bRB bSTOP cRST RESET_G1);

@@ -157,8 +157,6 @@ static void usage(u8 *argv0, int more_help) {
" -Q - use binary-only instrumentation (QEMU mode)\n"
" -U - use unicorn-based instrumentation (Unicorn mode)\n"
" -W - use qemu-based instrumentation with Wine (Wine mode)\n"
" -X - use VM fuzzing (NYX mode - standalone mode)\n"
" -Y - use VM fuzzing (NYX mode - multiple instances mode)\n"
#endif
@@ -251,11 +249,13 @@ static void usage(u8 *argv0, int more_help) {
" (must contain abort_on_error=1 and symbolize=0)\n"
"MSAN_OPTIONS: custom settings for MSAN\n"
" (must contain exitcode="STRINGIFY(MSAN_ERROR)" and symbolize=0)\n"
"AFL_ABUNDANT_CUT_OFF: cut-off for code coverage estimators (default 10)\n"
"AFL_AUTORESUME: resume fuzzing if directory specified by -o already exists\n"
"AFL_BENCH_JUST_ONE: run the target just once\n"
"AFL_BENCH_UNTIL_CRASH: exit soon when the first crashing input has been found\n"
"AFL_CMPLOG_ONLY_NEW: do not run cmplog on initial testcases (good for resumes!)\n"
"AFL_CODE_COVERAGE: enable code coverage estimators\n"
"AFL_CRASH_EXITCODE: optional child exit code to be interpreted as crash\n"
"AFL_CUSTOM_MUTATOR_LIBRARY: lib with afl_custom_fuzz() to mutate inputs\n"
"AFL_CUSTOM_MUTATOR_ONLY: avoid AFL++'s internal mutators\n"
"AFL_CYCLE_SCHEDULES: after completing a cycle, switch to a different -p schedule\n"
@@ -1542,6 +1542,22 @@ int main(int argc, char **argv_orig, char **envp) {
}
ACTF("Getting to work...");
{
char *n_fuzz_size = get_afl_env("AFL_N_FUZZ_SIZE");
char *end = NULL;
if (n_fuzz_size == NULL ||
(afl->n_fuzz_size = strtoull(n_fuzz_size, &end, 0)) == 0) {
ACTF("Using default n_fuzz_size of 1 << 21");
afl->n_fuzz_size = (1 << 21);
}
if (get_afl_env("AFL_CRASH_ON_HASH_COLLISION"))
afl->crash_on_hash_collision = 1;
}
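The block above falls back to the default when `AFL_N_FUZZ_SIZE` is unset or does not parse to a positive number. The same pattern as a standalone helper (hypothetical name; slightly stricter than the code above in that it also rejects trailing garbage):

```c
#include <assert.h>
#include <stdlib.h>

/* Parse an unsigned 64-bit value from an environment string; fall back to
   dflt when s is NULL, zero, unparsable, or followed by trailing garbage.
   Base 0 lets strtoull accept decimal, octal, and 0x-prefixed hex. */
static unsigned long long parse_u64_or_default(const char *s,
                                               unsigned long long dflt) {

  if (!s) { return dflt; }
  char              *end = NULL;
  unsigned long long v = strtoull(s, &end, 0);
  if (v == 0 || end == s || *end != '\0') { return dflt; }
  return v;

}
```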
switch (afl->schedule) {
@@ -1583,7 +1599,24 @@ int main(int argc, char **argv_orig, char **envp) {
/* Dynamically allocate memory for AFLFast schedules */
if (afl->schedule >= FAST && afl->schedule <= RARE) {
afl->n_fuzz = ck_alloc((afl->n_fuzz_size * sizeof(u32) /
(MAX_ALLOC / sizeof(u32) * sizeof(u32)) +
1) *
sizeof(u32 *));
if (afl->n_fuzz_size * sizeof(u32) %
(MAX_ALLOC / sizeof(u32) * sizeof(u32)))
afl->n_fuzz[afl->n_fuzz_size * sizeof(u32) /
(MAX_ALLOC / sizeof(u32) * sizeof(u32))] =
ck_alloc(afl->n_fuzz_size * sizeof(u32) %
(MAX_ALLOC / sizeof(u32) * sizeof(u32)));
for (u32 i = 0; i < afl->n_fuzz_size * sizeof(u32) /
(MAX_ALLOC / sizeof(u32) * sizeof(u32));
i++) {
afl->n_fuzz[i] = ck_alloc(MAX_ALLOC);
}
}
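Because `n_fuzz_size * sizeof(u32)` can exceed `MAX_ALLOC`, the table above is allocated as an array of chunk pointers, and a flat index has to be split into a chunk number and an offset. A minimal sketch of that access pattern (chunk size and names are assumptions for the demo, not AFL++'s actual values):

```c
#include <assert.h>
#include <stdint.h>

#define CHUNK_BYTES 16u /* stand-in for MAX_ALLOC, shrunk for the demo */
#define CHUNK_ELEMS (CHUNK_BYTES / sizeof(uint32_t))

/* Translate a flat index into (chunk, offset) for a chunked u32 table. */
static uint32_t *chunk_slot(uint32_t **table, uint64_t idx) {

  return &table[idx / CHUNK_ELEMS][idx % CHUNK_ELEMS];

}

/* Tiny self-test: write through the flat index, read the raw chunk. */
static uint32_t chunk_demo(void) {

  uint32_t  a[CHUNK_ELEMS] = {0}, b[CHUNK_ELEMS] = {0};
  uint32_t *chunks[2] = {a, b};
  *chunk_slot(chunks, CHUNK_ELEMS + 1) = 42; /* chunk 1, offset 1 */
  return b[1];

}
```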
@@ -1592,6 +1625,82 @@ int main(int argc, char **argv_orig, char **envp) {
if (get_afl_env("AFL_NO_ARITH")) { afl->no_arith = 1; }
if (get_afl_env("AFL_SHUFFLE_QUEUE")) { afl->shuffle_queue = 1; }
if (get_afl_env("AFL_EXPAND_HAVOC_NOW")) { afl->expand_havoc = 1; }
if (get_afl_env("AFL_CODE_COVERAGE")) {
if (afl->skip_deterministic == 0) {
FATAL("AFL_CODE_COVERAGE is not compatible with -D");
}
if (afl->in_place_resume) {
FATAL("AFL_CODE_COVERAGE is not compatible with '-i -'/AFL_AUTORESUME");
}
afl->coverage_estimation = 1;
char *cut_off = get_afl_env("AFL_ABUNDANT_CUT_OFF");
if (cut_off == NULL || (afl->abundant_cut_off = atoi(cut_off)) <= 0) {
WARNF(
"AFL_CODE_COVERAGE is set but AFL_ABUNDANT_CUT_OFF is not a valid "
"positive integer, using the default of 10");
afl->abundant_cut_off = 10;
}
afl->path_frequenzy = ck_alloc((afl->abundant_cut_off) * sizeof(u32));
if (afl->n_fuzz == NULL) {
afl->n_fuzz = ck_alloc((afl->n_fuzz_size * sizeof(u32) /
(MAX_ALLOC / sizeof(u32) * sizeof(u32)) +
1) *
sizeof(u32 *));
if (afl->n_fuzz_size * sizeof(u32) %
(MAX_ALLOC / sizeof(u32) * sizeof(u32)))
afl->n_fuzz[afl->n_fuzz_size * sizeof(u32) /
(MAX_ALLOC / sizeof(u32) * sizeof(u32))] =
ck_alloc(afl->n_fuzz_size * sizeof(u32) %
(MAX_ALLOC / sizeof(u32) * sizeof(u32)));
for (u32 i = 0; i < afl->n_fuzz_size * sizeof(u32) /
(MAX_ALLOC / sizeof(u32) * sizeof(u32));
i++) {
afl->n_fuzz[i] = ck_alloc(MAX_ALLOC);
}
}
}
#if defined COVERAGE_ESTIMATION_LOGGING && COVERAGE_ESTIMATION_LOGGING
afl->n_fuzz_logged = ck_alloc((afl->n_fuzz_size * sizeof(u32) /
(MAX_ALLOC / sizeof(u32) * sizeof(u32)) +
1) *
sizeof(u32 *));
if (afl->n_fuzz_size * sizeof(u32) % (MAX_ALLOC / sizeof(u32) * sizeof(u32)))
afl->n_fuzz_logged[afl->n_fuzz_size * sizeof(u32) /
(MAX_ALLOC / sizeof(u32) * sizeof(u32))] =
ck_alloc(afl->n_fuzz_size * sizeof(u32) %
(MAX_ALLOC / sizeof(u32) * sizeof(u32)));
for (u32 i = 0; i < afl->n_fuzz_size * sizeof(u32) /
(MAX_ALLOC / sizeof(u32) * sizeof(u32));
i++) {
afl->n_fuzz_logged[i] = ck_alloc(MAX_ALLOC);
}
#endif
if (get_afl_env("AFL_ABUNDANT_CUT_OFF") && !afl->coverage_estimation) {
FATAL("AFL_ABUNDANT_CUT_OFF needs AFL_CODE_COVERAGE set!");
}
if (afl->afl_env.afl_autoresume) {