Compare commits
1172 Commits
@@ -6,7 +6,7 @@
# Written and maintaned by Andrea Fioraldi <andreafioraldi@gmail.com>
#
# Copyright 2015, 2016, 2017 Google Inc. All rights reserved.
# Copyright 2019-2020 AFLplusplus Project. All rights reserved.
# Copyright 2019-2022 AFLplusplus Project. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -19,40 +19,22 @@ import subprocess
import sys
import os
import re
import shutil

# string_re = re.compile('(\\"(\\\\.|[^"\\\\])*\\")') # future use

with open(".clang-format") as f:
fmt = f.read()

CLANG_FORMAT_BIN = os.getenv("CLANG_FORMAT_BIN")
if CLANG_FORMAT_BIN is None:
o = 0
try:
p = subprocess.Popen(["clang-format-11", "--version"], stdout=subprocess.PIPE)
o, _ = p.communicate()
o = str(o, "utf-8")
o = re.sub(r".*ersion ", "", o)
# o = o[len("clang-format version "):].strip()
o = o[: o.find(".")]
o = int(o)
except:
print("clang-format-11 is needed. Aborted.")
exit(1)
# if o < 7:
# if subprocess.call(['which', 'clang-format-7'], stdout=subprocess.PIPE) == 0:
# CLANG_FORMAT_BIN = 'clang-format-7'
# elif subprocess.call(['which', 'clang-format-8'], stdout=subprocess.PIPE) == 0:
# CLANG_FORMAT_BIN = 'clang-format-8'
# elif subprocess.call(['which', 'clang-format-9'], stdout=subprocess.PIPE) == 0:
# CLANG_FORMAT_BIN = 'clang-format-9'
# elif subprocess.call(['which', 'clang-format-11'], stdout=subprocess.PIPE) == 0:
# CLANG_FORMAT_BIN = 'clang-format-11'
# else:
# print ("clang-format 7 or above is needed. Aborted.")
# exit(1)
else:
CLANG_FORMAT_BIN = "clang-format-11"
CURRENT_LLVM = os.getenv('LLVM_VERSION', 14)
CLANG_FORMAT_BIN = os.getenv("CLANG_FORMAT_BIN", "")

if shutil.which(CLANG_FORMAT_BIN) is None:
CLANG_FORMAT_BIN = f"clang-format-{CURRENT_LLVM}"

if shutil.which(CLANG_FORMAT_BIN) is None:
print(f"[!] clang-format-{CURRENT_LLVM} is needed. Aborted.")
exit(1)

COLUMN_LIMIT = 80
for line in fmt.split("\n"):

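The net effect of the hunk above is that the formatting helper stops probing a hard-coded clang-format-11 through subprocess and instead resolves the formatter from the environment. A minimal standalone sketch of that lookup (a condensed restatement of the diff above, not the full script; the "14" default mirrors the LLVM_VERSION fallback shown in the diff) could look like this:

```python
import os
import shutil
import sys

# Default LLVM major version; used when LLVM_VERSION is not set.
current_llvm = os.getenv("LLVM_VERSION", "14")

# An explicitly requested binary wins; otherwise fall back to the
# versioned clang-format name that matches the LLVM toolchain.
clang_format_bin = os.getenv("CLANG_FORMAT_BIN", "")
if shutil.which(clang_format_bin) is None:
    clang_format_bin = f"clang-format-{current_llvm}"

if shutil.which(clang_format_bin) is None:
    print(f"[!] clang-format-{current_llvm} is needed. Aborted.")
    sys.exit(1)

print(f"[*] Using {clang_format_bin}")
```
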
@@ -1,65 +1,75 @@
!/coresight_mode
*.dSYM
*.o
*.pyc
*.so
.sync_tmp
.test
.test2
.sync_tmp
*.o
*.so
*.pyc
*.dSYM
as
ld
in
out
core*
.git
.dockerignore
.github
CITATION.cff
CONTRIBUTING.md
Changelog.md
Dockerfile
LICENSE
TODO.md
afl-analyze
afl-as
afl-clang
afl-clang\+\+
afl-clang-fast
afl-clang-fast\+\+
afl-clang-lto
afl-clang-lto\+\+
afl-fuzz
afl-g\+\+
afl-gcc
afl-gcc-fast
afl-g\+\+-fast
afl-gotcpu
afl-ld
afl-ld-lto
afl-qemu-trace
afl-showmap
afl-tmin
afl-analyze.8
afl-as
afl-as.8
afl-clang-fast\+\+.8
afl-clang
afl-clang-fast
afl-clang-fast.8
afl-clang-fast\+\+
afl-clang-fast\+\+.8
afl-clang-lto
afl-clang-lto.8
afl-clang-lto\+\+
afl-clang-lto\+\+.8
afl-clang\+\+
afl-cmin.8
afl-cmin.bash.8
afl-fuzz
afl-fuzz.8
afl-gcc.8
afl-gcc-fast.8
afl-g\+\+
afl-g\+\+-fast
afl-g\+\+-fast.8
afl-gcc
afl-gcc-fast
afl-gcc-fast.8
afl-gcc.8
afl-gotcpu
afl-gotcpu.8
afl-ld
afl-ld-lto
afl-plot.8
afl-qemu-trace
afl-showmap
afl-showmap.8
afl-system-config.8
afl-tmin
afl-tmin.8
afl-whatsup.8
as
core*
examples/afl_frida/afl-frida
examples/afl_frida/frida-gum-example.c
examples/afl_frida/frida-gum.h
examples/afl_frida/libtestinstr.so
examples/afl_network_proxy/afl-network-client
examples/afl_network_proxy/afl-network-server
in
ld
out
qemu_mode/libcompcov/compcovtest
qemu_mode/qemu-*
test/unittests/unit_hash
test/unittests/unit_list
test/unittests/unit_maybe_alloc
test/unittests/unit_preallocable
test/unittests/unit_rand
unicorn_mode/samples/*/\.test-*
unicorn_mode/samples/*/output
unicorn_mode/unicornafl
test/unittests/unit_maybe_alloc
test/unittests/unit_preallocable
test/unittests/unit_list
test/unittests/unit_rand
test/unittests/unit_hash
examples/afl_network_proxy/afl-network-server
examples/afl_network_proxy/afl-network-client
examples/afl_frida/afl-frida
examples/afl_frida/libtestinstr.so
examples/afl_frida/frida-gum-example.c
examples/afl_frida/frida-gum.h

.github/ISSUE_TEMPLATE/bug_report.md (7 changed lines, vendored)
@@ -8,10 +8,11 @@ assignees: ''
---

**IMPORTANT**
1. You have verified that the issue to be present in the current `dev` branch
2. Please supply the command line options and relevant environment variables, e.g. a copy-paste of the contents of `out/default/fuzzer_setup`
1. You have verified that the issue to be present in the current `dev` branch.
2. Please supply the command line options and relevant environment variables,
e.g., a copy-paste of the contents of `out/default/fuzzer_setup`.

Thank you for making afl++ better!
Thank you for making AFL++ better!

**Describe the bug**
A clear and concise description of what the bug is.

.github/workflows/build_aflplusplus_docker.yaml (25 changed lines, vendored; file removed)
@@ -1,25 +0,0 @@
name: Publish Docker Images

on:
push:
branches: [ stable ]
# paths:
# - Dockerfile

jobs:
push_to_registry:
name: Push Docker images to Dockerhub
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@master
- name: Login to Dockerhub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_TOKEN }}
- name: Publish aflpp to Registry
uses: docker/build-push-action@v2
with:
context: .
push: true
tags: aflplusplus/aflplusplus:latest

.github/workflows/ci.yml (44 changed lines, vendored)
@@ -2,29 +2,55 @@ name: CI

on:
push:
branches: [ stable, dev ]
branches:
- stable
- dev
pull_request:
branches: [ stable, dev ]
branches:
- dev # No need for stable-pull-request, as that equals dev-push

jobs:
build:
runs-on: '${{ matrix.os }}'
linux:
runs-on: "${{ matrix.os }}"
strategy:
matrix:
os: [ubuntu-20.04, ubuntu-18.04]
os: [ubuntu-22.04, ubuntu-20.04, ubuntu-18.04]
env:
AFL_SKIP_CPUFREQ: 1
AFL_I_DONT_CARE_ABOUT_MISSING_CRASHES: 1
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: debug
run: apt-cache search plugin-dev | grep gcc- ; echo ; apt-cache search clang-format- | grep clang-format-
run: apt-cache search plugin-dev | grep gcc-; echo; apt-cache search clang-format- | grep clang-format-
- name: update
run: sudo apt-get update && sudo apt-get upgrade -y
- name: install packages
run: sudo apt-get install -y -m -f --install-suggests build-essential git libtool libtool-bin automake bison libglib2.0-0 clang llvm-dev libc++-dev findutils libcmocka-dev python3-dev python3-setuptools ninja-build
- name: compiler installed
run: gcc -v ; echo ; clang -v
run: gcc -v; echo; clang -v
- name: install gcc plugin
run: sudo apt-get install -y -m -f --install-suggests $(readlink /usr/bin/gcc)-plugin-dev
- name: build afl++
run: make distrib ASAN_BUILD=1
- name: run tests
run: sudo -E ./afl-system-config ; export AFL_SKIP_CPUFREQ=1 ; make tests
run: sudo -E ./afl-system-config; make tests
macos:
runs-on: macOS-latest
env:
AFL_MAP_SIZE: 65536
AFL_SKIP_CPUFREQ: 1
AFL_I_DONT_CARE_ABOUT_MISSING_CRASHES: 1
steps:
- uses: actions/checkout@v3
- name: install
run: brew install make gcc llvm
- name: fix install
run: cd /usr/local/bin; ln -s gcc-11 gcc; ln -s g++-11 g++; which gcc; gcc -v
- name: build
run: export PATH=/usr/local/Cellar/llvm/*/":$PATH"; export CC=/usr/local/Cellar/llvm/*/bin/clang; export CXX="$CC"++; export LLVM_CONFIG=/usr/local/Cellar/llvm/*/bin/llvm-config; sudo -E ./afl-system-config; gmake ASAN_BUILD=1
- name: frida
run: export CC=/usr/local/Cellar/llvm/*/bin/clang; export CXX="$CC"++; cd frida_mode; gmake
- name: run tests
run: sudo -E ./afl-system-config; export CC=/usr/local/Cellar/llvm/*/bin/clang; export CXX="$CC"++; export PATH=/usr/local/Cellar/llvm/*/":/usr/local/bin:$PATH"; export LLVM_CONFIG=/usr/local/Cellar/llvm/*/bin/llvm-config; gmake tests
- name: force frida test for MacOS
run: export AFL_PATH=`pwd`; /usr/local/bin/gcc -o test-instr test-instr.c; mkdir in; echo > in/in; AFL_NO_UI=1 ./afl-fuzz -O -i in -o out -V 5 -- ./test-instr

.github/workflows/code-format.yml (33 changed lines, vendored; new file)
@@ -0,0 +1,33 @@
name: Formatting

on:
push:
branches:
- stable
- dev
pull_request:
branches:
- dev # No need for stable-pull-request, as that equals dev-push

jobs:
code-format-check:
name: Check code format
if: ${{ 'false' == 'true' }} # Disable the job
runs-on: ubuntu-22.04
container: docker.io/aflplusplus/aflplusplus:dev
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Format
run: |
git config --global --add safe.directory /__w/AFLplusplus/AFLplusplus
apt-get update
apt-get install -y clang-format-${LLVM_VERSION}
make code-format
- name: Check if code needed formatting
run: |
git --no-pager -c color.ui=always diff HEAD
if ! git diff HEAD --quiet; then
echo "[!] Please run 'make code-format' and push its changes."
exit 1
fi

.github/workflows/codeql-analysis.yml (43 changed lines, vendored)
@@ -2,31 +2,32 @@ name: "CodeQL"

on:
push:
branches: [ stable, dev ]
branches:
- stable
- dev
pull_request:
branches: [ stable, dev ]
branches:
- dev # No need for stable-pull-request, as that equals dev-push

jobs:
analyze:
name: Analyze
runs-on: ubuntu-latest

strategy:
fail-fast: false
matrix:
language: [ 'cpp' ]

container: # We use a previous image as it's expected to have all the dependencies
image: docker.io/aflplusplus/aflplusplus:dev
steps:
- name: Checkout repository
uses: actions/checkout@v2

- name: Initialize CodeQL
uses: github/codeql-action/init@v1
with:
languages: ${{ matrix.language }}

- name: Autobuild
uses: github/codeql-action/autobuild@v1

- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v1
- name: Fix for using external repo in container build # https://github.com/actions/checkout/issues/760
run: git config --global --add safe.directory /__w/AFLplusplus/AFLplusplus
- name: Checkout
uses: actions/checkout@v3
- name: Initialize CodeQL
uses: github/codeql-action/init@v2
with:
languages: cpp, python
- name: Build AFLplusplus # Rebuild because CodeQL needs to monitor the build process
env:
CC: gcc # These are symlinked to the version used in the container build
CXX: g++
run: make -i all # Best effort using -i
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v2

.github/workflows/container.yml (75 changed lines, vendored; new file)
@@ -0,0 +1,75 @@
name: Container
on:
push:
branches:
- stable
- dev
tags:
- "*"
pull_request:
branches:
- dev # No need for stable-pull-request, as that equals dev-push

jobs:
build-and-test-amd64:
name: Test amd64 image
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Build amd64
uses: docker/build-push-action@v3
with:
context: .
tags: aflplusplus:test-amd64
load: true
cache-to: type=gha,mode=max
build-args: |
TEST_BUILD=1
- name: Test amd64
run: >
docker run --rm aflplusplus:test-amd64 bash -c "
apt-get update &&
apt-get install -y libcmocka-dev &&
make -i tests
"

push:
name: Push amd64 and arm64 images
runs-on: ubuntu-latest
needs:
- build-and-test-amd64
if: ${{ github.event_name == 'push' && github.repository == 'AFLplusplus/AFLplusplus' }}
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
with:
platforms: arm64
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Login to docker.io
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_TOKEN }}
- name: Set tags to push
id: push-tags
run: |
PUSH_TAGS=docker.io/aflplusplus/aflplusplus:${GITHUB_REF_NAME}
if [ "${GITHUB_REF_NAME}" = "stable" ]; then
PUSH_TAGS=${PUSH_TAGS},docker.io/aflplusplus/aflplusplus:latest
fi
export PUSH_TAGS
echo "::set-output name=PUSH_TAGS::${PUSH_TAGS}"
- name: Push to docker.io registry
uses: docker/build-push-action@v3
with:
context: .
platforms: linux/amd64,linux/arm64
push: true
tags: ${{ steps.push-tags.outputs.PUSH_TAGS }}
cache-from: type=gha

.github/workflows/rust_custom_mutator.yml (13 changed lines, vendored)
@@ -2,9 +2,12 @@ name: Rust Custom Mutators

on:
push:
branches: [ stable, dev ]
branches:
- stable
- dev
pull_request:
branches: [ stable, dev ]
branches:
- dev # No need for stable-pull-request, as that equals dev-push

jobs:
test:
@@ -15,9 +18,9 @@ jobs:
working-directory: custom_mutators/rust
strategy:
matrix:
os: [ubuntu-20.04]
os: [ubuntu-22.04, ubuntu-20.04]
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Install Rust Toolchain
uses: actions-rs/toolchain@v1
with:
@@ -27,4 +30,4 @@ jobs:
- name: Run General Tests
run: cargo test
- name: Run Tests for afl_internals feature flag
run: cd custom_mutator && cargo test --features=afl_internals
run: cd custom_mutator && cargo test --features=afl_internals

.gitignore (12 changed lines, vendored)
@@ -30,6 +30,7 @@ afl-g++-fast
afl-gotcpu
afl-ld
afl-ld-lto
afl-cs-proxy
afl-qemu-trace
afl-showmap
afl-tmin
@@ -54,6 +55,7 @@ afl-showmap.8
afl-system-config.8
afl-tmin.8
afl-whatsup.8
afl-persistent-config.8
afl-c++
afl-cc
afl-lto
@@ -85,4 +87,14 @@ gmon.out
afl-frida-trace.so
utils/afl_network_proxy/afl-network-client
utils/afl_network_proxy/afl-network-server
utils/plot_ui/afl-plot-ui
*.o.tmp
utils/afl_proxy/afl-proxy
utils/optimin/build
utils/optimin/optimin
utils/persistent_mode/persistent_demo
utils/persistent_mode/persistent_demo_new
utils/persistent_mode/test-instr
!coresight_mode
!coresight_mode/coresight-trace
vuln_prog

.gitmodules (15 changed lines, vendored)
@@ -10,3 +10,18 @@
[submodule "custom_mutators/gramatron/json-c"]
path = custom_mutators/gramatron/json-c
url = https://github.com/json-c/json-c
[submodule "coresight_mode/patchelf"]
path = coresight_mode/patchelf
url = https://github.com/NixOS/patchelf.git
[submodule "coresight_mode/coresight-trace"]
path = coresight_mode/coresight-trace
url = https://github.com/RICSecLab/coresight-trace.git
[submodule "nyx_mode/libnyx"]
path = nyx_mode/libnyx
url = https://github.com/nyx-fuzz/libnyx.git
[submodule "nyx_mode/QEMU-Nyx"]
path = nyx_mode/QEMU-Nyx
url = https://github.com/nyx-fuzz/qemu-nyx.git
[submodule "nyx_mode/packer"]
path = nyx_mode/packer
url = https://github.com/nyx-fuzz/packer.git

Android.bp (19 changed lines)
@@ -1,3 +1,11 @@
//
// NOTE: This file is outdated. None of the AFL++ team uses Android hence
// we need users to keep this updated.
// In the current state it will likely fail, please send fixes!
// Also, this should build frida_mode.
//


cc_defaults {
name: "afl-defaults",

@@ -68,6 +76,7 @@ cc_binary {
srcs: [
"src/afl-fuzz*.c",
"src/afl-common.c",
"src/afl-forkserver.c",
"src/afl-sharedmem.c",
"src/afl-forkserver.c",
"src/afl-performance.c",
@@ -175,7 +184,7 @@ cc_binary_host {
}

cc_library_static {
name: "afl-llvm-rt",
name: "afl-compiler-rt",
compile_multilib: "64",
vendor_available: true,
host_supported: true,
@@ -225,6 +234,7 @@ cc_library_headers {
],
}

/*
cc_prebuilt_library_static {
name: "libfrida-gum",
compile_multilib: "64",
@@ -272,7 +282,7 @@ cc_binary {
],

static_libs: [
"afl-llvm-rt",
"afl-compiler-rt",
"libfrida-gum",
],

@@ -290,6 +300,7 @@ cc_binary {
"utils/afl_frida/android",
],
}
*/

cc_binary {
name: "afl-fuzz-32",
@@ -346,7 +357,7 @@ cc_binary_host {
}

cc_library_static {
name: "afl-llvm-rt-32",
name: "afl-compiler-rt-32",
compile_multilib: "32",
vendor_available: true,
host_supported: true,
@@ -385,6 +396,7 @@ cc_library_static {
],
}

/*
cc_prebuilt_library_static {
name: "libfrida-gum-32",
compile_multilib: "32",
@@ -400,6 +412,7 @@ cc_prebuilt_library_static {
"utils/afl_frida/android/arm",
],
}
*/

subdirs = [
"custom_mutators",

CITATION.cff (31 changed lines; new file)
@@ -0,0 +1,31 @@
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
- given-names: Marc
family-names: Heuse
email: mh@mh-sec.de
- given-names: Heiko
family-names: Eißfeldt
email: heiko.eissfeldt@hexco.de
- given-names: Andrea
family-names: Fioraldi
email: andreafioraldi@gmail.com
- given-names: Dominik
family-names: Maier
email: mail@dmnk.co
title: "AFL++"
version: 4.00c
type: software
date-released: 2022-01-26
url: "https://github.com/AFLplusplus/AFLplusplus"
keywords:
- fuzzing
- fuzzer
- fuzz-testing
- instrumentation
- afl-fuzz
- qemu
- llvm
- unicorn-emulator
- securiy
license: AGPL-3.0-or-later

@@ -1,4 +1,6 @@
# How to submit a Pull Request to AFLplusplus
# Contributing to AFL++

## How to submit a pull request

All contributions (pull requests) must be made against our `dev` branch.

@@ -15,10 +17,43 @@ project, or added a file in a directory we already format, otherwise run:
./.custom-format.py -i file-that-you-have-created.c
```

Regarding the coding style, please follow the AFL style.
No camel case at all and use AFL's macros wherever possible
(e.g. WARNF, FATAL, MAP_SIZE, ...).
Regarding the coding style, please follow the AFL style. No camel case at all
and use AFL's macros wherever possible (e.g., WARNF, FATAL, MAP_SIZE, ...).

Remember that AFLplusplus has to build and run on many platforms, so
generalize your Makefiles/GNUmakefile (or your patches to our pre-existing
Makefiles) to be as generic as possible.
Remember that AFL++ has to build and run on many platforms, so generalize your
Makefiles/GNUmakefile (or your patches to our pre-existing Makefiles) to be as
generic as possible.

## How to contribute to the docs

We welcome contributions to our docs.

Before creating a new file, please check if your content matches an existing
file in one the following folders:

* [docs/](docs/) (this is where you can find most of our docs content)
* [frida_mode/](frida_mode/)
* [instrumentation/](instrumentation/)
* [qemu_mode/](qemu_mode/)
* [unicorn_mode/](unicorn_mode/)

When working on the docs, please keep the following guidelines in mind:

* Edit or create Markdown files and use Markdown markup.
* Do: fuzzing_gui_program.md
* Don't: fuzzing_gui_program.txt
* Use underscore in file names.
* Do: fuzzing_network_service.md
* Don't: fuzzing-network-service.md
* Use a maximum of 80 characters per line to make reading in a console easier.
* Make all pull requests against `dev`, see
[#how-to-submit-a-pull-request-to-afl](#how-to-submit-a-pull-request-to-afl).

And finally, here are some best practices for writing docs content:

* Use clear and simple language.
* Structure your content with headings and paragraphs.
* Use bulleted lists to present similar content in a way that makes it easy to
scan.
* Use numbered lists for procedures or prioritizing.
* Link to related content, for example, prerequisites or in-depth discussions.

Dockerfile (121 changed lines)
@@ -1,73 +1,88 @@
#
# This Dockerfile for AFLplusplus uses Ubuntu 20.04 focal and
# installs LLVM 11 from llvm.org for afl-clang-lto support :-)
# It also installs gcc/g++ 10 from the Ubuntu development platform
# since focal has gcc-10 but not g++-10 ...
# This Dockerfile for AFLplusplus uses Ubuntu 22.04 jammy and
# installs LLVM 14 for afl-clang-lto support.
#
# GCC 11 is used instead of 12 because genhtml for afl-cov doesn't like it.
#

FROM ubuntu:20.04 AS aflplusplus
FROM ubuntu:22.04 AS aflplusplus
LABEL "maintainer"="afl++ team <afl@aflplus.plus>"
LABEL "about"="AFLplusplus docker image"
LABEL "about"="AFLplusplus container image"

ARG DEBIAN_FRONTEND=noninteractive

env NO_ARCH_OPT 1

RUN apt-get update && \
apt-get -y install --no-install-suggests --no-install-recommends \
automake \
ninja-build \
bison flex \
build-essential \
git \
python3 python3-dev python3-setuptools python-is-python3 \
libtool libtool-bin \
libglib2.0-dev \
wget vim jupp nano bash-completion less \
apt-utils apt-transport-https ca-certificates gnupg dialog \
libpixman-1-dev \
gnuplot-nox \
&& rm -rf /var/lib/apt/lists/*

RUN echo "deb http://apt.llvm.org/focal/ llvm-toolchain-focal-12 main" >> /etc/apt/sources.list && \
wget -qO - https://apt.llvm.org/llvm-snapshot.gpg.key | apt-key add -

RUN echo "deb http://ppa.launchpad.net/ubuntu-toolchain-r/test/ubuntu focal main" >> /etc/apt/sources.list && \
apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 1E9377A2BA9EF27F
ENV NO_ARCH_OPT=1
ENV IS_DOCKER=1

RUN apt-get update && apt-get full-upgrade -y && \
apt-get -y install --no-install-suggests --no-install-recommends \
gcc-10 g++-10 gcc-10-plugin-dev gcc-10-multilib gcc-multilib gdb lcov \
clang-12 clang-tools-12 libc++1-12 libc++-12-dev \
libc++abi1-12 libc++abi-12-dev libclang1-12 libclang-12-dev \
libclang-common-12-dev libclang-cpp12 libclang-cpp12-dev liblld-12 \
liblld-12-dev liblldb-12 liblldb-12-dev libllvm12 libomp-12-dev \
libomp5-12 lld-12 lldb-12 llvm-12 llvm-12-dev llvm-12-runtime llvm-12-tools \
&& rm -rf /var/lib/apt/lists/*
apt-get install -y --no-install-recommends wget ca-certificates && \
rm -rf /var/lib/apt/lists/*

RUN update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-10 0
RUN update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-10 0
ENV LLVM_VERSION=14
ENV GCC_VERSION=11

ENV LLVM_CONFIG=llvm-config-12
RUN echo "deb [signed-by=/etc/apt/keyrings/llvm-snapshot.gpg.key] http://apt.llvm.org/jammy/ llvm-toolchain-jammy-${LLVM_VERSION} main" > /etc/apt/sources.list.d/llvm.list && \
wget -qO /etc/apt/keyrings/llvm-snapshot.gpg.key https://apt.llvm.org/llvm-snapshot.gpg.key

RUN apt-get update && \
apt-get -y install --no-install-recommends \
make cmake automake meson ninja-build bison flex \
git xz-utils bzip2 wget jupp nano bash-completion less vim joe ssh psmisc \
python3 python3-dev python3-setuptools python-is-python3 \
libtool libtool-bin libglib2.0-dev \
apt-utils apt-transport-https gnupg dialog \
gnuplot-nox libpixman-1-dev \
gcc-${GCC_VERSION} g++-${GCC_VERSION} gcc-${GCC_VERSION}-plugin-dev gdb lcov \
clang-${LLVM_VERSION} clang-tools-${LLVM_VERSION} libc++1-${LLVM_VERSION} \
libc++-${LLVM_VERSION}-dev libc++abi1-${LLVM_VERSION} libc++abi-${LLVM_VERSION}-dev \
libclang1-${LLVM_VERSION} libclang-${LLVM_VERSION}-dev \
libclang-common-${LLVM_VERSION}-dev libclang-cpp${LLVM_VERSION} \
libclang-cpp${LLVM_VERSION}-dev liblld-${LLVM_VERSION} \
liblld-${LLVM_VERSION}-dev liblldb-${LLVM_VERSION} liblldb-${LLVM_VERSION}-dev \
libllvm${LLVM_VERSION} libomp-${LLVM_VERSION}-dev libomp5-${LLVM_VERSION} \
lld-${LLVM_VERSION} lldb-${LLVM_VERSION} llvm-${LLVM_VERSION} \
llvm-${LLVM_VERSION}-dev llvm-${LLVM_VERSION}-runtime llvm-${LLVM_VERSION}-tools \
$([ "$(dpkg --print-architecture)" = "amd64" ] && echo gcc-${GCC_VERSION}-multilib gcc-multilib) \
$([ "$(dpkg --print-architecture)" = "arm64" ] && echo libcapstone-dev) && \
rm -rf /var/lib/apt/lists/*
# gcc-multilib is only used for -m32 support on x86
# libcapstone-dev is used for coresight_mode on arm64

RUN update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-${GCC_VERSION} 0 && \
update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-${GCC_VERSION} 0 && \
update-alternatives --install /usr/bin/clang clang /usr/bin/clang-${LLVM_VERSION} 0 && \
update-alternatives --install /usr/bin/clang++ clang++ /usr/bin/clang++-${LLVM_VERSION} 0

RUN wget -qO- https://sh.rustup.rs | CARGO_HOME=/etc/cargo sh -s -- -y -q --no-modify-path
ENV PATH=$PATH:/etc/cargo/bin

ENV LLVM_CONFIG=llvm-config-${LLVM_VERSION}
ENV AFL_SKIP_CPUFREQ=1
ENV AFL_TRY_AFFINITY=1
ENV AFL_I_DONT_CARE_ABOUT_MISSING_CRASHES=1

RUN git clone --depth=1 https://github.com/vanhauser-thc/afl-cov /afl-cov
RUN cd /afl-cov && make install && cd ..
RUN git clone --depth=1 https://github.com/vanhauser-thc/afl-cov && \
(cd afl-cov && make install) && rm -rf afl-cov

# Build currently broken
ENV NO_CORESIGHT=1
ENV NO_UNICORN_ARM64=1

COPY . /AFLplusplus
WORKDIR /AFLplusplus
COPY . .

RUN export CC=gcc-10 && export CXX=g++-10 && make clean && \
make distrib && make install && make clean
ARG CC=gcc-$GCC_VERSION
ARG CXX=g++-$GCC_VERSION

RUN sh -c 'echo set encoding=utf-8 > /root/.vimrc'
RUN echo '. /etc/bash_completion' >> ~/.bashrc
RUN echo 'alias joe="joe --wordwrap --joe_state -nobackup"' >> ~/.bashrc
RUN echo "export PS1='"'[afl++ \h] \w$(__git_ps1) \$ '"'" >> ~/.bashrc
ENV IS_DOCKER="1"
# Used in CI to prevent a 'make clean' which would remove the binaries to be tested
ARG TEST_BUILD

# Disabled until we have the container ready
#COPY --from=aflplusplus/afl-dyninst /usr/local/lib/libdyninstAPI_RT.so /usr/local/lib/libdyninstAPI_RT.so
#COPY --from=aflplusplus/afl-dyninst /afl-dyninst/libAflDyninst.so /usr/local/lib/libAflDyninst.so
RUN sed -i.bak 's/^ -/ /g' GNUmakefile && \
make clean && make distrib && \
([ "${TEST_BUILD}" ] || (make install && make clean)) && \
mv GNUmakefile.bak GNUmakefile

RUN echo "set encoding=utf-8" > /root/.vimrc && \
echo ". /etc/bash_completion" >> ~/.bashrc && \
echo 'alias joe="joe --wordwrap --joe_state -nobackup"' >> ~/.bashrc && \
echo "export PS1='"'[afl++ \h] \w$(__git_ps1) \$ '"'" >> ~/.bashrc

219
GNUmakefile
@ -10,7 +10,7 @@
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at:
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
# https://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
|
||||
# For Heiko:
|
||||
@ -32,7 +32,7 @@ VERSION = $(shell grep '^$(HASH)define VERSION ' ../config.h | cut -d '"' -f
|
||||
# PROGS intentionally omit afl-as, which gets installed elsewhere.
|
||||
|
||||
PROGS = afl-fuzz afl-showmap afl-tmin afl-gotcpu afl-analyze
|
||||
SH_PROGS = afl-plot afl-cmin afl-cmin.bash afl-whatsup afl-system-config
|
||||
SH_PROGS = afl-plot afl-cmin afl-cmin.bash afl-whatsup afl-system-config afl-persistent-config afl-cc
|
||||
MANPAGES=$(foreach p, $(PROGS) $(SH_PROGS), $(p).8) afl-as.8
|
||||
ASAN_OPTIONS=detect_leaks=0
|
||||
|
||||
@ -42,7 +42,7 @@ ARCH = $(shell uname -m)
|
||||
$(info [*] Compiling afl++ for OS $(SYS) on ARCH $(ARCH))
|
||||
|
||||
ifdef NO_SPLICING
|
||||
override CFLAGS += -DNO_SPLICING
|
||||
override CFLAGS_OPT += -DNO_SPLICING
|
||||
endif
|
||||
|
||||
ifdef ASAN_BUILD
|
||||
@ -76,9 +76,9 @@ else
|
||||
endif
|
||||
endif
|
||||
|
||||
ifeq "$(shell echo 'int main() {return 0; }' | $(CC) -fno-move-loop-invariants -fdisable-tree-cunrolli -x c - -o .test 2>/dev/null && echo 1 || echo 0 ; rm -f .test )" "1"
|
||||
SPECIAL_PERFORMANCE += -fno-move-loop-invariants -fdisable-tree-cunrolli
|
||||
endif
|
||||
#ifeq "$(shell echo 'int main() {return 0; }' | $(CC) -fno-move-loop-invariants -fdisable-tree-cunrolli -x c - -o .test 2>/dev/null && echo 1 || echo 0 ; rm -f .test )" "1"
|
||||
# SPECIAL_PERFORMANCE += -fno-move-loop-invariants -fdisable-tree-cunrolli
|
||||
#endif
|
||||
|
||||
#ifeq "$(shell echo 'int main() {return 0; }' | $(CC) $(CFLAGS) -Werror -x c - -march=native -o .test 2>/dev/null && echo 1 || echo 0 ; rm -f .test )" "1"
|
||||
# ifndef SOURCE_DATE_EPOCH
|
||||
@ -92,9 +92,13 @@ ifneq "$(SYS)" "Darwin"
|
||||
# SPECIAL_PERFORMANCE += -march=native
|
||||
#endif
|
||||
# OS X does not like _FORTIFY_SOURCE=2
|
||||
ifndef DEBUG
|
||||
CFLAGS_OPT += -D_FORTIFY_SOURCE=2
|
||||
endif
|
||||
ifndef DEBUG
|
||||
CFLAGS_OPT += -D_FORTIFY_SOURCE=2
|
||||
endif
|
||||
else
|
||||
# On some odd MacOS system configurations, the Xcode sdk path is not set correctly
|
||||
SDK_LD = -L$(shell xcrun --show-sdk-path)/usr/lib
|
||||
LDFLAGS += $(SDK_LD)
|
||||
endif
|
||||
|
||||
ifeq "$(SYS)" "SunOS"
|
||||
@ -115,13 +119,13 @@ endif
|
||||
|
||||
ifdef PROFILING
|
||||
$(info Compiling with profiling information, for analysis: gprof ./afl-fuzz gmon.out > prof.txt)
|
||||
CFLAGS_OPT += -pg -DPROFILING=1
|
||||
LDFLAGS += -pg
|
||||
override CFLAGS_OPT += -pg -DPROFILING=1
|
||||
override LDFLAGS += -pg
|
||||
endif
|
||||
|
||||
ifdef INTROSPECTION
|
||||
$(info Compiling with introspection documentation)
|
||||
CFLAGS_OPT += -DINTROSPECTION=1
|
||||
override CFLAGS_OPT += -DINTROSPECTION=1
|
||||
endif
|
||||
|
||||
ifneq "$(ARCH)" "x86_64"
|
||||
@ -136,40 +140,41 @@ endif
|
||||
|
||||
ifdef DEBUG
|
||||
$(info Compiling DEBUG version of binaries)
|
||||
CFLAGS += -ggdb3 -O0 -Wall -Wextra -Werror
|
||||
override CFLAGS += -ggdb3 -O0 -Wall -Wextra -Werror $(CFLAGS_OPT)
|
||||
else
|
||||
CFLAGS ?= -O3 -funroll-loops $(CFLAGS_OPT)
|
||||
CFLAGS ?= -O2 $(CFLAGS_OPT) # -funroll-loops is slower on modern compilers
|
||||
endif
|
||||
|
||||
override CFLAGS += -g -Wno-pointer-sign -Wno-variadic-macros -Wall -Wextra -Wpointer-arith \
|
||||
-I include/ -DAFL_PATH=\"$(HELPER_PATH)\" \
|
||||
-DBIN_PATH=\"$(BIN_PATH)\" -DDOC_PATH=\"$(DOC_PATH)\"
|
||||
override CFLAGS += -g -Wno-pointer-sign -Wno-variadic-macros -Wall -Wextra -Wno-pointer-arith \
|
||||
-fPIC -I include/ -DAFL_PATH=\"$(HELPER_PATH)\" \
|
||||
-DBIN_PATH=\"$(BIN_PATH)\" -DDOC_PATH=\"$(DOC_PATH)\"
|
||||
# -fstack-protector
|
||||
|
||||
ifeq "$(SYS)" "FreeBSD"
|
||||
override CFLAGS += -I /usr/local/include/
|
||||
LDFLAGS += -L /usr/local/lib/
|
||||
override LDFLAGS += -L /usr/local/lib/
|
||||
endif
|
||||
|
||||
ifeq "$(SYS)" "DragonFly"
|
||||
override CFLAGS += -I /usr/local/include/
|
||||
LDFLAGS += -L /usr/local/lib/
|
||||
override LDFLAGS += -L /usr/local/lib/
|
||||
endif
|
||||
|
||||
ifeq "$(SYS)" "OpenBSD"
|
||||
override CFLAGS += -I /usr/local/include/ -mno-retpoline
|
||||
LDFLAGS += -Wl,-z,notext -L /usr/local/lib/
|
||||
override LDFLAGS += -Wl,-z,notext -L /usr/local/lib/
|
||||
endif
|
||||
|
||||
ifeq "$(SYS)" "NetBSD"
|
||||
override CFLAGS += -I /usr/pkg/include/
|
||||
LDFLAGS += -L /usr/pkg/lib/
|
||||
override LDFLAGS += -L /usr/pkg/lib/
|
||||
endif
|
||||
|
||||
ifeq "$(SYS)" "Haiku"
|
||||
SHMAT_OK=0
|
||||
override CFLAGS += -DUSEMMAP=1 -Wno-error=format -fPIC
|
||||
LDFLAGS += -Wno-deprecated-declarations -lgnu -lnetwork
|
||||
SPECIAL_PERFORMANCE += -DUSEMMAP=1
|
||||
override CFLAGS += -DUSEMMAP=1 -Wno-error=format
|
||||
override LDFLAGS += -Wno-deprecated-declarations -lgnu -lnetwork
|
||||
#SPECIAL_PERFORMANCE += -DUSEMMAP=1
|
||||
endif
|
||||
|
||||
AFL_FUZZ_FILES = $(wildcard src/afl-fuzz*.c)
|
||||
@ -241,25 +246,22 @@ else
|
||||
endif
|
||||
|
||||
ifneq "$(filter Linux GNU%,$(SYS))" ""
|
||||
ifndef DEBUG
|
||||
override CFLAGS += -D_FORTIFY_SOURCE=2
|
||||
endif
|
||||
LDFLAGS += -ldl -lrt -lm
|
||||
override LDFLAGS += -ldl -lrt -lm
|
||||
endif
|
||||
|
||||
ifneq "$(findstring FreeBSD, $(SYS))" ""
|
||||
override CFLAGS += -pthread
|
||||
LDFLAGS += -lpthread
|
||||
override LDFLAGS += -lpthread
|
||||
endif
|
||||
|
||||
ifneq "$(findstring NetBSD, $(SYS))" ""
|
||||
override CFLAGS += -pthread
|
||||
LDFLAGS += -lpthread
|
||||
override LDFLAGS += -lpthread
|
||||
endif
|
||||
|
||||
ifneq "$(findstring OpenBSD, $(SYS))" ""
|
||||
override CFLAGS += -pthread
|
||||
LDFLAGS += -lpthread
|
||||
override LDFLAGS += -lpthread
|
||||
endif
|
||||
|
||||
COMM_HDR = include/alloc-inl.h include/config.h include/debug.h include/types.h
|
||||
@ -310,12 +312,14 @@ all: test_x86 test_shm test_python ready $(PROGS) afl-as llvm gcc_plugin test_bu
|
||||
|
||||
.PHONY: llvm
|
||||
llvm:
|
||||
-$(MAKE) -j -f GNUmakefile.llvm
|
||||
-$(MAKE) -j$(nproc) -f GNUmakefile.llvm
|
||||
@test -e afl-cc || { echo "[-] Compiling afl-cc failed. You seem not to have a working compiler." ; exit 1; }
|
||||
|
||||
.PHONY: gcc_plugin
|
||||
gcc_plugin:
|
||||
ifneq "$(SYS)" "Darwin"
|
||||
-$(MAKE) -f GNUmakefile.gcc_plugin
|
||||
endif
|
||||
|
||||
.PHONY: man
|
||||
man: $(MANPAGES)
|
||||
@ -343,14 +347,15 @@ performance-test: source-only
|
||||
help:
|
||||
@echo "HELP --- the following make targets exist:"
|
||||
@echo "=========================================="
|
||||
@echo "all: just the main afl++ binaries"
|
||||
@echo "binary-only: everything for binary-only fuzzing: qemu_mode, unicorn_mode, libdislocator, libtokencap"
|
||||
@echo "source-only: everything for source code fuzzing: gcc_plugin, libdislocator, libtokencap"
|
||||
@echo "all: the main afl++ binaries and llvm/gcc instrumentation"
|
||||
@echo "binary-only: everything for binary-only fuzzing: frida_mode, nyx_mode, qemu_mode, frida_mode, unicorn_mode, coresight_mode, libdislocator, libtokencap"
|
||||
@echo "source-only: everything for source code fuzzing: nyx_mode, libdislocator, libtokencap"
|
||||
@echo "distrib: everything (for both binary-only and source code fuzzing)"
|
||||
@echo "man: creates simple man pages from the help option of the programs"
|
||||
@echo "install: installs everything you have compiled with the build option above"
|
||||
@echo "clean: cleans everything compiled (not downloads when on a checkout)"
|
||||
@echo "deepclean: cleans everything including downloads"
|
||||
@echo "uninstall: uninstall afl++ from the system"
|
||||
@echo "code-format: format the code, do this before you commit and send a PR please!"
|
||||
@echo "tests: this runs the test framework. It is more catered for the developers, but if you run into problems this helps pinpointing the problem"
|
||||
@echo "unit: perform unit tests (based on cmocka and GNU linker)"
|
||||
@ -362,14 +367,18 @@ help:
|
||||
@echo Known build environment options:
|
||||
@echo "=========================================="
|
||||
@echo STATIC - compile AFL++ static
|
||||
@echo ASAN_BUILD - compiles with memory sanitizer for debug purposes
|
||||
@echo ASAN_BUILD - compiles AFL++ with AddressSanitizer for debug purposes
|
||||
@echo UBSAN_BUILD - compiles AFL++ tools with undefined behaviour sanitizer for debug purposes
|
||||
@echo DEBUG - no optimization, -ggdb3, all warnings and -Werror
|
||||
@echo PROFILING - compile afl-fuzz with profiling information
|
||||
@echo INTROSPECTION - compile afl-fuzz with mutation introspection
|
||||
@echo NO_PYTHON - disable python support
|
||||
@echo NO_SPLICING - disables splicing mutation in afl-fuzz, not recommended for normal fuzzing
|
||||
@echo NO_NYX - disable building nyx mode dependencies
|
||||
@echo "NO_CORESIGHT - disable building coresight (arm64 only)"
|
||||
@echo NO_UNICORN_ARM64 - disable building unicorn on arm64
|
||||
@echo AFL_NO_X86 - if compiling on non-intel/amd platforms
|
||||
@echo "LLVM_CONFIG - if your distro doesn't use the standard name for llvm-config (e.g. Debian)"
|
||||
@echo "LLVM_CONFIG - if your distro doesn't use the standard name for llvm-config (e.g., Debian)"
|
||||
@echo "=========================================="
|
||||
@echo e.g.: make ASAN_BUILD=1
|
||||
|
||||
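Putting the targets and build environment options above together, a few illustrative invocations (the combinations are examples only; all names are taken from the help text above):

make source-only              # instrumentation for source code fuzzing only
make distrib NO_NYX=1         # everything except the nyx mode dependencies
make ASAN_BUILD=1 all         # main binaries built with the sanitizer debug option
make AFL_NO_X86=1 all         # skip the x86 checks on non-intel/amd platforms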
@ -381,7 +390,7 @@ test_x86:
|
||||
@echo "[*] Testing the PATH environment variable..."
|
||||
@test "$${PATH}" != "$${PATH#.:}" && { echo "Please remove current directory '.' from PATH to avoid recursion of 'as', thanks!"; echo; exit 1; } || :
|
||||
@echo "[*] Checking for the ability to compile x86 code..."
|
||||
@echo 'main() { __asm__("xorb %al, %al"); }' | $(CC) $(CFLAGS) -w -x c - -o .test1 || ( echo; echo "Oops, looks like your compiler can't generate x86 code."; echo; echo "Don't panic! You can use the LLVM or QEMU mode, but see docs/INSTALL first."; echo "(To ignore this error, set AFL_NO_X86=1 and try again.)"; echo; exit 1 )
|
||||
@echo 'int main() { __asm__("xorb %al, %al"); }' | $(CC) $(CFLAGS) $(LDFLAGS) -w -x c - -o .test1 || ( echo; echo "Oops, looks like your compiler can't generate x86 code."; echo; echo "Don't panic! You can use the LLVM or QEMU mode, but see docs/INSTALL first."; echo "(To ignore this error, set AFL_NO_X86=1 and try again.)"; echo; exit 1 )
|
||||
@rm -f .test1
|
||||
else
|
||||
test_x86:
|
||||
@ -417,7 +426,7 @@ afl-as: src/afl-as.c include/afl-as.h $(COMM_HDR) | test_x86
|
||||
@ln -sf afl-as as
|
||||
|
||||
src/afl-performance.o : $(COMM_HDR) src/afl-performance.c include/hash.h
|
||||
$(CC) $(CFLAGS) -Iinclude $(SPECIAL_PERFORMANCE) -O3 -fno-unroll-loops -c src/afl-performance.c -o src/afl-performance.o
|
||||
$(CC) $(CFLAGS) $(CFLAGS_OPT) -Iinclude -c src/afl-performance.c -o src/afl-performance.o
|
||||
|
||||
src/afl-common.o : $(COMM_HDR) src/afl-common.c include/common.h
|
||||
$(CC) $(CFLAGS) $(CFLAGS_FLTO) -c src/afl-common.c -o src/afl-common.o
|
||||
@ -525,7 +534,7 @@ code-format:
|
||||
ifndef AFL_NO_X86
|
||||
test_build: afl-cc afl-gcc afl-as afl-showmap
|
||||
@echo "[*] Testing the CC wrapper afl-cc and its instrumentation output..."
|
||||
@unset AFL_MAP_SIZE AFL_USE_UBSAN AFL_USE_CFISAN AFL_USE_LSAN AFL_USE_ASAN AFL_USE_MSAN; ASAN_OPTIONS=detect_leaks=0 AFL_INST_RATIO=100 AFL_PATH=. ./afl-cc test-instr.c -o test-instr 2>&1 || (echo "Oops, afl-cc failed"; exit 1 )
|
||||
@unset AFL_MAP_SIZE AFL_USE_UBSAN AFL_USE_CFISAN AFL_USE_LSAN AFL_USE_ASAN AFL_USE_MSAN; ASAN_OPTIONS=detect_leaks=0 AFL_INST_RATIO=100 AFL_PATH=. ./afl-cc test-instr.c $(LDFLAGS) -o test-instr 2>&1 || (echo "Oops, afl-cc failed"; exit 1 )
|
||||
ASAN_OPTIONS=detect_leaks=0 ./afl-showmap -m none -q -o .test-instr0 ./test-instr < /dev/null
|
||||
echo 1 | ASAN_OPTIONS=detect_leaks=0 ./afl-showmap -m none -q -o .test-instr1 ./test-instr
|
||||
@rm -f test-instr
|
||||
@ -538,7 +547,7 @@ test_build: afl-cc afl-gcc afl-as afl-showmap
|
||||
# echo 1 | ASAN_OPTIONS=detect_leaks=0 ./afl-showmap -m none -q -o .test-instr1 ./test-instr
|
||||
# @rm -f test-instr
|
||||
# @cmp -s .test-instr0 .test-instr1; DR="$$?"; rm -f .test-instr0 .test-instr1; if [ "$$DR" = "0" ]; then echo; echo "Oops, the instrumentation of afl-gcc does not seem to be behaving correctly!"; \
|
||||
# gcc -v 2>&1 | grep -q -- --with-as= && ( echo; echo "Gcc is configured not to use an external assembler with the -B option."; echo "See docs/INSTALL.md section 5 how to build a -B enabled gcc." ) || \
|
||||
# gcc -v 2>&1 | grep -q -- --with-as= && ( echo; echo "Gcc is configured not to use an external assembler with the -B option." ) || \
|
||||
# ( echo; echo "Please post to https://github.com/AFLplusplus/AFLplusplus/issues to troubleshoot the issue." ); echo; exit 0; fi
|
||||
# @echo
|
||||
# @echo "[+] All right, the instrumentation of afl-gcc seems to be working!"
|
||||
@ -561,67 +570,125 @@ all_done: test_build
|
||||
|
||||
.PHONY: clean
|
||||
clean:
|
||||
rm -f $(PROGS) libradamsa.so afl-fuzz-document afl-as as afl-g++ afl-clang afl-clang++ *.o src/*.o *~ a.out core core.[1-9][0-9]* *.stackdump .test .test1 .test2 test-instr .test-instr0 .test-instr1 afl-qemu-trace afl-gcc-fast afl-gcc-pass.so afl-g++-fast ld *.so *.8 test/unittests/*.o test/unittests/unit_maybe_alloc test/unittests/preallocable .afl-* afl-gcc afl-g++ afl-clang afl-clang++ test/unittests/unit_hash test/unittests/unit_rand
|
||||
rm -rf $(PROGS) afl-fuzz-document afl-as as afl-g++ afl-clang afl-clang++ *.o src/*.o *~ a.out core core.[1-9][0-9]* *.stackdump .test .test1 .test2 test-instr .test-instr0 .test-instr1 afl-cs-proxy afl-qemu-trace afl-gcc-fast afl-g++-fast ld *.so *.8 test/unittests/*.o test/unittests/unit_maybe_alloc test/unittests/preallocable .afl-* afl-gcc afl-g++ afl-clang afl-clang++ test/unittests/unit_hash test/unittests/unit_rand *.dSYM lib*.a
|
||||
-$(MAKE) -f GNUmakefile.llvm clean
|
||||
-$(MAKE) -f GNUmakefile.gcc_plugin clean
|
||||
$(MAKE) -C utils/libdislocator clean
|
||||
$(MAKE) -C utils/libtokencap clean
|
||||
$(MAKE) -C utils/aflpp_driver clean
|
||||
$(MAKE) -C utils/afl_network_proxy clean
|
||||
$(MAKE) -C utils/socket_fuzzing clean
|
||||
$(MAKE) -C utils/argv_fuzzing clean
|
||||
$(MAKE) -C qemu_mode/unsigaction clean
|
||||
$(MAKE) -C qemu_mode/libcompcov clean
|
||||
$(MAKE) -C qemu_mode/libqasan clean
|
||||
-$(MAKE) -C utils/libdislocator clean
|
||||
-$(MAKE) -C utils/libtokencap clean
|
||||
-$(MAKE) -C utils/aflpp_driver clean
|
||||
-$(MAKE) -C utils/afl_network_proxy clean
|
||||
-$(MAKE) -C utils/socket_fuzzing clean
|
||||
-$(MAKE) -C utils/argv_fuzzing clean
|
||||
-$(MAKE) -C utils/plot_ui clean
|
||||
-$(MAKE) -C qemu_mode/unsigaction clean
|
||||
-$(MAKE) -C qemu_mode/libcompcov clean
|
||||
-$(MAKE) -C qemu_mode/libqasan clean
|
||||
-$(MAKE) -C frida_mode clean
|
||||
rm -rf nyx_mode/packer/linux_initramfs/init.cpio.gz nyx_mode/libnyx/libnyx/target/release/* nyx_mode/QEMU-Nyx/x86_64-softmmu/qemu-system-x86_64
|
||||
ifeq "$(IN_REPO)" "1"
|
||||
-test -e coresight_mode/coresight-trace/Makefile && $(MAKE) -C coresight_mode/coresight-trace clean || true
|
||||
-test -e qemu_mode/qemuafl/Makefile && $(MAKE) -C qemu_mode/qemuafl clean || true
|
||||
test -e unicorn_mode/unicornafl/Makefile && $(MAKE) -C unicorn_mode/unicornafl clean || true
|
||||
-test -e unicorn_mode/unicornafl/Makefile && $(MAKE) -C unicorn_mode/unicornafl clean || true
|
||||
-test -e nyx_mode/QEMU-Nyx/Makefile && $(MAKE) -C nyx_mode/QEMU-Nyx clean || true
|
||||
else
|
||||
rm -rf coresight_mode/coresight-trace
|
||||
rm -rf qemu_mode/qemuafl
|
||||
rm -rf unicorn_mode/unicornafl
|
||||
endif
|
||||
|
||||
.PHONY: deepclean
|
||||
deepclean: clean
|
||||
rm -rf coresight_mode/coresight-trace
|
||||
rm -rf unicorn_mode/unicornafl
|
||||
rm -rf qemu_mode/qemuafl
|
||||
rm -rf nyx_mode/libnyx nyx_mode/packer nyx_mode/QEMU-Nyx
|
||||
ifeq "$(IN_REPO)" "1"
|
||||
# NEVER EVER ACTIVATE THAT!!!!! git reset --hard >/dev/null 2>&1 || true
|
||||
git checkout coresight_mode/coresight-trace
|
||||
git checkout unicorn_mode/unicornafl
|
||||
git checkout qemu_mode/qemuafl
|
||||
git checkout nyx_mode/libnyx
|
||||
git checkout nyx_mode/packer
|
||||
git checkout nyx_mode/QEMU-Nyx
|
||||
endif
|
||||
|
||||
.PHONY: distrib
|
||||
distrib: all
|
||||
-$(MAKE) -j -f GNUmakefile.llvm
|
||||
-$(MAKE) -j$(nproc) -f GNUmakefile.llvm
|
||||
ifneq "$(SYS)" "Darwin"
|
||||
-$(MAKE) -f GNUmakefile.gcc_plugin
|
||||
$(MAKE) -C utils/libdislocator
|
||||
$(MAKE) -C utils/libtokencap
|
||||
$(MAKE) -C utils/afl_network_proxy
|
||||
$(MAKE) -C utils/socket_fuzzing
|
||||
$(MAKE) -C utils/argv_fuzzing
|
||||
endif
|
||||
-$(MAKE) -C utils/libdislocator
|
||||
-$(MAKE) -C utils/libtokencap
|
||||
-$(MAKE) -C utils/afl_network_proxy
|
||||
-$(MAKE) -C utils/socket_fuzzing
|
||||
-$(MAKE) -C utils/argv_fuzzing
|
||||
# -$(MAKE) -C utils/plot_ui
|
||||
-$(MAKE) -C frida_mode
|
||||
ifneq "$(SYS)" "Darwin"
|
||||
ifeq "$(ARCH)" "aarch64"
|
||||
ifndef NO_CORESIGHT
|
||||
-$(MAKE) -C coresight_mode
|
||||
endif
|
||||
endif
|
||||
ifeq "$(SYS)" "Linux"
|
||||
ifndef NO_NYX
|
||||
-cd nyx_mode && ./build_nyx_support.sh
|
||||
endif
|
||||
endif
|
||||
-cd qemu_mode && sh ./build_qemu_support.sh
|
||||
ifeq "$(ARCH)" "aarch64"
|
||||
ifndef NO_UNICORN_ARM64
|
||||
-cd unicorn_mode && unset CFLAGS && sh ./build_unicorn_support.sh
|
||||
endif
|
||||
else
|
||||
-cd unicorn_mode && unset CFLAGS && sh ./build_unicorn_support.sh
|
||||
endif
|
||||
endif
|
||||
|
||||
.PHONY: binary-only
|
||||
binary-only: test_shm test_python ready $(PROGS)
|
||||
$(MAKE) -C utils/libdislocator
|
||||
$(MAKE) -C utils/libtokencap
|
||||
$(MAKE) -C utils/afl_network_proxy
|
||||
$(MAKE) -C utils/socket_fuzzing
|
||||
$(MAKE) -C utils/argv_fuzzing
|
||||
-$(MAKE) -C utils/libdislocator
|
||||
-$(MAKE) -C utils/libtokencap
|
||||
-$(MAKE) -C utils/afl_network_proxy
|
||||
-$(MAKE) -C utils/socket_fuzzing
|
||||
-$(MAKE) -C utils/argv_fuzzing
|
||||
# -$(MAKE) -C utils/plot_ui
|
||||
-$(MAKE) -C frida_mode
|
||||
ifneq "$(SYS)" "Darwin"
|
||||
ifeq "$(ARCH)" "aarch64"
|
||||
ifndef NO_CORESIGHT
|
||||
-$(MAKE) -C coresight_mode
|
||||
endif
|
||||
endif
|
||||
ifeq "$(SYS)" "Linux"
|
||||
ifndef NO_NYX
|
||||
-cd nyx_mode && ./build_nyx_support.sh
|
||||
endif
|
||||
endif
|
||||
-cd qemu_mode && sh ./build_qemu_support.sh
|
||||
ifeq "$(ARCH)" "aarch64"
|
||||
ifndef NO_UNICORN_ARM64
|
||||
-cd unicorn_mode && unset CFLAGS && sh ./build_unicorn_support.sh
|
||||
endif
|
||||
else
|
||||
-cd unicorn_mode && unset CFLAGS && sh ./build_unicorn_support.sh
|
||||
endif
|
||||
endif
|
||||
|
||||
.PHONY: source-only
|
||||
source-only: all
|
||||
-$(MAKE) -j -f GNUmakefile.llvm
|
||||
-$(MAKE) -j$(nproc) -f GNUmakefile.llvm
|
||||
ifneq "$(SYS)" "Darwin"
|
||||
-$(MAKE) -f GNUmakefile.gcc_plugin
|
||||
$(MAKE) -C utils/libdislocator
|
||||
$(MAKE) -C utils/libtokencap
|
||||
endif
|
||||
-$(MAKE) -C utils/libdislocator
|
||||
-$(MAKE) -C utils/libtokencap
|
||||
# -$(MAKE) -C utils/plot_ui
|
||||
ifeq "$(SYS)" "Linux"
|
||||
ifndef NO_NYX
|
||||
-cd nyx_mode && ./build_nyx_support.sh
|
||||
endif
|
||||
endif
|
||||
|
||||
%.8: %
|
||||
@echo .TH $* 8 $(BUILD_DATE) "afl++" > $@
|
||||
@ -648,8 +715,10 @@ install: all $(MANPAGES)
|
||||
@rm -f $${DESTDIR}$(BIN_PATH)/afl-plot.sh
|
||||
@rm -f $${DESTDIR}$(BIN_PATH)/afl-as
|
||||
@rm -f $${DESTDIR}$(HELPER_PATH)/afl-llvm-rt.o $${DESTDIR}$(HELPER_PATH)/afl-llvm-rt-32.o $${DESTDIR}$(HELPER_PATH)/afl-llvm-rt-64.o $${DESTDIR}$(HELPER_PATH)/afl-gcc-rt.o
|
||||
@for i in afl-llvm-dict2file.so afl-llvm-lto-instrumentlist.so afl-llvm-pass.so cmplog-instructions-pass.so cmplog-routines-pass.so cmplog-switches-pass.so compare-transform-pass.so libcompcov.so libdislocator.so libnyx.so libqasan.so libtokencap.so SanitizerCoverageLTO.so SanitizerCoveragePCGUARD.so split-compares-pass.so split-switches-pass.so; do echo rm -fv $${DESTDIR}$(HELPER_PATH)/$${i}; done
|
||||
install -m 755 $(PROGS) $(SH_PROGS) $${DESTDIR}$(BIN_PATH)
|
||||
@if [ -f afl-qemu-trace ]; then install -m 755 afl-qemu-trace $${DESTDIR}$(BIN_PATH); fi
|
||||
@if [ -f utils/plot_ui/afl-plot-ui ]; then install -m 755 utils/plot_ui/afl-plot-ui $${DESTDIR}$(BIN_PATH); fi
|
||||
@if [ -f libdislocator.so ]; then set -e; install -m 755 libdislocator.so $${DESTDIR}$(HELPER_PATH); fi
|
||||
@if [ -f libtokencap.so ]; then set -e; install -m 755 libtokencap.so $${DESTDIR}$(HELPER_PATH); fi
|
||||
@if [ -f libcompcov.so ]; then set -e; install -m 755 libcompcov.so $${DESTDIR}$(HELPER_PATH); fi
|
||||
@ -658,11 +727,14 @@ install: all $(MANPAGES)
|
||||
@if [ -f socketfuzz32.so -o -f socketfuzz64.so ]; then $(MAKE) -C utils/socket_fuzzing install; fi
|
||||
@if [ -f argvfuzz32.so -o -f argvfuzz64.so ]; then $(MAKE) -C utils/argv_fuzzing install; fi
|
||||
@if [ -f afl-frida-trace.so ]; then install -m 755 afl-frida-trace.so $${DESTDIR}$(HELPER_PATH); fi
|
||||
@if [ -f libnyx.so ]; then install -m 755 libnyx.so $${DESTDIR}$(HELPER_PATH); fi
|
||||
@if [ -f utils/afl_network_proxy/afl-network-server ]; then $(MAKE) -C utils/afl_network_proxy install; fi
|
||||
@if [ -f utils/aflpp_driver/libAFLDriver.a ]; then set -e; install -m 644 utils/aflpp_driver/libAFLDriver.a $${DESTDIR}$(HELPER_PATH); fi
|
||||
@if [ -f utils/aflpp_driver/libAFLQemuDriver.a ]; then set -e; install -m 644 utils/aflpp_driver/libAFLQemuDriver.a $${DESTDIR}$(HELPER_PATH); fi
|
||||
-$(MAKE) -f GNUmakefile.llvm install
|
||||
ifneq "$(SYS)" "Darwin"
|
||||
-$(MAKE) -f GNUmakefile.gcc_plugin install
|
||||
endif
|
||||
ln -sf afl-cc $${DESTDIR}$(BIN_PATH)/afl-gcc
|
||||
ln -sf afl-cc $${DESTDIR}$(BIN_PATH)/afl-g++
|
||||
ln -sf afl-cc $${DESTDIR}$(BIN_PATH)/afl-clang
|
||||
@ -674,3 +746,16 @@ install: all $(MANPAGES)
|
||||
install -m 644 docs/*.md $${DESTDIR}$(DOC_PATH)
|
||||
cp -r testcases/ $${DESTDIR}$(MISC_PATH)
|
||||
cp -r dictionaries/ $${DESTDIR}$(MISC_PATH)
|
||||
|
||||
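The install recipe above honors DESTDIR and the PREFIX-derived BIN_PATH/HELPER_PATH/DOC_PATH locations; an illustrative staged install (paths are placeholders):

make install PREFIX=/opt/aflplusplus DESTDIR=/tmp/afl-stage

Note that PREFIX also ends up in paths compiled into the binaries (AFL_PATH, DOC_PATH), so it is usually set for the whole build rather than only for the install step.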
.PHONY: uninstall
|
||||
uninstall:
|
||||
-cd $${DESTDIR}$(BIN_PATH) && rm -f $(PROGS) $(SH_PROGS) afl-cs-proxy afl-qemu-trace afl-plot-ui afl-fuzz-document afl-network-server afl-g* afl-plot.sh afl-as afl-ld-lto afl-c* afl-lto*
|
||||
-cd $${DESTDIR}$(HELPER_PATH) && rm -f afl-g*.*o afl-llvm-*.*o afl-compiler-*.*o libdislocator.so libtokencap.so libcompcov.so libqasan.so afl-frida-trace.so libnyx.so socketfuzz*.so argvfuzz*.so libAFLDriver.a libAFLQemuDriver.a as afl-as SanitizerCoverage*.so compare-transform-pass.so cmplog-*-pass.so split-*-pass.so dynamic_list.txt
|
||||
-rm -rf $${DESTDIR}$(MISC_PATH)/testcases $${DESTDIR}$(MISC_PATH)/dictionaries
|
||||
-sh -c "ls docs/*.md | sed 's|^docs/|$${DESTDIR}$(DOC_PATH)/|' | xargs rm -f"
|
||||
-cd $${DESTDIR}$(MAN_PATH) && rm -f $(MANPAGES)
|
||||
-rmdir $${DESTDIR}$(BIN_PATH) 2>/dev/null
|
||||
-rmdir $${DESTDIR}$(HELPER_PATH) 2>/dev/null
|
||||
-rmdir $${DESTDIR}$(MISC_PATH) 2>/dev/null
|
||||
-rmdir $${DESTDIR}$(DOC_PATH) 2>/dev/null
|
||||
-rmdir $${DESTDIR}$(MAN_PATH) 2>/dev/null
|
||||
|
@ -11,13 +11,13 @@
|
||||
# from Laszlo Szekeres.
|
||||
#
|
||||
# Copyright 2015 Google Inc. All rights reserved.
|
||||
# Copyright 2019-2020 AFLplusplus Project. All rights reserved.
|
||||
# Copyright 2019-2022 AFLplusplus Project. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at:
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
# https://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
#TEST_MMAP=1
|
||||
PREFIX ?= /usr/local
|
||||
@ -100,7 +100,9 @@ ifeq "$(SYS)" "SunOS"
|
||||
endif
|
||||
|
||||
|
||||
PROGS = ./afl-gcc-pass.so ./afl-compiler-rt.o ./afl-compiler-rt-32.o ./afl-compiler-rt-64.o
|
||||
PASSES = ./afl-gcc-pass.so ./afl-gcc-cmplog-pass.so ./afl-gcc-cmptrs-pass.so
|
||||
|
||||
PROGS = $(PASSES) ./afl-compiler-rt.o ./afl-compiler-rt-32.o ./afl-compiler-rt-64.o
|
||||
|
||||
.PHONY: all
|
||||
all: test_shm test_deps $(PROGS) test_build all_done
|
||||
@ -135,11 +137,13 @@ afl-common.o: ./src/afl-common.c
|
||||
|
||||
./afl-compiler-rt-32.o: instrumentation/afl-compiler-rt.o.c
|
||||
@printf "[*] Building 32-bit variant of the runtime (-m32)... "
|
||||
@$(CC) $(CFLAGS_SAFE) $(CPPFLAGS) -O3 -Wno-unused-result -m32 -fPIC -c $< -o $@ 2>/dev/null; if [ "$$?" = "0" ]; then echo "success!"; ln -sf afl-compiler-rt-32.o afl-llvm-rt-32.o; else echo "failed (that's fine)"; fi
|
||||
@$(CC) $(CFLAGS_SAFE) $(CPPFLAGS) -O3 -Wno-unused-result -m32 -fPIC -c $< -o $@ 2>/dev/null; if [ "$$?" = "0" ]; then echo "success!"; else echo "failed (that's fine)"; fi
|
||||
|
||||
./afl-compiler-rt-64.o: instrumentation/afl-compiler-rt.o.c
|
||||
@printf "[*] Building 64-bit variant of the runtime (-m64)... "
|
||||
@$(CC) $(CFLAGS_SAFE) $(CPPFLAGS) -O3 -Wno-unused-result -m64 -fPIC -c $< -o $@ 2>/dev/null; if [ "$$?" = "0" ]; then echo "success!"; ln -sf afl-compiler-rt-64.o afl-llvm-rt-64.o; else echo "failed (that's fine)"; fi
|
||||
@$(CC) $(CFLAGS_SAFE) $(CPPFLAGS) -O3 -Wno-unused-result -m64 -fPIC -c $< -o $@ 2>/dev/null; if [ "$$?" = "0" ]; then echo "success!"; else echo "failed (that's fine)"; fi
|
||||
|
||||
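The -m32 runtime object above is allowed to fail quietly, which typically happens when no 32-bit compiler/libc support is installed; a sketch of the usual remedy on Debian-like systems (package names are an assumption about the distribution):

# install 32-bit build support so afl-compiler-rt-32.o can be built
sudo apt-get install gcc-multilib g++-multilib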
$(PASSES): instrumentation/afl-gcc-common.h
|
||||
|
||||
./afl-gcc-pass.so: instrumentation/afl-gcc-pass.so.cc | test_deps
|
||||
$(CXX) $(CXXEFLAGS) $(PLUGIN_FLAGS) -shared $< -o $@
|
||||
@ -148,6 +152,12 @@ afl-common.o: ./src/afl-common.c
|
||||
ln -sf afl-cc.8 afl-gcc-fast.8
|
||||
ln -sf afl-cc.8 afl-g++-fast.8
|
||||
|
||||
./afl-gcc-cmplog-pass.so: instrumentation/afl-gcc-cmplog-pass.so.cc | test_deps
|
||||
$(CXX) $(CXXEFLAGS) $(PLUGIN_FLAGS) -shared $< -o $@
|
||||
|
||||
./afl-gcc-cmptrs-pass.so: instrumentation/afl-gcc-cmptrs-pass.so.cc | test_deps
|
||||
$(CXX) $(CXXEFLAGS) $(PLUGIN_FLAGS) -shared $< -o $@
|
||||
|
||||
.PHONY: test_build
|
||||
test_build: $(PROGS)
|
||||
@echo "[*] Testing the CC wrapper and instrumentation output..."
|
||||
@ -190,6 +200,8 @@ install: all
|
||||
ln -sf afl-c++ $${DESTDIR}$(BIN_PATH)/afl-g++-fast
|
||||
ln -sf afl-compiler-rt.o $${DESTDIR}$(HELPER_PATH)/afl-gcc-rt.o
|
||||
install -m 755 ./afl-gcc-pass.so $${DESTDIR}$(HELPER_PATH)
|
||||
install -m 755 ./afl-gcc-cmplog-pass.so $${DESTDIR}$(HELPER_PATH)
|
||||
install -m 755 ./afl-gcc-cmptrs-pass.so $${DESTDIR}$(HELPER_PATH)
|
||||
install -m 644 -T instrumentation/README.gcc_plugin.md $${DESTDIR}$(DOC_PATH)/README.gcc_plugin.md
|
||||
|
||||
.PHONY: clean
|
||||
|
@ -12,7 +12,7 @@
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at:
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
# https://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
|
||||
# For Heiko:
|
||||
@ -36,7 +36,7 @@ ifeq "$(SYS)" "OpenBSD"
|
||||
LLVM_CONFIG ?= $(BIN_PATH)/llvm-config
|
||||
HAS_OPT = $(shell test -x $(BIN_PATH)/opt && echo 0 || echo 1)
|
||||
ifeq "$(HAS_OPT)" "1"
|
||||
$(warning llvm_mode needs a complete llvm installation (versions 6.0 up to 12) -> e.g. "pkg_add llvm-7.0.1p9")
|
||||
$(warning llvm_mode needs a complete llvm installation (versions 6.0 up to 13) -> e.g. "pkg_add llvm-7.0.1p9")
|
||||
endif
|
||||
else
|
||||
LLVM_CONFIG ?= llvm-config
|
||||
@ -46,14 +46,14 @@ LLVMVER = $(shell $(LLVM_CONFIG) --version 2>/dev/null | sed 's/git//' | sed 's
|
||||
LLVM_MAJOR = $(shell $(LLVM_CONFIG) --version 2>/dev/null | sed 's/\..*//' )
|
||||
LLVM_MINOR = $(shell $(LLVM_CONFIG) --version 2>/dev/null | sed 's/.*\.//' | sed 's/git//' | sed 's/svn//' | sed 's/ .*//' )
|
||||
LLVM_UNSUPPORTED = $(shell $(LLVM_CONFIG) --version 2>/dev/null | egrep -q '^[0-2]\.|^3.[0-7]\.' && echo 1 || echo 0 )
|
||||
LLVM_TOO_NEW = $(shell $(LLVM_CONFIG) --version 2>/dev/null | egrep -q '^1[3-9]' && echo 1 || echo 0 )
|
||||
LLVM_TOO_NEW = $(shell $(LLVM_CONFIG) --version 2>/dev/null | egrep -q '^1[5-9]' && echo 1 || echo 0 )
|
||||
LLVM_NEW_API = $(shell $(LLVM_CONFIG) --version 2>/dev/null | egrep -q '^1[0-9]' && echo 1 || echo 0 )
|
||||
LLVM_10_OK = $(shell $(LLVM_CONFIG) --version 2>/dev/null | egrep -q '^1[1-9]|^10\.[1-9]|^10\.0.[1-9]' && echo 1 || echo 0 )
|
||||
LLVM_HAVE_LTO = $(shell $(LLVM_CONFIG) --version 2>/dev/null | egrep -q '^1[1-9]' && echo 1 || echo 0 )
|
||||
LLVM_BINDIR = $(shell $(LLVM_CONFIG) --bindir 2>/dev/null)
|
||||
LLVM_LIBDIR = $(shell $(LLVM_CONFIG) --libdir 2>/dev/null)
|
||||
LLVM_STDCXX = gnu++11
|
||||
LLVM_APPLE_XCODE = $(shell clang -v 2>&1 | grep -q Apple && echo 1 || echo 0)
|
||||
LLVM_APPLE_XCODE = $(shell $(CC) -v 2>&1 | grep -q Apple && echo 1 || echo 0)
|
||||
LLVM_LTO = 0
|
||||
|
||||
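The version probing above goes through LLVM_CONFIG, so a non-default LLVM installation can be selected the same way the main makefile's help text describes; for example, with Debian-style suffixed binary names (the version number is only an example):

make LLVM_CONFIG=llvm-config-14 -f GNUmakefile.llvm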
ifeq "$(LLVMVER)" ""
|
||||
@ -86,6 +86,12 @@ ifeq "$(LLVM_TOO_OLD)" "1"
|
||||
$(shell sleep 1)
|
||||
endif
|
||||
|
||||
ifeq "$(LLVM_MAJOR)" "15"
|
||||
$(info [!] llvm_mode detected llvm 15, which is currently broken for LTO plugins.)
|
||||
LLVM_LTO = 0
|
||||
LLVM_HAVE_LTO = 0
|
||||
endif
|
||||
|
||||
ifeq "$(LLVM_HAVE_LTO)" "1"
|
||||
$(info [+] llvm_mode detected llvm 11+, enabling afl-lto LTO implementation)
|
||||
LLVM_LTO = 1
|
||||
@ -93,7 +99,7 @@ ifeq "$(LLVM_HAVE_LTO)" "1"
|
||||
endif
|
||||
|
||||
ifeq "$(LLVM_LTO)" "0"
|
||||
$(info [+] llvm_mode detected llvm < 11, afl-lto LTO will not be build.)
|
||||
$(info [+] llvm_mode detected llvm < 11 or llvm 15, afl-lto LTO will not be built.)
|
||||
endif
|
||||
|
||||
ifeq "$(LLVM_APPLE_XCODE)" "1"
|
||||
@ -279,6 +285,8 @@ CLANG_LFL = `$(LLVM_CONFIG) --ldflags` $(LDFLAGS)
|
||||
# User teor2345 reports that this is required to make things work on MacOS X.
|
||||
ifeq "$(SYS)" "Darwin"
|
||||
CLANG_LFL += -Wl,-flat_namespace -Wl,-undefined,suppress
|
||||
override LLVM_HAVE_LTO := 0
|
||||
override LLVM_LTO := 0
|
||||
else
|
||||
CLANG_CPPFL += -Wl,-znodelete
|
||||
endif
|
||||
@ -306,7 +314,7 @@ ifeq "$(TEST_MMAP)" "1"
|
||||
endif
|
||||
|
||||
PROGS_ALWAYS = ./afl-cc ./afl-compiler-rt.o ./afl-compiler-rt-32.o ./afl-compiler-rt-64.o
|
||||
PROGS = $(PROGS_ALWAYS) ./afl-llvm-pass.so ./SanitizerCoveragePCGUARD.so ./split-compares-pass.so ./split-switches-pass.so ./cmplog-routines-pass.so ./cmplog-instructions-pass.so ./cmplog-switches-pass.so ./afl-llvm-dict2file.so ./compare-transform-pass.so ./afl-ld-lto ./afl-llvm-lto-instrumentlist.so ./afl-llvm-lto-instrumentation.so ./SanitizerCoverageLTO.so
|
||||
PROGS = $(PROGS_ALWAYS) ./afl-llvm-pass.so ./SanitizerCoveragePCGUARD.so ./split-compares-pass.so ./split-switches-pass.so ./cmplog-routines-pass.so ./cmplog-instructions-pass.so ./cmplog-switches-pass.so ./afl-llvm-dict2file.so ./compare-transform-pass.so ./afl-ld-lto ./afl-llvm-lto-instrumentlist.so ./SanitizerCoverageLTO.so
|
||||
|
||||
# If prerequisites are not given, warn, do not build anything, and exit with code 0
|
||||
ifeq "$(LLVMVER)" ""
|
||||
@ -388,11 +396,11 @@ instrumentation/afl-llvm-common.o: instrumentation/afl-llvm-common.cc instrument
|
||||
ifeq "$(LLVM_MIN_4_0_1)" "0"
|
||||
$(info [!] N-gram branch coverage instrumentation is not available for llvm version $(LLVMVER))
|
||||
endif
|
||||
$(CXX) $(CLANG_CPPFL) -DLLVMInsTrim_EXPORTS -fno-rtti -fPIC -std=$(LLVM_STDCXX) -shared $< -o $@ $(CLANG_LFL) instrumentation/afl-llvm-common.o
|
||||
$(CXX) $(CLANG_CPPFL) -Wdeprecated -fno-rtti -fPIC -std=$(LLVM_STDCXX) -shared $< -o $@ $(CLANG_LFL) instrumentation/afl-llvm-common.o
|
||||
|
||||
./SanitizerCoveragePCGUARD.so: instrumentation/SanitizerCoveragePCGUARD.so.cc instrumentation/afl-llvm-common.o | test_deps
|
||||
ifeq "$(LLVM_10_OK)" "1"
|
||||
-$(CXX) $(CLANG_CPPFL) -fno-rtti -fPIC -std=$(LLVM_STDCXX) -shared $< -o $@ $(CLANG_LFL) instrumentation/afl-llvm-common.o
|
||||
-$(CXX) $(CLANG_CPPFL) -fno-rtti -fPIC -std=$(LLVM_STDCXX) -shared $< -o $@ $(CLANG_LFL) -Wno-deprecated-copy-dtor -Wdeprecated instrumentation/afl-llvm-common.o
|
||||
endif
|
||||
|
||||
./afl-llvm-lto-instrumentlist.so: instrumentation/afl-llvm-lto-instrumentlist.so.cc instrumentation/afl-llvm-common.o
|
||||
@ -405,12 +413,7 @@ ifeq "$(LLVM_LTO)" "1"
|
||||
$(CC) $(CFLAGS) $(CPPFLAGS) $< -o $@
|
||||
endif
|
||||
|
||||
./SanitizerCoverageLTO.so: instrumentation/SanitizerCoverageLTO.so.cc
|
||||
ifeq "$(LLVM_LTO)" "1"
|
||||
$(CXX) $(CLANG_CPPFL) -Wno-writable-strings -fno-rtti -fPIC -std=$(LLVM_STDCXX) -shared $< -o $@ $(CLANG_LFL) instrumentation/afl-llvm-common.o
|
||||
endif
|
||||
|
||||
./afl-llvm-lto-instrumentation.so: instrumentation/afl-llvm-lto-instrumentation.so.cc instrumentation/afl-llvm-common.o
|
||||
./SanitizerCoverageLTO.so: instrumentation/SanitizerCoverageLTO.so.cc instrumentation/afl-llvm-common.o
|
||||
ifeq "$(LLVM_LTO)" "1"
|
||||
$(CXX) $(CLANG_CPPFL) -Wno-writable-strings -fno-rtti -fPIC -std=$(LLVM_STDCXX) -shared $< -o $@ $(CLANG_LFL) instrumentation/afl-llvm-common.o
|
||||
$(CLANG_BIN) $(CFLAGS_SAFE) $(CPPFLAGS) -Wno-unused-result -O0 $(AFL_CLANG_FLTO) -fPIC -c instrumentation/afl-llvm-rt-lto.o.c -o ./afl-llvm-rt-lto.o
|
||||
@ -450,11 +453,11 @@ document:
|
||||
|
||||
./afl-compiler-rt-32.o: instrumentation/afl-compiler-rt.o.c
|
||||
@printf "[*] Building 32-bit variant of the runtime (-m32)... "
|
||||
@$(CC) $(CLANG_CFL) $(CFLAGS_SAFE) $(CPPFLAGS) -O3 -Wno-unused-result -m32 -fPIC -c $< -o $@ 2>/dev/null; if [ "$$?" = "0" ]; then echo "success!"; ln -sf afl-compiler-rt-32.o afl-llvm-rt-32.o; else echo "failed (that's fine)"; fi
|
||||
@$(CC) $(CLANG_CFL) $(CFLAGS_SAFE) $(CPPFLAGS) -O3 -Wno-unused-result -m32 -fPIC -c $< -o $@ 2>/dev/null; if [ "$$?" = "0" ]; then echo "success!"; else echo "failed (that's fine)"; fi
|
||||
|
||||
./afl-compiler-rt-64.o: instrumentation/afl-compiler-rt.o.c
|
||||
@printf "[*] Building 64-bit variant of the runtime (-m64)... "
|
||||
@$(CC) $(CLANG_CFL) $(CFLAGS_SAFE) $(CPPFLAGS) -O3 -Wno-unused-result -m64 -fPIC -c $< -o $@ 2>/dev/null; if [ "$$?" = "0" ]; then echo "success!"; ln -sf afl-compiler-rt-64.o afl-llvm-rt-64.o; else echo "failed (that's fine)"; fi
|
||||
@$(CC) $(CLANG_CFL) $(CFLAGS_SAFE) $(CPPFLAGS) -O3 -Wno-unused-result -m64 -fPIC -c $< -o $@ 2>/dev/null; if [ "$$?" = "0" ]; then echo "success!"; else echo "failed (that's fine)"; fi
|
||||
|
||||
.PHONY: test_build
|
||||
test_build: $(PROGS)
|
||||
@ -477,11 +480,11 @@ install: all
|
||||
@install -d -m 755 $${DESTDIR}$(BIN_PATH) $${DESTDIR}$(HELPER_PATH) $${DESTDIR}$(DOC_PATH) $${DESTDIR}$(MISC_PATH)
|
||||
@if [ -f ./afl-cc ]; then set -e; install -m 755 ./afl-cc $${DESTDIR}$(BIN_PATH); ln -sf afl-cc $${DESTDIR}$(BIN_PATH)/afl-c++; fi
|
||||
@rm -f $${DESTDIR}$(HELPER_PATH)/afl-llvm-rt*.o $${DESTDIR}$(HELPER_PATH)/afl-gcc-rt*.o
|
||||
@if [ -f ./afl-compiler-rt.o ]; then set -e; install -m 755 ./afl-compiler-rt.o $${DESTDIR}$(HELPER_PATH); ln -sf afl-compiler-rt.o $${DESTDIR}$(HELPER_PATH)/afl-llvm-rt.o ;fi
|
||||
@if [ -f ./afl-lto ]; then set -e; ln -sf afl-cc $${DESTDIR}$(BIN_PATH)/afl-lto; ln -sf afl-cc $${DESTDIR}$(BIN_PATH)/afl-lto++; ln -sf afl-cc $${DESTDIR}$(BIN_PATH)/afl-clang-lto; ln -sf afl-cc $${DESTDIR}$(BIN_PATH)/afl-clang-lto++; install -m 755 ./afl-llvm-lto-instrumentation.so ./afl-llvm-rt-lto*.o ./afl-llvm-lto-instrumentlist.so $${DESTDIR}$(HELPER_PATH); fi
|
||||
@if [ -f ./afl-compiler-rt.o ]; then set -e; install -m 755 ./afl-compiler-rt.o $${DESTDIR}$(HELPER_PATH); fi
|
||||
@if [ -f ./afl-lto ]; then set -e; ln -sf afl-cc $${DESTDIR}$(BIN_PATH)/afl-lto; ln -sf afl-cc $${DESTDIR}$(BIN_PATH)/afl-lto++; ln -sf afl-cc $${DESTDIR}$(BIN_PATH)/afl-clang-lto; ln -sf afl-cc $${DESTDIR}$(BIN_PATH)/afl-clang-lto++; install -m 755 ./afl-llvm-rt-lto*.o ./afl-llvm-lto-instrumentlist.so $${DESTDIR}$(HELPER_PATH); fi
|
||||
@if [ -f ./afl-ld-lto ]; then set -e; install -m 755 ./afl-ld-lto $${DESTDIR}$(BIN_PATH); fi
|
||||
@if [ -f ./afl-compiler-rt-32.o ]; then set -e; install -m 755 ./afl-compiler-rt-32.o $${DESTDIR}$(HELPER_PATH); ln -sf afl-compiler-rt-32.o $${DESTDIR}$(HELPER_PATH)/afl-llvm-rt-32.o ;fi
|
||||
@if [ -f ./afl-compiler-rt-64.o ]; then set -e; install -m 755 ./afl-compiler-rt-64.o $${DESTDIR}$(HELPER_PATH); ln -sf afl-compiler-rt-64.o $${DESTDIR}$(HELPER_PATH)/afl-llvm-rt-64.o ; fi
|
||||
@if [ -f ./afl-compiler-rt-32.o ]; then set -e; install -m 755 ./afl-compiler-rt-32.o $${DESTDIR}$(HELPER_PATH); fi
|
||||
@if [ -f ./afl-compiler-rt-64.o ]; then set -e; install -m 755 ./afl-compiler-rt-64.o $${DESTDIR}$(HELPER_PATH); fi
|
||||
@if [ -f ./compare-transform-pass.so ]; then set -e; install -m 755 ./*.so $${DESTDIR}$(HELPER_PATH); fi
|
||||
@if [ -f ./compare-transform-pass.so ]; then set -e; ln -sf afl-cc $${DESTDIR}$(BIN_PATH)/afl-clang-fast ; ln -sf ./afl-c++ $${DESTDIR}$(BIN_PATH)/afl-clang-fast++ ; ln -sf afl-cc $${DESTDIR}$(BIN_PATH)/afl-clang ; ln -sf ./afl-c++ $${DESTDIR}$(BIN_PATH)/afl-clang++ ; fi
|
||||
@if [ -f ./SanitizerCoverageLTO.so ]; then set -e; ln -sf afl-cc $${DESTDIR}$(BIN_PATH)/afl-clang-lto ; ln -sf ./afl-c++ $${DESTDIR}$(BIN_PATH)/afl-clang-lto++ ; fi
|
||||
@ -523,4 +526,4 @@ endif
|
||||
.PHONY: clean
|
||||
clean:
|
||||
rm -f *.o *.so *~ a.out core core.[1-9][0-9]* .test2 test-instr .test-instr0 .test-instr1 *.dwo
|
||||
rm -f $(PROGS) afl-common.o ./afl-c++ ./afl-lto ./afl-lto++ ./afl-clang-lto* ./afl-clang-fast* ./afl-clang*.8 ./ld ./afl-ld ./afl-llvm-rt*.o instrumentation/*.o
|
||||
rm -f $(PROGS) afl-common.o ./afl-c++ ./afl-lto ./afl-lto++ ./afl-clang-lto* ./afl-clang-fast* ./afl-clang*.8 ./ld ./afl-ld ./afl-compiler-rt*.o ./afl-llvm-rt*.o instrumentation/*.o
|
||||
|
@ -1 +0,0 @@
|
||||
docs/QuickStartGuide.md
|
36
TODO.md
@ -1,38 +1,36 @@
|
||||
# TODO list for AFL++
|
||||
|
||||
## Roadmap 3.00+
|
||||
## Should
|
||||
|
||||
- makefiles should provide a build summary (success/failure)
|
||||
- better documentation for custom mutators
|
||||
- better autodetection of shifting runtime timeout values
|
||||
- Update afl->pending_not_fuzzed for MOpt
|
||||
- put fuzz target in top line of UI
|
||||
- afl-plot to support multiple plot_data
|
||||
- parallel builds for source-only targets
|
||||
- get rid of check_binary, replace with more forkserver communication
|
||||
|
||||
## Maybe
|
||||
|
||||
- forkserver tells afl-fuzz if cmplog is supported and if so enable
|
||||
it by default, with AFL_CMPLOG_NO=1 (?) set to skip?
|
||||
- afl_custom_fuzz_splice_optin()
|
||||
- afl_custom_splice()
|
||||
- better autodetection of shifting runtime timeout values
|
||||
- cmplog: use colorization input for havoc?
|
||||
- parallel builds for source-only targets
|
||||
|
||||
- cmdline option from-to range for mutations
|
||||
|
||||
## Further down the road
|
||||
|
||||
afl-fuzz:
|
||||
- setting min_len/max_len/start_offset/end_offset limits for mutation output
|
||||
|
||||
qemu_mode:
|
||||
QEMU mode/FRIDA mode:
|
||||
- non colliding instrumentation
|
||||
- rename qemu specific envs to AFL_QEMU (AFL_ENTRYPOINT, AFL_CODE_START/END,
|
||||
AFL_COMPCOV_LEVEL?)
|
||||
- add AFL_QEMU_EXITPOINT (maybe multiple?), maybe pointless as we have
|
||||
- add AFL_QEMU_EXITPOINT (maybe multiple?), maybe pointless as there is
|
||||
persistent mode
|
||||
- add/implement AFL_QEMU_INST_LIBLIST and AFL_QEMU_NOINST_PROGRAM
|
||||
- add/implement AFL_QEMU_INST_REGIONS as a list of _START/_END addresses
|
||||
|
||||
|
||||
## Ideas
|
||||
|
||||
- LTO/sancov: write current edge to prev_loc and use that information when
|
||||
using cmplog or __sanitizer_cov_trace_cmp*. maybe we can deduct by follow
|
||||
up edge numbers that both following cmp paths have been found and then
|
||||
disable working on this edge id -> cmplog_intelligence branch
|
||||
using cmplog or __sanitizer_cov_trace_cmp*. maybe we can deduce by follow up
|
||||
edge numbers that both following cmp paths have been found and then disable
|
||||
working on this edge id -> cmplog_intelligence branch
|
||||
- use cmplog colorization taint result for havoc locations?
|
||||
- new instrumentation option for a thread-safe variant of feedback to shared mem.
|
||||
The user decides if this is needed (e.g., the target is multithreaded).
|
||||
|
22
afl-cmin
@ -135,6 +135,12 @@ function exists_and_is_executable(binarypath) {
|
||||
}
|
||||
|
||||
BEGIN {
|
||||
if (0 != system( "test -t 1")) {
|
||||
redirected = 1
|
||||
} else {
|
||||
redirected = 0
|
||||
}
|
||||
|
||||
print "corpus minimization tool for afl++ (awk version)\n"
|
||||
|
||||
# defaults
|
||||
@ -217,7 +223,7 @@ BEGIN {
|
||||
for (; Optind < ARGC; Optind++) {
|
||||
prog_args[i++] = ARGV[Optind]
|
||||
if (i > 1)
|
||||
prog_args_string = prog_args_string" "ARGV[Optind]
|
||||
prog_args_string = prog_args_string" '"ARGV[Optind]"'"
|
||||
}
|
||||
|
||||
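For context, prog_args collected above is the target command line given after the -- separator; a typical invocation of this awk-based afl-cmin looks like (paths and the example target's flags are placeholders):

afl-cmin -i input_corpus/ -o minimized_corpus/ -- ./target_binary --some-flag @@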
# sanity checks
|
||||
@ -396,7 +402,7 @@ BEGIN {
|
||||
system( "AFL_CMIN_ALLOW_ANY=1 "AFL_CMIN_CRASHES_ONLY"\""showmap"\" -m "mem_limit" -t "timeout" -o \""trace_dir"/.run_test\" -Z "extra_par" -- \""target_bin"\" "prog_args_string" <\""in_dir"/"first_file"\"")
|
||||
} else {
|
||||
system("cp \""in_dir"/"first_file"\" "stdin_file)
|
||||
system( "AFL_CMIN_ALLOW_ANY=1 "AFL_CMIN_CRASHES_ONLY"\""showmap"\" -m "mem_limit" -t "timeout" -o \""trace_dir"/.run_test\" -Z "extra_par" -A \""stdin_file"\" -- \""target_bin"\" "prog_args_string" </dev/null")
|
||||
system( "AFL_CMIN_ALLOW_ANY=1 "AFL_CMIN_CRASHES_ONLY"\""showmap"\" -m "mem_limit" -t "timeout" -o \""trace_dir"/.run_test\" -Z "extra_par" -H \""stdin_file"\" -- \""target_bin"\" "prog_args_string" </dev/null")
|
||||
}
|
||||
|
||||
first_count = 0
|
||||
@ -432,8 +438,8 @@ BEGIN {
|
||||
retval = system( AFL_CMIN_CRASHES_ONLY"\""showmap"\" -m "mem_limit" -t "timeout" -o \""trace_dir"\" -Z "extra_par" -i \""in_dir"\" -- \""target_bin"\" "prog_args_string)
|
||||
} else {
|
||||
print " Processing "in_count" files (forkserver mode)..."
|
||||
# print AFL_CMIN_CRASHES_ONLY"\""showmap"\" -m "mem_limit" -t "timeout" -o \""trace_dir"\" -Z "extra_par" -i \""in_dir"\" -A \""stdin_file"\" -- \""target_bin"\" "prog_args_string" </dev/null"
|
||||
retval = system( AFL_CMIN_CRASHES_ONLY"\""showmap"\" -m "mem_limit" -t "timeout" -o \""trace_dir"\" -Z "extra_par" -i \""in_dir"\" -A \""stdin_file"\" -- \""target_bin"\" "prog_args_string" </dev/null")
|
||||
# print AFL_CMIN_CRASHES_ONLY"\""showmap"\" -m "mem_limit" -t "timeout" -o \""trace_dir"\" -Z "extra_par" -i \""in_dir"\" -H \""stdin_file"\" -- \""target_bin"\" "prog_args_string" </dev/null"
|
||||
retval = system( AFL_CMIN_CRASHES_ONLY"\""showmap"\" -m "mem_limit" -t "timeout" -o \""trace_dir"\" -Z "extra_par" -i \""in_dir"\" -H \""stdin_file"\" -- \""target_bin"\" "prog_args_string" </dev/null")
|
||||
}
|
||||
|
||||
if (retval && !AFL_CMIN_CRASHES_ONLY) {
|
||||
@ -463,7 +469,8 @@ BEGIN {
|
||||
while (cur < in_count) {
|
||||
fn = infilesSmallToBig[cur]
|
||||
++cur
|
||||
printf "\r Processing file "cur"/"in_count
|
||||
if (redirected == 0) { printf "\r Processing file "cur"/"in_count }
|
||||
else { print " Processing file "cur"/"in_count }
|
||||
# create path for the trace file from afl-showmap
|
||||
tracefile_path = trace_dir"/"fn
|
||||
# gather all keys, and count them
|
||||
@ -502,7 +509,9 @@ BEGIN {
|
||||
key = field[nrFields]
|
||||
|
||||
++tcnt;
|
||||
printf "\r Processing tuple "tcnt"/"tuple_count" with count "key_count[key]"..."
|
||||
if (redirected == 0) { printf "\r Processing tuple "tcnt"/"tuple_count" with count "key_count[key]"..." }
|
||||
else { print " Processing tuple "tcnt"/"tuple_count" with count "key_count[key]"..." }
|
||||
|
||||
if (key in keyAlreadyKnown) {
|
||||
continue
|
||||
}
|
||||
@ -525,7 +534,6 @@ BEGIN {
|
||||
}
|
||||
}
|
||||
close(sortedKeys)
|
||||
print ""
|
||||
print "[+] Found "tuple_count" unique tuples across "in_count" files."
|
||||
|
||||
if (out_count == 1) {
|
||||
|
@ -11,7 +11,7 @@
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at:
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
# https://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# This tool tries to find the smallest subset of files in the input directory
|
||||
# that still trigger the full range of instrumentation data points seen in
|
||||
@ -310,7 +310,7 @@ if [ "$STDIN_FILE" = "" ]; then
|
||||
else
|
||||
|
||||
cp "$IN_DIR/$FIRST_FILE" "$STDIN_FILE"
|
||||
AFL_CMIN_ALLOW_ANY=1 "$SHOWMAP" -m "$MEM_LIMIT" -t "$TIMEOUT" -o "$TRACE_DIR/.run_test" -Z $EXTRA_PAR -A "$STDIN_FILE" -- "$@" </dev/null
|
||||
AFL_CMIN_ALLOW_ANY=1 "$SHOWMAP" -m "$MEM_LIMIT" -t "$TIMEOUT" -o "$TRACE_DIR/.run_test" -Z $EXTRA_PAR -H "$STDIN_FILE" -- "$@" </dev/null
|
||||
|
||||
fi
|
||||
|
||||
@ -360,7 +360,7 @@ echo "[*] Obtaining traces for input files in '$IN_DIR'..."
|
||||
|
||||
cp "$IN_DIR/$fn" "$STDIN_FILE"
|
||||
|
||||
"$SHOWMAP" -m "$MEM_LIMIT" -t "$TIMEOUT" -o "$TRACE_DIR/$fn" -Z $EXTRA_PAR -A "$STDIN_FILE" -- "$@" </dev/null
|
||||
"$SHOWMAP" -m "$MEM_LIMIT" -t "$TIMEOUT" -o "$TRACE_DIR/$fn" -Z $EXTRA_PAR -H "$STDIN_FILE" -- "$@" </dev/null
|
||||
|
||||
done
|
||||
|
||||
|
133
afl-persistent-config
Executable file
@ -0,0 +1,133 @@
|
||||
#!/bin/bash
|
||||
# written by jhertz
|
||||
#
|
||||
|
||||
test "$1" = "-h" -o "$1" = "-hh" && {
|
||||
echo 'afl-persistent-config'
|
||||
echo
|
||||
echo $0
|
||||
echo
|
||||
echo afl-persistent-config has no command line options
|
||||
echo
|
||||
echo afl-persistent-config permanently reconfigures the system to a high performance fuzzing state.
|
||||
echo "WARNING: this reduces the security of the system!"
|
||||
echo
|
||||
echo Note that there is also afl-system-config which sets additional runtime
|
||||
echo configuration options.
|
||||
exit 0
|
||||
}
|
||||
|
||||
echo
|
||||
echo "WARNING: This scripts makes permanent configuration changes to the system to"
|
||||
echo " increase the performance for fuzzing. As a result, the system also"
|
||||
echo " becomes less secure against attacks! If you use this script, setup"
|
||||
echo " strong firewall rules and only make SSH available as a network"
|
||||
echo " service!"
|
||||
echo
|
||||
echo -n "Type \"YES\" to continue: "
|
||||
read ANSWER
|
||||
if [[ "$ANSWER" != "YES" ]]; then
|
||||
echo Input was not YES, aborting ...
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo
|
||||
PLATFORM=`uname -s`
|
||||
|
||||
# check that we're on Mac
|
||||
if [[ "$PLATFORM" = "Darwin" ]] ; then
|
||||
|
||||
# check if UID == 0
|
||||
if [[ "$EUID" -ne 0 ]]; then
|
||||
echo "You need to be root to do this. E.g. use \"sudo\""
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# check if SIP is disabled
|
||||
if [[ ! $(csrutil status | grep "disabled") ]]; then
|
||||
echo "SIP needs to be disabled. Restart and press Command-R at reboot, Utilities => Terminal => enter \"csrutil disable\""
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo "Checks passed."
|
||||
|
||||
echo "Installing /Library/LaunchDaemons/shm_setup.plist"
|
||||
|
||||
cat << EOF > /Library/LaunchDaemons/shm_setup.plist
|
||||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
|
||||
<plist version="1.0">
|
||||
<dict>
|
||||
<key>Label</key>
|
||||
<string>shmemsetup</string>
|
||||
<key>UserName</key>
|
||||
<string>root</string>
|
||||
<key>GroupName</key>
|
||||
<string>wheel</string>
|
||||
<key>ProgramArguments</key>
|
||||
<array>
|
||||
<string>/usr/sbin/sysctl</string>
|
||||
<string>-w</string>
|
||||
<string>kern.sysv.shmmax=524288000</string>
|
||||
<string>kern.sysv.shmmin=1</string>
|
||||
<string>kern.sysv.shmmni=128</string>
|
||||
<string>kern.sysv.shmseg=48</string>
|
||||
<string>kern.sysv.shmall=131072000</string>
|
||||
</array>
|
||||
<key>KeepAlive</key>
|
||||
<false/>
|
||||
<key>RunAtLoad</key>
|
||||
<true/>
|
||||
</dict>
|
||||
</plist>
|
||||
EOF
|
||||
|
||||
echo
|
||||
echo "Reboot and enjoy your fuzzing"
|
||||
exit 0
|
||||
fi
|
||||
|
||||
if [[ "$PLATFORM" = "Linux" ]] ; then
|
||||
|
||||
# check if UID == 0
|
||||
if [[ "$EUID" -ne 0 ]]; then
|
||||
echo "You need to be root to do this. E.g. use \"sudo\""
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo "Checks passed."
|
||||
|
||||
test -d /etc/sysctl.d || echo Error: /etc/sysctl.d directory not found, cannot install shmem config
|
||||
test -d /etc/sysctl.d -a '!' -e /etc/sysctl.d/99-fuzzing && {
|
||||
echo "Installing /etc/sysctl.d/99-fuzzing"
|
||||
cat << EOF > /etc/sysctl.d/99-fuzzing
|
||||
kernel.core_uses_pid=0
|
||||
kernel.core_pattern=core
|
||||
kernel.randomize_va_space=0
|
||||
kernel.sched_child_runs_first=1
|
||||
kernel.sched_autogroup_enabled=1
|
||||
kernel.sched_migration_cost_ns=50000000
|
||||
kernel.sched_latency_ns=250000000
|
||||
EOF
|
||||
}
|
||||
|
||||
egrep -q '^GRUB_CMDLINE_LINUX_DEFAULT=' /etc/default/grub 2>/dev/null || echo Error: /etc/default/grub with GRUB_CMDLINE_LINUX_DEFAULT is not present, cannot set boot options
|
||||
egrep -q '^GRUB_CMDLINE_LINUX_DEFAULT=' /etc/default/grub 2>/dev/null && {
|
||||
egrep '^GRUB_CMDLINE_LINUX_DEFAULT=' /etc/default/grub | egrep -q hardened_usercopy=off || {
|
||||
echo "Configuring performance boot options"
|
||||
LINE=`egrep '^GRUB_CMDLINE_LINUX_DEFAULT=' /etc/default/grub | sed 's/^GRUB_CMDLINE_LINUX_DEFAULT=//' | tr -d '"'`
|
||||
OPTIONS="$LINE ibpb=off ibrs=off kpti=off l1tf=off mds=off mitigations=off no_stf_barrier noibpb noibrs nopcid nopti nospec_store_bypass_disable nospectre_v1 nospectre_v2 pcid=off pti=off spec_store_bypass_disable=off spectre_v2=off stf_barrier=off srbds=off noexec=off noexec32=off tsx=on tsx=on tsx_async_abort=off mitigations=off audit=0 hardened_usercopy=off ssbd=force-off"
|
||||
echo Setting boot options in /etc/default/grub to GRUB_CMDLINE_LINUX_DEFAULT=\"$OPTIONS\"
|
||||
sed -i "s|^GRUB_CMDLINE_LINUX_DEFAULT=.*|GRUB_CMDLINE_LINUX_DEFAULT=\"$OPTIONS\"|" /etc/default/grub
|
||||
}
|
||||
}
|
||||
|
||||
echo
|
||||
echo "Reboot and enjoy your fuzzing"
|
||||
exit 0
|
||||
fi
|
||||
|
||||
|
||||
|
||||
echo "Error: Unknown platform \"$PLATFORM\", currently supported are Linux and MacOS."
|
||||
exit 1
|
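The script writes /etc/sysctl.d/99-fuzzing and edits GRUB_CMDLINE_LINUX_DEFAULT but leaves applying them to the user; on most Linux distributions the settings only take full effect after the sysctl files are reloaded, the grub config is regenerated and the machine rebooted, roughly (tooling names depend on the distribution):

sudo sysctl --system     # load /etc/sysctl.d/99-fuzzing now
sudo update-grub         # or grub2-mkconfig -o /boot/grub2/grub.cfg on RPM-based systems
sudo reboot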
179
afl-plot
@ -12,7 +12,7 @@
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at:
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
# https://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
|
||||
get_abs_path() {
|
||||
@ -22,16 +22,28 @@ get_abs_path() {
|
||||
echo "progress plotting utility for afl-fuzz by Michal Zalewski"
|
||||
echo
|
||||
|
||||
if [ ! "$#" = "2" ]; then
|
||||
GRAPHICAL="0"
|
||||
|
||||
if [ "$1" = "-g" ] || [ "$1" = "--graphical" ]; then
|
||||
GRAPHICAL="1"
|
||||
shift
|
||||
fi
|
||||
|
||||
if [ "$#" != "2" ]; then
|
||||
|
||||
cat 1>&2 <<_EOF_
|
||||
$0 afl_state_dir graph_output_dir
|
||||
$0 [ -g | --graphical ] afl_state_dir graph_output_dir
|
||||
|
||||
This program generates gnuplot images from afl-fuzz output data. Usage:
|
||||
This program generates gnuplot images from afl-fuzz output data.
|
||||
|
||||
The afl_state_dir parameter should point to an existing state directory for any
|
||||
active or stopped instance of afl-fuzz; while graph_output_dir should point to
|
||||
an empty directory where this tool can write the resulting plots to.
|
||||
Usage:
|
||||
|
||||
afl_state_dir should point to an existing state directory for any
|
||||
active or stopped instance of afl-fuzz
|
||||
graph_output_dir should point to an empty directory where this
|
||||
tool can write the resulting plots to
|
||||
-g, --graphical (optional) display the plots in a graphical window
|
||||
(you should have built afl-plot-ui to use this option)
|
||||
|
||||
The program will put index.html and three PNG images in the output directory;
|
||||
you should be able to view it with any web browser of your choice.
|
||||
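Matching the usage text above, two illustrative invocations (directory names are placeholders):

afl-plot out/default graphs/        # write index.html plus the PNG plots into graphs/
afl-plot -g out/default graphs/     # additionally show the plots in afl-plot-ui windows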
@ -102,18 +114,10 @@ fi
|
||||
rm -f "$outputdir/high_freq.png" "$outputdir/low_freq.png" "$outputdir/exec_speed.png" "$outputdir/edges.png"
|
||||
mv -f "$outputdir/index.html" "$outputdir/index.html.orig" 2>/dev/null
|
||||
|
||||
echo "[*] Generating plots..."
|
||||
|
||||
(
|
||||
|
||||
cat <<_EOF_
|
||||
set terminal png truecolor enhanced size 1000,300 butt
|
||||
|
||||
set output '$outputdir/high_freq.png'
|
||||
|
||||
GNUPLOT_SETUP="
|
||||
#set xdata time
|
||||
#set timefmt '%s'
|
||||
#set format x "%b %d\n%H:%M"
|
||||
#set format x \"%b %d\n%H:%M\"
|
||||
set tics font 'small'
|
||||
unset mxtics
|
||||
unset mytics
|
||||
@ -127,36 +131,167 @@ set key outside
|
||||
set autoscale xfixmin
|
||||
set autoscale xfixmax
|
||||
|
||||
set xlabel "relative time in seconds" font "small"
|
||||
set xlabel \"relative time in seconds\" font \"small\"
|
||||
"
|
||||
|
||||
plot '$inputdir/plot_data' using 1:4 with filledcurve x1 title 'total paths' linecolor rgb '#000000' fillstyle transparent solid 0.2 noborder, \\
|
||||
'' using 1:3 with filledcurve x1 title 'current path' linecolor rgb '#f0f0f0' fillstyle transparent solid 0.5 noborder, \\
|
||||
'' using 1:5 with lines title 'pending paths' linecolor rgb '#0090ff' linewidth 3, \\
|
||||
PLOT_HF="
|
||||
set terminal png truecolor enhanced size 1000,300 butt
|
||||
set output '$outputdir/high_freq.png'
|
||||
|
||||
$GNUPLOT_SETUP
|
||||
|
||||
plot '$inputdir/plot_data' using 1:4 with filledcurve x1 title 'corpus count' linecolor rgb '#000000' fillstyle transparent solid 0.2 noborder, \\
|
||||
'' using 1:3 with filledcurve x1 title 'current fuzz item' linecolor rgb '#f0f0f0' fillstyle transparent solid 0.5 noborder, \\
|
||||
'' using 1:5 with lines title 'pending items' linecolor rgb '#0090ff' linewidth 3, \\
|
||||
'' using 1:6 with lines title 'pending favs' linecolor rgb '#c00080' linewidth 3, \\
|
||||
'' using 1:2 with lines title 'cycles done' linecolor rgb '#c000f0' linewidth 3
|
||||
"
|
||||
|
||||
PLOT_LF="
|
||||
set terminal png truecolor enhanced size 1000,200 butt
|
||||
set output '$outputdir/low_freq.png'
|
||||
|
||||
$GNUPLOT_SETUP
|
||||
|
||||
plot '$inputdir/plot_data' using 1:8 with filledcurve x1 title '' linecolor rgb '#c00080' fillstyle transparent solid 0.2 noborder, \\
|
||||
'' using 1:8 with lines title ' uniq crashes' linecolor rgb '#c00080' linewidth 3, \\
|
||||
'' using 1:9 with lines title 'uniq hangs' linecolor rgb '#c000f0' linewidth 3, \\
|
||||
'' using 1:10 with lines title 'levels' linecolor rgb '#0090ff' linewidth 3
|
||||
"
|
||||
|
||||
PLOT_ES="
|
||||
set terminal png truecolor enhanced size 1000,200 butt
|
||||
set output '$outputdir/exec_speed.png'
|
||||
|
||||
$GNUPLOT_SETUP
|
||||
|
||||
plot '$inputdir/plot_data' using 1:11 with filledcurve x1 title '' linecolor rgb '#0090ff' fillstyle transparent solid 0.2 noborder, \\
|
||||
'$inputdir/plot_data' using 1:11 with lines title ' execs/sec' linecolor rgb '#0090ff' linewidth 3 smooth bezier;
|
||||
"
|
||||
|
||||
PLOT_EG="
|
||||
set terminal png truecolor enhanced size 1000,300 butt
|
||||
set output '$outputdir/edges.png'
|
||||
|
||||
$GNUPLOT_SETUP
|
||||
|
||||
plot '$inputdir/plot_data' using 1:13 with lines title ' edges' linecolor rgb '#0090ff' linewidth 3
|
||||
"
|
||||
|
||||
if [ "$#" = "2" ] && [ "$GRAPHICAL" = "1" ]; then
|
||||
|
||||
afl-plot-ui -h > /dev/null 2>&1
|
||||
|
||||
if [ "$?" != "0" ]; then
|
||||
|
||||
cat 1>&2 <<_EOF_
|
||||
You do not seem to have the afl-plot-ui utility installed. If you have installed afl-plot-ui, make sure the afl-plot-ui executable is in your PATH.
|
||||
If you are still facing any problems, please open an issue at https://github.com/AFLplusplus/AFLplusplus/issues.
|
||||
|
||||
No plots have been generated. Please rerun without the "-g" or "--graphical" flag to generate the plots.
|
||||
_EOF_
|
||||
|
||||
exit 1
|
||||
|
||||
fi
|
||||
|
||||
rm -rf "$outputdir/.tmp"
|
||||
mkdir -p "$outputdir/.tmp"
|
||||
mkfifo "$outputdir/.tmp/win_ids" || exit 1
|
||||
|
||||
afl-plot-ui > "$outputdir/.tmp/win_ids" &
|
||||
W_IDS=$(cat "$outputdir/.tmp/win_ids")
|
||||
|
||||
rm -rf "$outputdir/.tmp"
|
||||
|
||||
W_ID1=$(echo "$W_IDS" | head -n 1)
|
||||
W_ID2=$(echo "$W_IDS" | head -n 2 | tail -n 1)
|
||||
W_ID3=$(echo "$W_IDS" | head -n 3 | tail -n 1)
|
||||
W_ID4=$(echo "$W_IDS" | tail -n 1)
|
||||
|
||||
echo "[*] Generating plots..."
|
||||
|
||||
(
|
||||
|
||||
cat << _EOF_
|
||||
|
||||
$PLOT_HF
|
||||
set term x11 window "$W_ID3"
|
||||
set output
|
||||
replot
|
||||
pause mouse close
|
||||
|
||||
_EOF_
|
||||
|
||||
) | gnuplot
|
||||
) | gnuplot 2> /dev/null &
|
||||
|
||||
(
|
||||
|
||||
cat << _EOF_
|
||||
|
||||
$PLOT_LF
|
||||
set term x11 window "$W_ID4"
|
||||
set output
|
||||
replot
|
||||
pause mouse close
|
||||
|
||||
_EOF_
|
||||
|
||||
) | gnuplot 2> /dev/null &
|
||||
|
||||
(
|
||||
|
||||
cat << _EOF_
|
||||
|
||||
$PLOT_ES
|
||||
set term x11 window "$W_ID2"
|
||||
set output
|
||||
replot
|
||||
pause mouse close
|
||||
|
||||
_EOF_
|
||||
|
||||
) | gnuplot 2> /dev/null &
|
||||
|
||||
(
|
||||
|
||||
cat << _EOF_
|
||||
|
||||
$PLOT_EG
|
||||
set term x11 window "$W_ID1"
|
||||
set output
|
||||
replot
|
||||
pause mouse close
|
||||
|
||||
_EOF_
|
||||
|
||||
) | gnuplot 2> /dev/null &
|
||||
|
||||
sleep 1
|
||||
|
||||
else
|
||||
|
||||
echo "[*] Generating plots..."
|
||||
|
||||
(
|
||||
|
||||
cat << _EOF_
|
||||
|
||||
$PLOT_HF
|
||||
|
||||
$PLOT_LF
|
||||
|
||||
$PLOT_ES
|
||||
|
||||
$PLOT_EG
|
||||
|
||||
_EOF_
|
||||
|
||||
) | gnuplot
|
||||
|
||||
echo "[?] You can also use -g flag to view the plots in an GUI window, and interact with the plots (if you have built afl-plot-ui). Run \"afl-plot-h\" to know more."
|
||||
|
||||
fi
|
||||
|
||||
if [ ! -s "$outputdir/exec_speed.png" ]; then
|
||||
|
||||
|
@ -6,10 +6,12 @@ test "$1" = "-h" -o "$1" = "-hh" && {
|
||||
echo
|
||||
echo afl-system-config has no command line options
|
||||
echo
|
||||
echo afl-system reconfigures the system to a high performance fuzzing state
|
||||
echo afl-system-config reconfigures the system to a high performance fuzzing state.
|
||||
echo "WARNING: this reduces the security of the system!"
|
||||
echo
|
||||
exit 1
|
||||
echo Note that there is also afl-persistent-config which sets additional permanent
|
||||
echo configuration options.
|
||||
exit 0
|
||||
}
|
||||
|
||||
DONE=
|
||||
@ -32,8 +34,8 @@ if [ "$PLATFORM" = "Linux" ] ; then
|
||||
sysctl -w kernel.randomize_va_space=0
|
||||
sysctl -w kernel.sched_child_runs_first=1
|
||||
sysctl -w kernel.sched_autogroup_enabled=1
|
||||
sysctl -w kernel.sched_migration_cost_ns=50000000
|
||||
sysctl -w kernel.sched_latency_ns=250000000
|
||||
sysctl -w kernel.sched_migration_cost_ns=50000000 2>/dev/null
|
||||
sysctl -w kernel.sched_latency_ns=250000000 2>/dev/null
|
||||
echo never > /sys/kernel/mm/transparent_hugepage/enabled
|
||||
test -e /sys/devices/system/cpu/cpufreq/scaling_governor && echo performance | tee /sys/devices/system/cpu/cpufreq/scaling_governor
|
||||
test -e /sys/devices/system/cpu/cpufreq/policy0/scaling_governor && echo performance | tee /sys/devices/system/cpu/cpufreq/policy*/scaling_governor
|
||||
@ -50,7 +52,7 @@ if [ "$PLATFORM" = "Linux" ] ; then
|
||||
echo ' /etc/default/grub:GRUB_CMDLINE_LINUX_DEFAULT="ibpb=off ibrs=off kpti=0 l1tf=off mds=off mitigations=off no_stf_barrier noibpb noibrs nopcid nopti nospec_store_bypass_disable nospectre_v1 nospectre_v2 pcid=off pti=off spec_store_bypass_disable=off spectre_v2=off stf_barrier=off srbds=off noexec=off noexec32=off tsx=on tsx_async_abort=off arm64.nopauth audit=0 hardened_usercopy=off ssbd=force-off"'
|
||||
echo
|
||||
}
|
||||
echo If you run fuzzing instances in docker, run them with \"--security-opt seccomp=unconfined\" for more speed
|
||||
echo If you run fuzzing instances in docker, run them with \"--security-opt seccomp=unconfined\" for more speed.
|
||||
echo
|
||||
DONE=1
|
||||
fi
|
||||
@ -74,6 +76,9 @@ EOF
|
||||
DONE=1
|
||||
fi
|
||||
if [ "$PLATFORM" = "OpenBSD" ] ; then
|
||||
doas sysctl vm.malloc_conf=
|
||||
echo 'Freecheck on allocation in particular can be detrimental to performance.'
|
||||
echo 'Also we might not want necessarily to abort at any allocation failure.'
|
||||
echo 'System security features cannot be disabled on OpenBSD.'
|
||||
echo
|
||||
DONE=1
|
||||
@ -99,9 +104,10 @@ if [ "$PLATFORM" = "NetBSD" ] ; then
|
||||
DONE=1
|
||||
fi
|
||||
if [ "$PLATFORM" = "Darwin" ] ; then
|
||||
sysctl kern.sysv.shmmax=8388608
|
||||
sysctl kern.sysv.shmmax=524288000
|
||||
sysctl kern.sysv.shmmin=1
|
||||
sysctl kern.sysv.shmseg=48
|
||||
sysctl kern.sysv.shmall=98304
|
||||
sysctl kern.sysv.shmall=131072000
|
||||
echo Settings applied.
|
||||
echo
|
||||
if [ $(launchctl list 2>/dev/null | grep -q '\.ReportCrash$') ] ; then
|
||||
@ -112,7 +118,7 @@ if [ "$PLATFORM" = "Darwin" ] ; then
|
||||
sudo launchctl unload -w ${SL}/LaunchDaemons/${PL}.Root.plist >/dev/null 2>&1
|
||||
echo
|
||||
fi
|
||||
echo It is recommended to disable System Integration Protection for increased performance.
|
||||
echo It is recommended to disable System Integrity Protection for increased performance.
|
||||
echo
|
||||
DONE=1
|
||||
fi
|
||||
|
40
afl-whatsup
@ -6,13 +6,13 @@
|
||||
# Originally written by Michal Zalewski
|
||||
#
|
||||
# Copyright 2015 Google Inc. All rights reserved.
|
||||
# Copyright 2019-2020 AFLplusplus Project. All rights reserved.
|
||||
# Copyright 2019-2022 AFLplusplus Project. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at:
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
# https://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# This tool summarizes the status of any locally-running synchronized
|
||||
# instances of afl-fuzz.
|
||||
@ -91,9 +91,9 @@ TOTAL_CRASHES=0
|
||||
TOTAL_PFAV=0
|
||||
TOTAL_PENDING=0
|
||||
|
||||
# Time since last path / crash / hang, formatted as string
|
||||
# Time since last find / crash / hang, formatted as string
|
||||
FMT_TIME="0 days 0 hours"
|
||||
FMT_PATH="${RED}none seen yet${NC}"
|
||||
FMT_FIND="${RED}none seen yet${NC}"
|
||||
FMT_CRASH="none seen yet"
|
||||
FMT_HANG="none seen yet"
|
||||
|
||||
@ -135,7 +135,7 @@ fmt_duration()
|
||||
|
||||
FIRST=true
|
||||
TOTAL_WCOP=
|
||||
TOTAL_LAST_PATH=0
|
||||
TOTAL_LAST_FIND=0
|
||||
|
||||
for i in `find . -maxdepth 2 -iname fuzzer_stats | sort`; do
|
||||
|
||||
@ -169,7 +169,7 @@ for i in `find . -maxdepth 2 -iname fuzzer_stats | sort`; do
|
||||
fi
|
||||
|
||||
DEAD_CNT=$((DEAD_CNT + 1))
|
||||
last_path=0
|
||||
last_find=0
|
||||
|
||||
if [ "$PROCESS_DEAD" = "" ]; then
|
||||
|
||||
@ -183,17 +183,17 @@ for i in `find . -maxdepth 2 -iname fuzzer_stats | sort`; do
|
||||
|
||||
EXEC_SEC=0
|
||||
test -z "$RUN_UNIX" -o "$RUN_UNIX" = 0 || EXEC_SEC=$((execs_done / RUN_UNIX))
|
||||
PATH_PERC=$((cur_path * 100 / paths_total))
|
||||
PATH_PERC=$((cur_item * 100 / corpus_count))
|
||||
|
||||
TOTAL_TIME=$((TOTAL_TIME + RUN_UNIX))
|
||||
TOTAL_EPS=$((TOTAL_EPS + EXEC_SEC))
|
||||
TOTAL_EXECS=$((TOTAL_EXECS + execs_done))
|
||||
TOTAL_CRASHES=$((TOTAL_CRASHES + unique_crashes))
|
||||
TOTAL_CRASHES=$((TOTAL_CRASHES + saved_crashes))
|
||||
TOTAL_PENDING=$((TOTAL_PENDING + pending_total))
|
||||
TOTAL_PFAV=$((TOTAL_PFAV + pending_favs))
|
||||
|
||||
if [ "$last_path" -gt "$TOTAL_LAST_PATH" ]; then
|
||||
TOTAL_LAST_PATH=$last_path
|
||||
if [ "$last_find" -gt "$TOTAL_LAST_FIND" ]; then
|
||||
TOTAL_LAST_FIND=$last_find
|
||||
fi
|
||||
|
||||
if [ "$SUMMARY_ONLY" = "" ]; then
|
||||
@ -210,7 +210,7 @@ for i in `find . -maxdepth 2 -iname fuzzer_stats | sort`; do
|
||||
echo " ${RED}slow execution, $EXEC_SEC execs/sec${NC}"
|
||||
fi
|
||||
|
||||
fmt_duration $last_path && FMT_PATH=$DUR_STRING
|
||||
fmt_duration $last_find && FMT_FIND=$DUR_STRING
|
||||
fmt_duration $last_crash && FMT_CRASH=$DUR_STRING
|
||||
fmt_duration $last_hang && FMT_HANG=$DUR_STRING
|
||||
FMT_CWOP="not available"
|
||||
@ -220,7 +220,7 @@ for i in `find . -maxdepth 2 -iname fuzzer_stats | sort`; do
|
||||
test "$cycles_wo_finds" -gt 50 && FMT_CWOP="${RED}$cycles_wo_finds${NC}"
|
||||
}
|
||||
|
||||
echo " last_path : $FMT_PATH"
|
||||
echo " last_find : $FMT_FIND"
|
||||
echo " last_crash : $FMT_CRASH"
|
||||
echo " last_hang : $FMT_HANG"
|
||||
echo " cycles_wo_finds : $FMT_CWOP"
|
||||
@ -229,12 +229,12 @@ for i in `find . -maxdepth 2 -iname fuzzer_stats | sort`; do
|
||||
MEM_USAGE=$(ps aux | grep $fuzzer_pid | grep -v grep | awk '{print $4}')
|
||||
|
||||
echo " cpu usage $CPU_USAGE%, memory usage $MEM_USAGE%"
|
||||
echo " cycle $((cycles_done + 1)), lifetime speed $EXEC_SEC execs/sec, path $cur_path/$paths_total (${PATH_PERC}%)"
|
||||
echo " cycles $((cycles_done + 1)), lifetime speed $EXEC_SEC execs/sec, items $cur_item/$corpus_count (${PATH_PERC}%)"
|
||||
|
||||
if [ "$unique_crashes" = "0" ]; then
|
||||
if [ "$saved_crashes" = "0" ]; then
|
||||
echo " pending $pending_favs/$pending_total, coverage $bitmap_cvg, no crashes yet"
|
||||
else
|
||||
echo " pending $pending_favs/$pending_total, coverage $bitmap_cvg, crash count $unique_crashes (!)"
|
||||
echo " pending $pending_favs/$pending_total, coverage $bitmap_cvg, crashes saved $saved_crashes (!)"
|
||||
fi
|
||||
|
||||
echo
|
||||
@ -243,7 +243,7 @@ for i in `find . -maxdepth 2 -iname fuzzer_stats | sort`; do
|
||||
|
||||
done
|
||||
|
||||
# Formatting for total time, time since last path, crash, and hang
|
||||
# Formatting for total time, time since last find, crash, and hang
|
||||
fmt_duration $((CUR_TIME - TOTAL_TIME)) && FMT_TIME=$DUR_STRING
|
||||
# Formatting for total execution
|
||||
FMT_EXECS="0 millions"
|
||||
@ -263,7 +263,7 @@ TOTAL_DAYS=$((TOTAL_TIME / 60 / 60 / 24))
|
||||
TOTAL_HRS=$(((TOTAL_TIME / 60 / 60) % 24))
|
||||
|
||||
test -z "$TOTAL_WCOP" && TOTAL_WCOP="not available"
|
||||
fmt_duration $TOTAL_LAST_PATH && TOTAL_LAST_PATH=$DUR_STRING
|
||||
fmt_duration $TOTAL_LAST_FIND && TOTAL_LAST_FIND=$DUR_STRING
|
||||
|
||||
test "$TOTAL_TIME" = "0" && TOTAL_TIME=1
|
||||
|
||||
@ -293,15 +293,15 @@ echo " Cumulative speed : $TOTAL_EPS execs/sec"
|
||||
if [ "$ALIVE_CNT" -gt "0" ]; then
|
||||
echo " Average speed : $((TOTAL_EPS / ALIVE_CNT)) execs/sec"
|
||||
fi
|
||||
echo " Pending paths : $TOTAL_PFAV faves, $TOTAL_PENDING total"
|
||||
echo " Pending items : $TOTAL_PFAV faves, $TOTAL_PENDING total"
|
||||
|
||||
if [ "$ALIVE_CNT" -gt "1" ]; then
|
||||
echo " Pending per fuzzer : $((TOTAL_PFAV/ALIVE_CNT)) faves, $((TOTAL_PENDING/ALIVE_CNT)) total (on average)"
|
||||
fi
|
||||
|
||||
echo " Crashes found : $TOTAL_CRASHES locally unique"
|
||||
echo " Crashes saved : $TOTAL_CRASHES"
|
||||
echo "Cycles without finds : $TOTAL_WCOP"
|
||||
echo " Time without finds : $TOTAL_LAST_PATH"
|
||||
echo " Time without finds : $TOTAL_LAST_FIND"
|
||||
echo
|
||||
|
||||
exit 0
|
||||
|
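The renames above (cur_path/paths_total to cur_item/corpus_count, unique_crashes to saved_crashes, last_path to last_find) all refer to fields of the per-instance fuzzer_stats file that afl-whatsup aggregates. A minimal sketch for pulling those fields from one instance by hand, assuming an afl-fuzz 4.x output directory at a hypothetical ./out/default:

```bash
STATS=./out/default/fuzzer_stats   # hypothetical path

# Each fuzzer_stats line looks like "key          : value".
for key in execs_done corpus_count cur_item saved_crashes pending_favs last_find; do
  printf '%-14s %s\n' "$key" "$(grep "^$key " "$STATS" | sed 's/.*: //')"
done
```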
2
coresight_mode/.gitignore
vendored
Normal file
@ -0,0 +1,2 @@
|
||||
.local
|
||||
glibc*
|
62
coresight_mode/GNUmakefile
Normal file
@ -0,0 +1,62 @@
|
||||
#!/usr/bin/env make
|
||||
# SPDX-License-Identifier: Apache-2.0
|
||||
# Copyright 2021 Ricerca Security, Inc. All rights reserved.
|
||||
|
||||
SHELL:=bash
|
||||
PREFIX?=$(shell pwd)/.local
|
||||
|
||||
CS_TRACE:=coresight-trace
|
||||
|
||||
PATCHELF?=$(PREFIX)/bin/patchelf
|
||||
|
||||
PATCH_DIR:=patches
|
||||
|
||||
GLIBC_VER:=2.33
|
||||
GLIBC_NAME:=glibc-$(GLIBC_VER)
|
||||
GLIBC_URL_BASE:=http://ftp.gnu.org/gnu/glibc
|
||||
GLIBC_LDSO?=$(PREFIX)/lib/ld-linux-aarch64.so.1
|
||||
|
||||
OUTPUT?="$(TARGET).patched"
|
||||
|
||||
all: build
|
||||
|
||||
build:
|
||||
git submodule update --init --recursive $(CS_TRACE)
|
||||
$(MAKE) -C $(CS_TRACE)
|
||||
cp $(CS_TRACE)/cs-proxy ../afl-cs-proxy
|
||||
|
||||
patch: | $(PATCHELF) $(GLIBC_LDSO)
|
||||
@if test -z "$(TARGET)"; then echo "TARGET is not set"; exit 1; fi
|
||||
$(PATCHELF) \
|
||||
--set-interpreter $(GLIBC_LDSO) \
|
||||
--set-rpath $(dir $(GLIBC_LDSO)) \
|
||||
--output $(OUTPUT) \
|
||||
$(TARGET)
|
||||
|
||||
$(PATCHELF): patchelf
|
||||
git submodule update --init $<
|
||||
cd $< && \
|
||||
./bootstrap.sh && \
|
||||
./configure --prefix=$(PREFIX) && \
|
||||
$(MAKE) && \
|
||||
$(MAKE) check && \
|
||||
$(MAKE) install
|
||||
|
||||
$(GLIBC_LDSO): | $(GLIBC_NAME).tar.xz
|
||||
tar -xf $(GLIBC_NAME).tar.xz
|
||||
for file in $(shell find $(PATCH_DIR) -maxdepth 1 -type f); do \
|
||||
patch -p1 < $$file ; \
|
||||
done
|
||||
mkdir -p $(GLIBC_NAME)/build
|
||||
cd $(GLIBC_NAME)/build && \
|
||||
../configure --prefix=$(PREFIX) && \
|
||||
$(MAKE) && \
|
||||
$(MAKE) install
|
||||
|
||||
$(GLIBC_NAME).tar.xz:
|
||||
wget -qO $@ $(GLIBC_URL_BASE)/$@
|
||||
|
||||
clean:
|
||||
$(MAKE) -C $(CS_TRACE) clean
|
||||
|
||||
.PHONY: all build patch clean
|
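For reference, the `patch` rule above boils down to a single patchelf invocation against the glibc that was just installed under `$(PREFIX)`. A sketch of the expanded command with the default settings; the target name is illustrative:

```bash
PREFIX="$PWD/.local"

"$PREFIX/bin/patchelf" \
    --set-interpreter "$PREFIX/lib/ld-linux-aarch64.so.1" \
    --set-rpath "$PREFIX/lib/" \
    --output ./mytarget.patched \
    ./mytarget
```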
21
coresight_mode/Makefile
Normal file
@ -0,0 +1,21 @@
|
||||
#!/usr/bin/env make
|
||||
# SPDX-License-Identifier: Apache-2.0
|
||||
# Copyright 2021 Ricerca Security, Inc. All rights reserved.
|
||||
|
||||
all:
|
||||
@echo trying to use GNU make...
|
||||
@gmake all || echo please install GNUmake
|
||||
|
||||
build:
|
||||
@echo trying to use GNU make...
|
||||
@gmake build || echo please install GNUmake
|
||||
|
||||
patch:
|
||||
@echo trying to use GNU make...
|
||||
@gmake patch || echo please install GNUmake
|
||||
|
||||
clean:
|
||||
@echo trying to use GNU make...
|
||||
@gmake clean || echo please install GNUmake
|
||||
|
||||
.PHONY: all build patch clean
|
70
coresight_mode/README.md
Normal file
@ -0,0 +1,70 @@
|
||||
# AFL++ CoreSight mode
|
||||
|
||||
CoreSight mode enables binary-only fuzzing on ARM64 Linux using CoreSight (ARM's hardware tracing technology).
|
||||
|
||||
NOTE: CoreSight mode is at an early development stage and is not yet suitable for production use.
|
||||
Currently the following hardware boards are supported:
|
||||
* NVIDIA Jetson TX2 (NVIDIA Parker)
|
||||
* NVIDIA Jetson Nano (NVIDIA Tegra X1)
|
||||
* GIGABYTE R181-T90 (Marvell ThunderX2 CN99XX)
|
||||
|
||||
## Getting started
|
||||
|
||||
Please read the [RICSec/coresight-trace README](https://github.com/RICSecLab/coresight-trace/blob/master/README.md) and check the prerequisites (capstone) before getting started.
|
||||
|
||||
CoreSight mode supports the AFL++ fork server mode to reduce `exec` system call
|
||||
overhead. To support this for binary-only fuzzing, the target ELF binary has to
|
||||
be re-linked against the patched glibc. We employ this design from
|
||||
[PTrix](https://github.com/junxzm1990/afl-pt).
|
||||
|
||||
Check out all the git submodules in the `cs_mode` directory:
|
||||
|
||||
```bash
|
||||
git submodule update --init --recursive
|
||||
```
|
||||
|
||||
### Build coresight-trace
|
||||
|
||||
There are some notes on building coresight-trace. Refer to the [README](https://github.com/RICSecLab/coresight-trace/blob/master/README.md) for the details. Run make in the `cs_mode` directory:
|
||||
|
||||
```bash
|
||||
make build
|
||||
```
|
||||
|
||||
Make sure `cs-proxy` is placed in the AFL++ root directory as `afl-cs-proxy`.
|
||||
|
||||
### Patch COTS binary
|
||||
|
||||
The fork server mode requires patchelf and the patched glibc. The dependencies can be built by simply running make:
|
||||
|
||||
```bash
|
||||
make patch TARGET=$BIN
|
||||
```
|
||||
|
||||
The above make command builds and installs the dependencies to `$PREFIX` (defaults to `$PWD/.local`) the first time it is run. It then runs `patchelf` on `$BIN`, writing the result to `$OUTPUT` (`$BIN.patched` by default).
|
||||
|
||||
### Run afl-fuzz
|
||||
|
||||
Run `afl-fuzz` with `-A` option to use CoreSight mode.
|
||||
|
||||
```bash
|
||||
sudo afl-fuzz -A -i input -o output -- $OUTPUT @@
|
||||
```
|
||||
|
||||
## Environment Variables
|
||||
|
||||
There are AFL++ CoreSight mode-specific environment variables for run-time configuration.
|
||||
|
||||
* `AFL_CS_CUSTOM_BIN` overrides the proxy application path. `afl-cs-proxy` will be used if not defined.
|
||||
|
||||
* `AFLCS_COV` specifies the coverage type used when decoding the CoreSight trace. `edge` and `path` are supported. The default value is `edge`; a combined usage example follows this list.
|
||||
* `AFLCS_UDMABUF` is the u-dma-buf device number used to store trace data in the DMA region. The default value is `0`.
|
||||
|
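Combining the run command and the environment variables above, an end-to-end invocation could look like the sketch below. Whether the variables survive `sudo` depends on the local sudoers policy; `-E` is one way to pass them through. The target path is illustrative.

```bash
export AFLCS_COV=edge      # or "path"
export AFLCS_UDMABUF=0

sudo -E afl-fuzz -A -i input -o output -- ./target.patched @@
```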
||||
## TODO List
|
||||
|
||||
* Eliminate modified glibc dependency
|
||||
* Support parallel fuzzing
|
||||
|
||||
## Acknowledgements
|
||||
|
||||
This project has received funding from the Acquisition, Technology & Logistics Agency (ATLA) under the National Security Technology Research Promotion Fund 2021 (JPJ004596).
|
1
coresight_mode/coresight-trace
Submodule
1
coresight_mode/patchelf
Submodule
117
coresight_mode/patches/0001-Add-AFL-forkserver.patch
Normal file
@ -0,0 +1,117 @@
|
||||
diff --git a/glibc-2.33/elf/rtld.c b/glibc-2.33/elf/rtld.c
|
||||
index 596b6ac3..2ee270d4 100644
|
||||
--- a/glibc-2.33/elf/rtld.c
|
||||
+++ b/glibc-2.33/elf/rtld.c
|
||||
@@ -169,6 +169,99 @@ uintptr_t __pointer_chk_guard_local
|
||||
strong_alias (__pointer_chk_guard_local, __pointer_chk_guard)
|
||||
#endif
|
||||
|
||||
+#define AFLCS_RTLD 1
|
||||
+
|
||||
+#if AFLCS_RTLD
|
||||
+
|
||||
+#include <sys/shm.h>
|
||||
+#include <sys/types.h>
|
||||
+#include <sys/wait.h>
|
||||
+#include <dlfcn.h>
|
||||
+#include <signal.h>
|
||||
+
|
||||
+#include <asm/unistd.h>
|
||||
+#include <unistd.h>
|
||||
+
|
||||
+#define FORKSRV_FD 198
|
||||
+
|
||||
+#define AFLCS_ENABLE "__AFLCS_ENABLE"
|
||||
+
|
||||
+/* We use this additional AFLCS_# AFLCS_#+1 pair to communicate with proxy */
|
||||
+#define AFLCS_FORKSRV_FD (FORKSRV_FD - 3)
|
||||
+#define AFLCS_RTLD_SNIPPET do { __cs_start_forkserver(); } while(0)
|
||||
+
|
||||
+/* Fork server logic, invoked before we return from _dl_start. */
|
||||
+
|
||||
+static void __cs_start_forkserver(void) {
|
||||
+ int status;
|
||||
+ pid_t child_pid;
|
||||
+ static char tmp[4] = {0, 0, 0, 0};
|
||||
+
|
||||
+ if (!getenv(AFLCS_ENABLE)) {
|
||||
+ return;
|
||||
+ }
|
||||
+
|
||||
+ if (write(AFLCS_FORKSRV_FD + 1, tmp, 4) != 4) {
|
||||
+ _exit(-1);
|
||||
+ }
|
||||
+
|
||||
+ /* All right, let's await orders... */
|
||||
+ while (1) {
|
||||
+ /* Whoops, parent dead? */
|
||||
+ if (read(AFLCS_FORKSRV_FD, tmp, 4) != 4) {
|
||||
+ _exit(1);
|
||||
+ }
|
||||
+
|
||||
+ child_pid = INLINE_SYSCALL(clone, 5,
|
||||
+ CLONE_CHILD_SETTID | CLONE_CHILD_CLEARTID | SIGCHLD, 0,
|
||||
+ NULL, NULL, &THREAD_SELF->tid);
|
||||
+ if (child_pid < 0) {
|
||||
+ _exit(4);
|
||||
+ }
|
||||
+ if (!child_pid) {
|
||||
+ /* Child process. Wait for parent start tracing */
|
||||
+ kill(getpid(), SIGSTOP);
|
||||
+ /* Close descriptors and run free. */
|
||||
+ close(AFLCS_FORKSRV_FD);
|
||||
+ close(AFLCS_FORKSRV_FD + 1);
|
||||
+ return;
|
||||
+ }
|
||||
+
|
||||
+ /* Parent. */
|
||||
+ if (write(AFLCS_FORKSRV_FD + 1, &child_pid, 4) != 4) {
|
||||
+ _exit(5);
|
||||
+ }
|
||||
+
|
||||
+ /* Wait until SIGCONT is signaled. */
|
||||
+ if (waitpid(child_pid, &status, WCONTINUED) < 0) {
|
||||
+ _exit(6);
|
||||
+ }
|
||||
+ if (!WIFCONTINUED(status)) {
|
||||
+ /* Relay status to proxy. */
|
||||
+ if (write(AFLCS_FORKSRV_FD + 1, &status, 4) != 4) {
|
||||
+ _exit(7);
|
||||
+ }
|
||||
+ continue;
|
||||
+ }
|
||||
+ while (1) {
|
||||
+ /* Get status. */
|
||||
+ if (waitpid(child_pid, &status, WUNTRACED) < 0) {
|
||||
+ _exit(8);
|
||||
+ }
|
||||
+ /* Relay status to proxy. */
|
||||
+ if (write(AFLCS_FORKSRV_FD + 1, &status, 4) != 4) {
|
||||
+ _exit(9);
|
||||
+ }
|
||||
+ if (!(WIFSTOPPED(status) && WSTOPSIG(status) == SIGSTOP)) {
|
||||
+ /* The child process is exited. */
|
||||
+ break;
|
||||
+ }
|
||||
+ }
|
||||
+ }
|
||||
+}
|
||||
+
|
||||
+#endif /* AFLCS_RTLD */
|
||||
+
|
||||
/* Check that AT_SECURE=0, or that the passed name does not contain
|
||||
directories and is not overly long. Reject empty names
|
||||
unconditionally. */
|
||||
@@ -588,6 +681,12 @@ _dl_start (void *arg)
|
||||
# define ELF_MACHINE_START_ADDRESS(map, start) (start)
|
||||
#endif
|
||||
|
||||
+ /* AFL-CS-START */
|
||||
+#if AFLCS_RTLD
|
||||
+ AFLCS_RTLD_SNIPPET;
|
||||
+#endif
|
||||
+ /* AFL-CS-END */
|
||||
+
|
||||
return ELF_MACHINE_START_ADDRESS (GL(dl_ns)[LM_ID_BASE]._ns_loaded, entry);
|
||||
}
|
||||
}
|
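The injected fork server above only activates when `__AFLCS_ENABLE` is present in the environment, which `afl-cs-proxy` is expected to set. As a rough smoke test (my own suggestion, not part of the patch), running the patched binary with the variable set but without the proxy should make the loader bail out immediately, because the first write to the status pipe (fd 196) fails:

```bash
# Illustrative only: expect an immediate exit instead of normal startup.
__AFLCS_ENABLE=1 ./target.patched; echo "exit status: $?"
```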
@ -1,6 +1,6 @@
|
||||
# Custom Mutators
|
||||
|
||||
Custom mutators enhance and alter the mutation strategies of afl++.
|
||||
Custom mutators enhance and alter the mutation strategies of AFL++.
|
||||
For further information and documentation on how to write your own, read [the docs](../docs/custom_mutators.md).
|
||||
|
||||
## Examples
|
||||
@ -11,10 +11,11 @@ The `./examples` folder contains examples for custom mutators in python and C.
|
||||
|
||||
In `./rust`, you will find rust bindings, including a simple example in `./rust/example` and an example for structured fuzzing, based on lain, in`./rust/example_lain`.
|
||||
|
||||
## The afl++ Grammar Mutator
|
||||
## The AFL++ Grammar Mutator
|
||||
|
||||
If you use git to clone afl++, then the following will incorporate our
|
||||
If you use git to clone AFL++, then the following will incorporate our
|
||||
excellent grammar custom mutator:
|
||||
|
||||
```sh
|
||||
git submodule update --init
|
||||
```
|
||||
@ -40,7 +41,7 @@ Multiple custom mutators can be used by separating their paths with `:` in the e
|
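As the hunk above notes, several custom mutators can be chained by separating their paths with `:` in the environment variable. A small sketch; the library names are placeholders for whatever `.so` files you built:

```bash
export AFL_CUSTOM_MUTATOR_LIBRARY="./mutator_a.so:./mutator_b.so"
afl-fuzz -i in -o out -- ./target @@
```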
||||
|
||||
### Superion Mutators
|
||||
|
||||
Adrian Tiron ported the Superion grammar fuzzer to afl++, it is WIP and
|
||||
Adrian Tiron ported the Superion grammar fuzzer to AFL++, it is WIP and
|
||||
requires cmake (among other things):
|
||||
[https://github.com/adrian-rt/superion-mutator](https://github.com/adrian-rt/superion-mutator)
|
||||
|
||||
@ -52,8 +53,8 @@ transforms protobuf raw:
|
||||
https://github.com/bruce30262/libprotobuf-mutator_fuzzing_learning/tree/master/4_libprotobuf_aflpp_custom_mutator
|
||||
|
||||
has a transform function you need to fill for your protobuf format, however
|
||||
needs to be ported to the updated afl++ custom mutator API (not much work):
|
||||
needs to be ported to the updated AFL++ custom mutator API (not much work):
|
||||
https://github.com/thebabush/afl-libprotobuf-mutator
|
||||
|
||||
same as above but is for current afl++:
|
||||
same as above but is for current AFL++:
|
||||
https://github.com/P1umer/AFLplusplus-protobuf-mutator
|
||||
|
@ -352,7 +352,7 @@ uint8_t afl_custom_queue_get(my_mutator_t *data, const uint8_t *filename) {
|
||||
* @return if the file contents was modified return 1 (True), 0 (False)
|
||||
* otherwise
|
||||
*/
|
||||
uint8_t afl_custom_queue_new_entry(my_mutator_t * data,
|
||||
uint8_t afl_custom_queue_new_entry(my_mutator_t *data,
|
||||
const uint8_t *filename_new_queue,
|
||||
const uint8_t *filename_orig_queue) {
|
||||
|
||||
|
@ -72,6 +72,7 @@
|
||||
#include <stdio.h>
|
||||
#include <stdlib.h>
|
||||
#include <string.h>
|
||||
#include "alloc-inl.h"
|
||||
|
||||
/* Header that must be present at the beginning of every test case: */
|
||||
|
||||
@ -127,9 +128,11 @@ size_t afl_custom_post_process(post_state_t *data, unsigned char *in_buf,
|
||||
}
|
||||
|
||||
/* Allocate memory for new buffer, reusing previous allocation if
|
||||
possible. */
|
||||
possible. Note we have to use afl-fuzz's own realloc!
|
||||
Note that you should only do this if you need to grow the buffer,
|
||||
otherwise work with in_buf, and assign it to *out_buf instead. */
|
||||
|
||||
*out_buf = realloc(data->buf, len);
|
||||
*out_buf = afl_realloc(out_buf, len);
|
||||
|
||||
/* If we're out of memory, the most graceful thing to do is to return the
|
||||
original buffer and give up on modifying it. Let AFL handle OOM on its
|
||||
@ -142,9 +145,9 @@ size_t afl_custom_post_process(post_state_t *data, unsigned char *in_buf,
|
||||
|
||||
}
|
||||
|
||||
/* Copy the original data to the new location. */
|
||||
|
||||
memcpy(*out_buf, in_buf, len);
|
||||
if (len > strlen(HEADER))
|
||||
memcpy(*out_buf + strlen(HEADER), in_buf + strlen(HEADER),
|
||||
len - strlen(HEADER));
|
||||
|
||||
/* Insert the new header. */
|
||||
|
||||
|
@ -29,8 +29,8 @@
|
||||
#include <stdint.h>
|
||||
#include <string.h>
|
||||
#include <zlib.h>
|
||||
|
||||
#include <arpa/inet.h>
|
||||
#include "alloc-inl.h"
|
||||
|
||||
/* A macro to round an integer up to 4 kB. */
|
||||
|
||||
@ -70,9 +70,6 @@ size_t afl_custom_post_process(post_state_t *data, const unsigned char *in_buf,
|
||||
unsigned int len,
|
||||
const unsigned char **out_buf) {
|
||||
|
||||
unsigned char *new_buf = (unsigned char *)in_buf;
|
||||
unsigned int pos = 8;
|
||||
|
||||
/* Don't do anything if there's not enough room for the PNG header
|
||||
(8 bytes). */
|
||||
|
||||
@ -83,6 +80,22 @@ size_t afl_custom_post_process(post_state_t *data, const unsigned char *in_buf,
|
||||
|
||||
}
|
||||
|
||||
/* This is not a good way to do it, if you do not need to grow the buffer
|
||||
then just work with in_buf instead for speed reasons.
|
||||
But we want to show how to grow a buffer, so this is how it's done: */
|
||||
|
||||
unsigned int pos = 8;
|
||||
unsigned char *new_buf = afl_realloc(out_buf, UP4K(len));
|
||||
|
||||
if (!new_buf) {
|
||||
|
||||
*out_buf = in_buf;
|
||||
return len;
|
||||
|
||||
}
|
||||
|
||||
memcpy(new_buf, in_buf, len);
|
||||
|
||||
/* Minimum size of a zero-length PNG chunk is 12 bytes; if we
|
||||
don't have that, we can bail out. */
|
||||
|
||||
@ -111,33 +124,6 @@ size_t afl_custom_post_process(post_state_t *data, const unsigned char *in_buf,
|
||||
|
||||
if (real_cksum != file_cksum) {
|
||||
|
||||
/* First modification? Make a copy of the input buffer. Round size
|
||||
up to 4 kB to minimize the number of reallocs needed. */
|
||||
|
||||
if (new_buf == in_buf) {
|
||||
|
||||
if (len <= data->size) {
|
||||
|
||||
new_buf = data->buf;
|
||||
|
||||
} else {
|
||||
|
||||
new_buf = realloc(data->buf, UP4K(len));
|
||||
if (!new_buf) {
|
||||
|
||||
*out_buf = in_buf;
|
||||
return len;
|
||||
|
||||
}
|
||||
|
||||
data->buf = new_buf;
|
||||
data->size = UP4K(len);
|
||||
memcpy(new_buf, in_buf, len);
|
||||
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
*(uint32_t *)(new_buf + pos + 8 + chunk_len) = real_cksum;
|
||||
|
||||
}
|
||||
|
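Both post-processor examples above are compiled into shared objects and handed to afl-fuzz through the custom mutator environment variable. A hedged build-and-run sketch; the file names are illustrative and the flags mirror the other custom_mutators build scripts:

```bash
# -I../../include provides alloc-inl.h (afl_realloc) used by the examples.
cc -O3 -fPIC -shared -I../../include -o post_library.so post_library.c

AFL_CUSTOM_MUTATOR_LIBRARY=./post_library.so afl-fuzz -i in -o out -- ./target @@
```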
@ -1,19 +1,19 @@
|
||||
# GramaTron
|
||||
|
||||
Gramatron is a coverage-guided fuzzer that uses grammar automatons to perform
|
||||
grammar-aware fuzzing. Technical details about our framework are available
|
||||
in the [ISSTA'21 paper](https://nebelwelt.net/files/21ISSTA.pdf).
|
||||
The artifact to reproduce the experiments presented in the paper are present
|
||||
in `artifact/`. Instructions to run a sample campaign and incorporate new
|
||||
grammars is presented below:
|
||||
GramaTron is a coverage-guided fuzzer that uses grammar automatons to perform
|
||||
grammar-aware fuzzing. Technical details about our framework are available in
|
||||
the [ISSTA'21 paper](https://nebelwelt.net/files/21ISSTA.pdf). The artifact to
|
||||
reproduce the experiments presented in the paper are present in `artifact/`.
|
||||
Instructions to run a sample campaign and incorporate new grammars is presented
|
||||
below:
|
||||
|
||||
# Compiling
|
||||
## Compiling
|
||||
|
||||
Simply execute `./build_gramatron_mutator.sh`
|
||||
Execute `./build_gramatron_mutator.sh`.
|
||||
|
||||
# Running
|
||||
## Running
|
||||
|
||||
You have to set the grammar file to use with `GRAMMATRON_AUTOMATION`:
|
||||
You have to set the grammar file to use with `GRAMATRON_AUTOMATION`:
|
||||
|
||||
```
|
||||
export AFL_DISABLE_TRIM=1
|
||||
@ -23,23 +23,27 @@ export GRAMATRON_AUTOMATION=grammars/ruby/source_automata.json
|
||||
afl-fuzz -i in -o out -- ./target
|
||||
```
|
||||
|
||||
# Adding and testing a new grammar
|
||||
## Adding and testing a new grammar
|
||||
|
||||
- Specify in a JSON format for CFG. Examples are correspond `source.json` files
|
||||
- Specify the grammar as a JSON CFG. Examples are the corresponding `source.json` files.
|
||||
- Run the automaton generation script (in `src/gramfuzz-mutator/preprocess`)
|
||||
which will place the generated automaton in the same folder.
|
||||
```
|
||||
./preprocess/prep_automaton.sh <grammar_file> <start_symbol> [stack_limit]
|
||||
|
||||
Eg. ./preprocess/prep_automaton.sh ~/grammars/ruby/source.json PROGRAM
|
||||
```
|
||||
- If the grammar has no self-embedding rules then you do not need to pass the
|
||||
stack limit parameter. However, if it does have self-embedding rules then you
|
||||
```
|
||||
./preprocess/prep_automaton.sh <grammar_file> <start_symbol> [stack_limit]
|
||||
|
||||
E.g., ./preprocess/prep_automaton.sh ~/grammars/ruby/source.json PROGRAM
|
||||
```
|
||||
|
||||
- If the grammar has no self-embedding rules, then you do not need to pass the
|
||||
stack limit parameter. However, if it does have self-embedding rules, then you
|
||||
need to pass the stack limit parameter. We recommend starting with `5` and
|
||||
then increasing it if you need more complexity
|
||||
- To sanity-check that the automaton is generating inputs as expected you can use the `test` binary housed in `src/gramfuzz-mutator`
|
||||
```
|
||||
./test SanityCheck <automaton_file>
|
||||
then increasing it if you need more complexity.
|
||||
- To sanity-check that the automaton is generating inputs as expected, you can
|
||||
use the `test` binary housed in `src/gramfuzz-mutator`.
|
||||
|
||||
Eg. ./test SanityCheck ~/grammars/ruby/source_automata.json
|
||||
```
|
||||
```
|
||||
./test SanityCheck <automaton_file>
|
||||
|
||||
E.g., ./test SanityCheck ~/grammars/ruby/source_automata.json
|
||||
```
|
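Put together, a full GramaTron session following the steps above could look like this; the ruby grammar paths are the ones used as examples in this README, so substitute your own grammar:

```bash
# Build the automaton (stack limit 5 for a self-embedding grammar), check it, fuzz.
./preprocess/prep_automaton.sh ~/grammars/ruby/source.json PROGRAM 5
./test SanityCheck ~/grammars/ruby/source_automata.json

export AFL_DISABLE_TRIM=1
export GRAMATRON_AUTOMATION=~/grammars/ruby/source_automata.json
afl-fuzz -i in -o out -- ./target
```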
@ -11,7 +11,7 @@
|
||||
# Adapted for AFLplusplus by Dominik Maier <mail@dmnk.co>
|
||||
#
|
||||
# Copyright 2017 Battelle Memorial Institute. All rights reserved.
|
||||
# Copyright 2019-2020 AFLplusplus Project. All rights reserved.
|
||||
# Copyright 2019-2022 AFLplusplus Project. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
@ -49,6 +49,13 @@ if [ ! -f "../../config.h" ]; then
|
||||
|
||||
fi
|
||||
|
||||
if [ ! -f "../../src/afl-performance.o" ]; then
|
||||
|
||||
echo "[-] Error: you must build afl-fuzz first and not do a \"make clean\""
|
||||
exit 1
|
||||
|
||||
fi
|
||||
|
||||
PYTHONBIN=`command -v python3 || command -v python || command -v python2 || echo python3`
|
||||
MAKECMD=make
|
||||
TARCMD=tar
|
||||
@ -108,9 +115,9 @@ if [ $? -eq 0 ]; then
|
||||
git submodule update ./json-c 2>/dev/null # ignore errors
|
||||
else
|
||||
echo "[*] cloning json-c"
|
||||
test -d json-c || {
|
||||
test -d json-c/.git || {
|
||||
CNT=1
|
||||
while [ '!' -d json-c -a "$CNT" -lt 4 ]; do
|
||||
while [ '!' -d json-c/.git -a "$CNT" -lt 4 ]; do
|
||||
echo "Trying to clone json-c (attempt $CNT/3)"
|
||||
git clone "$JSONC_REPO"
|
||||
CNT=`expr "$CNT" + 1`
|
||||
@ -118,23 +125,25 @@ else
|
||||
}
|
||||
fi
|
||||
|
||||
test -d json-c || { echo "[-] not checked out, please install git or check your internet connection." ; exit 1 ; }
|
||||
test -d json-c/.git || { echo "[-] not checked out, please install git or check your internet connection." ; exit 1 ; }
|
||||
echo "[+] Got json-c."
|
||||
|
||||
cd "json-c" || exit 1
|
||||
echo "[*] Checking out $JSONC_VERSION"
|
||||
sh -c 'git stash && git stash drop' 1>/dev/null 2>/dev/null
|
||||
git checkout "$JSONC_VERSION" || exit 1
|
||||
sh autogen.sh || exit 1
|
||||
export CFLAGS=-fPIC
|
||||
./configure --disable-shared || exit 1
|
||||
make || exit 1
|
||||
cd ..
|
||||
test -e json-c/.libs/libjson-c.a || {
|
||||
cd "json-c" || exit 1
|
||||
echo "[*] Checking out $JSONC_VERSION"
|
||||
sh -c 'git stash && git stash drop' 1>/dev/null 2>/dev/null
|
||||
git checkout "$JSONC_VERSION" || exit 1
|
||||
sh autogen.sh || exit 1
|
||||
export CFLAGS=-fPIC
|
||||
./configure --disable-shared || exit 1
|
||||
make || exit 1
|
||||
cd ..
|
||||
}
|
||||
|
||||
echo
|
||||
echo
|
||||
echo "[+] Json-c successfully prepared!"
|
||||
echo "[+] Builing gramatron now."
|
||||
$CC -O3 -g -fPIC -Wno-unused-result -Wl,--allow-multiple-definition -I../../include -o gramatron.so -shared -I. -I/prg/dev/include gramfuzz.c gramfuzz-helpers.c gramfuzz-mutators.c gramfuzz-util.c hashmap.c json-c/.libs/libjson-c.a || exit 1
|
||||
$CC -O3 -g -fPIC -Wno-unused-result -Wl,--allow-multiple-definition -I../../include -o gramatron.so -shared -I. -I/prg/dev/include gramfuzz.c gramfuzz-helpers.c gramfuzz-mutators.c gramfuzz-util.c hashmap.c ../../src/afl-performance.o json-c/.libs/libjson-c.a || exit 1
|
||||
echo
|
||||
echo "[+] gramatron successfully built!"
|
||||
|
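Because the build script above links `../../src/afl-performance.o` into `gramatron.so`, it expects an already compiled AFL++ tree. A hedged outline of the build order, run from the gramatron custom mutator directory (paths follow the usual repository layout):

```bash
make -C ../..                    # build AFL++ first so src/afl-performance.o exists
./build_gramatron_mutator.sh     # then build gramatron.so in this directory
```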
@ -168,7 +168,7 @@ my_mutator_t *afl_custom_init(afl_state_t *afl, unsigned int seed) {
|
||||
|
||||
fprintf(stderr,
|
||||
"\nError: GrammaTron needs an automation json file set in "
|
||||
"AFL_GRAMATRON_AUTOMATON\n");
|
||||
"GRAMATRON_AUTOMATION\n");
|
||||
exit(-1);
|
||||
|
||||
}
|
||||
@ -211,7 +211,7 @@ size_t afl_custom_fuzz(my_mutator_t *data, uint8_t *buf, size_t buf_size,
|
||||
} else if (data->mut_idx == 2) { // Perform splice mutation
|
||||
|
||||
// we cannot use the supplied splice data so choose a new random file
|
||||
u32 tid = rand_below(global_afl, data->afl->queued_paths);
|
||||
u32 tid = rand_below(global_afl, data->afl->queued_items);
|
||||
struct queue_entry *q = data->afl->queue_buf[tid];
|
||||
|
||||
// Read the input representation for the splice candidate
|
||||
|
@ -1 +1 @@
|
||||
b79d51a
|
||||
ff4e5a2
|
||||
|
@ -14,7 +14,7 @@
|
||||
# <andreafioraldi@gmail.com>
|
||||
#
|
||||
# Copyright 2017 Battelle Memorial Institute. All rights reserved.
|
||||
# Copyright 2019-2020 AFLplusplus Project. All rights reserved.
|
||||
# Copyright 2019-2022 AFLplusplus Project. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
@ -109,9 +109,9 @@ if [ $? -eq 0 ]; then
|
||||
git submodule update ./grammar_mutator 2>/dev/null # ignore errors
|
||||
else
|
||||
echo "[*] cloning grammar mutator"
|
||||
test -d grammar_mutator || {
|
||||
test -d grammar_mutator/.git || {
|
||||
CNT=1
|
||||
while [ '!' -d grammar_mutator -a "$CNT" -lt 4 ]; do
|
||||
while [ '!' -d grammar_mutator/.git -a "$CNT" -lt 4 ]; do
|
||||
echo "Trying to clone grammar_mutator (attempt $CNT/3)"
|
||||
git clone "$GRAMMAR_REPO"
|
||||
CNT=`expr "$CNT" + 1`
|
||||
@ -119,15 +119,16 @@ else
|
||||
}
|
||||
fi
|
||||
|
||||
test -d grammar_mutator || { echo "[-] not checked out, please install git or check your internet connection." ; exit 1 ; }
|
||||
test -f grammar_mutator/.git || { echo "[-] not checked out, please install git or check your internet connection." ; exit 1 ; }
|
||||
echo "[+] Got grammar mutator."
|
||||
|
||||
cd "grammar_mutator" || exit 1
|
||||
echo "[*] Checking out $GRAMMAR_VERSION"
|
||||
git pull >/dev/null 2>&1
|
||||
sh -c 'git stash && git stash drop' 1>/dev/null 2>/dev/null
|
||||
git checkout "$GRAMMAR_VERSION" || exit 1
|
||||
echo "[*] Downloading antlr..."
|
||||
wget -c https://www.antlr.org/download/antlr-4.8-complete.jar
|
||||
wget -q https://www.antlr.org/download/antlr-4.8-complete.jar
|
||||
cd ..
|
||||
|
||||
echo
|
||||
|
@ -1,7 +1,7 @@
|
||||
# custum mutator: honggfuzz mangle
|
||||
|
||||
this is the honggfuzz mutator in mangle.c as a custom mutator
|
||||
module for afl++. It is the original mangle.c, mangle.h and honggfuzz.h
|
||||
module for AFL++. It is the original mangle.c, mangle.h and honggfuzz.h
|
||||
with a lot of mocking around it :-)
|
||||
|
||||
just type `make` to build
|
||||
|
10
custom_mutators/libafl_base/.gitignore
vendored
Normal file
@ -0,0 +1,10 @@
|
||||
# Generated by Cargo
|
||||
# will have compiled files and executables
|
||||
/target/
|
||||
|
||||
# Remove Cargo.lock from gitignore if creating an executable, leave it for libraries
|
||||
# More information here https://doc.rust-lang.org/cargo/guide/cargo-toml-vs-cargo-lock.html
|
||||
Cargo.lock
|
||||
|
||||
# These are backup files generated by rustfmt
|
||||
**/*.rs.bk
|
14
custom_mutators/libafl_base/Cargo.toml
Normal file
@ -0,0 +1,14 @@
|
||||
[package]
|
||||
name = "libafl_base"
|
||||
version = "0.1.0"
|
||||
edition = "2021"
|
||||
|
||||
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
|
||||
|
||||
[dependencies]
|
||||
libafl = { git = "https://github.com/AFLplusplus/LibAFL.git", rev = "62614ce1016c86e3f00f35b56399292ceabd486b" }
|
||||
custom_mutator = { path = "../rust/custom_mutator", features = ["afl_internals"] }
|
||||
serde = { version = "1.0", default-features = false, features = ["alloc"] } # serialization lib
|
||||
|
||||
[lib]
|
||||
crate-type = ["cdylib"]
|
9
custom_mutators/libafl_base/Makefile
Normal file
@ -0,0 +1,9 @@
|
||||
all: target/release/liblibafl_base.so
|
||||
cp target/release/liblibafl_base.so libafl_base.so
|
||||
|
||||
target/release/liblibafl_base.so: src/lib.rs
|
||||
cargo build --release
|
||||
|
||||
clean:
|
||||
cargo clean
|
||||
rm -f libafl_base.so
|
11
custom_mutators/libafl_base/README.md
Normal file
@ -0,0 +1,11 @@
|
||||
# libafl basic havoc + token mutator
|
||||
|
||||
This uses the [libafl](https://github.com/AFLplusplus/libafl) StdScheduledMutator with `havoc_mutations` and `token_mutations`.
|
||||
|
||||
Make sure to have [cargo installed](https://rustup.rs/) and just type `make` to build.
|
||||
|
||||
Run with:
|
||||
|
||||
```
|
||||
AFL_CUSTOM_MUTATOR_LIBRARY=custom_mutators/libafl_base/libafl_base.so AFL_CUSTOM_MUTATOR_ONLY=1 afl-fuzz ...
|
||||
```
|
238
custom_mutators/libafl_base/src/lib.rs
Normal file
@ -0,0 +1,238 @@
|
||||
#![cfg(unix)]
|
||||
#![allow(unused_variables)]
|
||||
|
||||
use serde::{Deserialize, Deserializer, Serialize, Serializer};
|
||||
use std::{
|
||||
cell::{RefCell, UnsafeCell},
|
||||
collections::HashMap,
|
||||
ffi::CStr,
|
||||
};
|
||||
|
||||
use custom_mutator::{afl_state, export_mutator, CustomMutator};
|
||||
|
||||
use libafl::{
|
||||
bolts::{rands::StdRand, serdeany::SerdeAnyMap, tuples::Merge},
|
||||
corpus::{Corpus, Testcase},
|
||||
inputs::{BytesInput, HasBytesVec},
|
||||
mutators::{
|
||||
scheduled::{havoc_mutations, tokens_mutations, StdScheduledMutator, Tokens},
|
||||
Mutator,
|
||||
},
|
||||
state::{HasCorpus, HasMaxSize, HasMetadata, HasRand, State},
|
||||
Error,
|
||||
};
|
||||
|
||||
const MAX_FILE: usize = 1 * 1024 * 1024;
|
||||
|
||||
static mut AFL: Option<&'static afl_state> = None;
|
||||
static mut CURRENT_ENTRY: Option<usize> = None;
|
||||
|
||||
fn afl() -> &'static afl_state {
|
||||
unsafe { AFL.unwrap() }
|
||||
}
|
||||
|
||||
#[derive(Default, Debug)]
|
||||
pub struct AFLCorpus {
|
||||
entries: UnsafeCell<HashMap<usize, RefCell<Testcase<BytesInput>>>>,
|
||||
}
|
||||
|
||||
impl Clone for AFLCorpus {
|
||||
fn clone(&self) -> Self {
|
||||
unsafe {
|
||||
Self {
|
||||
entries: UnsafeCell::new(self.entries.get().as_ref().unwrap().clone()),
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl Serialize for AFLCorpus {
|
||||
fn serialize<S>(&self, _serializer: S) -> Result<S::Ok, S::Error>
|
||||
where
|
||||
S: Serializer,
|
||||
{
|
||||
unimplemented!();
|
||||
}
|
||||
}
|
||||
|
||||
impl<'de> Deserialize<'de> for AFLCorpus {
|
||||
fn deserialize<D>(_deserializer: D) -> Result<Self, D::Error>
|
||||
where
|
||||
D: Deserializer<'de>,
|
||||
{
|
||||
unimplemented!();
|
||||
}
|
||||
}
|
||||
|
||||
impl Corpus<BytesInput> for AFLCorpus {
|
||||
#[inline]
|
||||
fn count(&self) -> usize {
|
||||
afl().queued_items as usize
|
||||
}
|
||||
|
||||
#[inline]
|
||||
fn add(&mut self, testcase: Testcase<BytesInput>) -> Result<usize, Error> {
|
||||
unimplemented!();
|
||||
}
|
||||
|
||||
#[inline]
|
||||
fn replace(&mut self, idx: usize, testcase: Testcase<BytesInput>) -> Result<(), Error> {
|
||||
unimplemented!();
|
||||
}
|
||||
|
||||
#[inline]
|
||||
fn remove(&mut self, idx: usize) -> Result<Option<Testcase<BytesInput>>, Error> {
|
||||
unimplemented!();
|
||||
}
|
||||
|
||||
#[inline]
|
||||
fn get(&self, idx: usize) -> Result<&RefCell<Testcase<BytesInput>>, Error> {
|
||||
unsafe {
|
||||
let entries = self.entries.get().as_mut().unwrap();
|
||||
entries.entry(idx).or_insert_with(|| {
|
||||
let queue_buf = std::slice::from_raw_parts_mut(afl().queue_buf, self.count());
|
||||
let entry = queue_buf[idx].as_mut().unwrap();
|
||||
let fname = CStr::from_ptr((entry.fname as *mut i8).as_ref().unwrap())
|
||||
.to_str()
|
||||
.unwrap()
|
||||
.to_owned();
|
||||
let mut testcase = Testcase::with_filename(BytesInput::new(vec![]), fname);
|
||||
*testcase.input_mut() = None;
|
||||
RefCell::new(testcase)
|
||||
});
|
||||
Ok(&self.entries.get().as_ref().unwrap()[&idx])
|
||||
}
|
||||
}
|
||||
|
||||
#[inline]
|
||||
fn current(&self) -> &Option<usize> {
|
||||
unsafe {
|
||||
CURRENT_ENTRY = Some(afl().current_entry as usize);
|
||||
&CURRENT_ENTRY
|
||||
}
|
||||
}
|
||||
|
||||
#[inline]
|
||||
fn current_mut(&mut self) -> &mut Option<usize> {
|
||||
unimplemented!();
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Serialize, Deserialize, Clone, Debug)]
|
||||
pub struct AFLState {
|
||||
rand: StdRand,
|
||||
corpus: AFLCorpus,
|
||||
metadata: SerdeAnyMap,
|
||||
max_size: usize,
|
||||
}
|
||||
|
||||
impl AFLState {
|
||||
pub fn new(seed: u32) -> Self {
|
||||
Self {
|
||||
rand: StdRand::with_seed(seed as u64),
|
||||
corpus: AFLCorpus::default(),
|
||||
metadata: SerdeAnyMap::new(),
|
||||
max_size: MAX_FILE,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl State for AFLState {}
|
||||
|
||||
impl HasRand for AFLState {
|
||||
type Rand = StdRand;
|
||||
|
||||
#[inline]
|
||||
fn rand(&self) -> &Self::Rand {
|
||||
&self.rand
|
||||
}
|
||||
|
||||
#[inline]
|
||||
fn rand_mut(&mut self) -> &mut Self::Rand {
|
||||
&mut self.rand
|
||||
}
|
||||
}
|
||||
|
||||
impl HasCorpus<BytesInput> for AFLState {
|
||||
type Corpus = AFLCorpus;
|
||||
|
||||
#[inline]
|
||||
fn corpus(&self) -> &Self::Corpus {
|
||||
&self.corpus
|
||||
}
|
||||
|
||||
#[inline]
|
||||
fn corpus_mut(&mut self) -> &mut Self::Corpus {
|
||||
&mut self.corpus
|
||||
}
|
||||
}
|
||||
|
||||
impl HasMetadata for AFLState {
|
||||
#[inline]
|
||||
fn metadata(&self) -> &SerdeAnyMap {
|
||||
&self.metadata
|
||||
}
|
||||
|
||||
#[inline]
|
||||
fn metadata_mut(&mut self) -> &mut SerdeAnyMap {
|
||||
&mut self.metadata
|
||||
}
|
||||
}
|
||||
|
||||
impl HasMaxSize for AFLState {
|
||||
fn max_size(&self) -> usize {
|
||||
self.max_size
|
||||
}
|
||||
|
||||
fn set_max_size(&mut self, max_size: usize) {
|
||||
self.max_size = max_size;
|
||||
}
|
||||
}
|
||||
|
||||
struct LibAFLBaseCustomMutator {
|
||||
state: AFLState,
|
||||
input: BytesInput,
|
||||
}
|
||||
|
||||
impl CustomMutator for LibAFLBaseCustomMutator {
|
||||
type Error = libafl::Error;
|
||||
|
||||
fn init(afl: &'static afl_state, seed: u32) -> Result<Self, Self::Error> {
|
||||
unsafe {
|
||||
AFL = Some(afl);
|
||||
let mut state = AFLState::new(seed);
|
||||
let extras = std::slice::from_raw_parts(afl.extras, afl.extras_cnt as usize);
|
||||
let mut tokens = vec![];
|
||||
for extra in extras {
|
||||
let data = std::slice::from_raw_parts(extra.data, extra.len as usize);
|
||||
tokens.push(data.to_vec());
|
||||
}
|
||||
if !tokens.is_empty() {
|
||||
state.add_metadata(Tokens::new(tokens));
|
||||
}
|
||||
Ok(Self {
|
||||
state,
|
||||
input: BytesInput::new(vec![]),
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
fn fuzz<'b, 's: 'b>(
|
||||
&'s mut self,
|
||||
buffer: &'b mut [u8],
|
||||
add_buff: Option<&[u8]>,
|
||||
max_size: usize,
|
||||
) -> Result<Option<&'b [u8]>, Self::Error> {
|
||||
self.state.set_max_size(max_size);
|
||||
|
||||
// TODO avoid copy
|
||||
self.input.bytes_mut().clear();
|
||||
self.input.bytes_mut().extend_from_slice(buffer);
|
||||
|
||||
let mut mutator = StdScheduledMutator::new(havoc_mutations().merge(tokens_mutations()));
|
||||
mutator.mutate(&mut self.state, &mut self.input, 0)?;
|
||||
Ok(Some(self.input.bytes()))
|
||||
}
|
||||
}
|
||||
|
||||
export_mutator!(LibAFLBaseCustomMutator);
|
@ -1086,6 +1086,7 @@ ATTRIBUTE_INTERFACE size_t LLVMFuzzerMutate(uint8_t *Data, size_t Size,
|
||||
size_t MaxSize) {
|
||||
|
||||
assert(fuzzer::F);
|
||||
fuzzer::F->GetMD().StartMutationSequence();
|
||||
size_t r = fuzzer::F->GetMD().DefaultMutate(Data, Size, MaxSize);
|
||||
#ifdef INTROSPECTION
|
||||
introspection_ptr = fuzzer::F->GetMD().WriteMutationSequence();
|
||||
|
@ -11,9 +11,11 @@ Note that this is currently a simple implementation and it is missing two featur
|
||||
* Dictionary support
|
||||
|
||||
To update the source, all that is needed is that FuzzerDriver.cpp has to receive
|
||||
|
||||
```
|
||||
#include "libfuzzer.inc"
|
||||
```
|
||||
|
||||
before the closing namespace bracket.
|
||||
|
||||
It is also libfuzzer.inc where the configuration of the libfuzzer mutations
|
||||
@ -21,4 +23,4 @@ are done.
|
||||
|
||||
> Original repository: https://github.com/llvm/llvm-project
|
||||
> Path: compiler-rt/lib/fuzzer/*.{h|cpp}
|
||||
> Source commit: df3e903655e2499968fc7af64fb5fa52b2ee79bb
|
||||
> Source commit: df3e903655e2499968fc7af64fb5fa52b2ee79bb
|
@ -2,7 +2,7 @@
|
||||
|
||||
extern "C" ATTRIBUTE_INTERFACE void
|
||||
LLVMFuzzerMyInit(int (*Callback)(const uint8_t *Data, size_t Size), unsigned int Seed) {
|
||||
Random Rand(Seed);
|
||||
auto *Rand = new Random(Seed);
|
||||
FuzzingOptions Options;
|
||||
Options.Verbosity = 3;
|
||||
Options.MaxLen = 1024000;
|
||||
@ -30,7 +30,7 @@ LLVMFuzzerMyInit(int (*Callback)(const uint8_t *Data, size_t Size), unsigned int
|
||||
struct EntropicOptions Entropic;
|
||||
Entropic.Enabled = Options.Entropic;
|
||||
EF = new ExternalFunctions();
|
||||
auto *MD = new MutationDispatcher(Rand, Options);
|
||||
auto *MD = new MutationDispatcher(*Rand, Options);
|
||||
auto *Corpus = new InputCorpus(Options.OutputCorpus, Entropic);
|
||||
auto *F = new Fuzzer(Callback, *Corpus, *MD, Options);
|
||||
}
|
||||
|
@ -99,10 +99,12 @@ extern "C" size_t afl_custom_fuzz(MyMutator *mutator, // return value from afl_c
|
||||
std::string s = ProtoToData(*p);
|
||||
// Copy to a new buffer ( mutated_out )
|
||||
size_t mutated_size = s.size() <= max_size ? s.size() : max_size; // check if raw data's size is larger than max_size
|
||||
uint8_t *mutated_out = new uint8_t[mutated_size+1];
|
||||
memcpy(mutated_out, s.c_str(), mutated_size); // copy the mutated data
|
||||
|
||||
delete[] mutator->mutated_out;
|
||||
mutator->mutated_out = new uint8_t[mutated_size];
|
||||
memcpy(mutator->mutated_out, s.c_str(), mutated_size); // copy the mutated data
|
||||
// Assign the mutated data and return mutated_size
|
||||
*out_buf = mutated_out;
|
||||
*out_buf = mutator->mutated_out;
|
||||
return mutated_size;
|
||||
}
|
||||
|
||||
|
@ -2,4 +2,9 @@
|
||||
#include "test.pb.h"
|
||||
|
||||
class MyMutator : public protobuf_mutator::Mutator {
|
||||
public:
|
||||
uint8_t *mutated_out = nullptr;
|
||||
~MyMutator() {
|
||||
delete[] mutated_out;
|
||||
}
|
||||
};
|
||||
|
@ -4473,6 +4473,10 @@ static word prim_sys(word op, word a, word b, word c) {
|
||||
FD_CLOEXEC,
|
||||
F_DUPFD,
|
||||
F_DUPFD_CLOEXEC,
|
||||
#if defined(F_DUP2FD)
|
||||
F_DUP2FD,
|
||||
F_DUP2FD_CLOEXEC,
|
||||
#endif
|
||||
F_GETFD,
|
||||
F_SETFD,
|
||||
F_GETFL,
|
||||
|
@ -53,7 +53,11 @@ pub trait RawCustomMutator {
|
||||
1
|
||||
}
|
||||
|
||||
fn queue_new_entry(&mut self, filename_new_queue: &Path, _filename_orig_queue: Option<&Path>) -> bool {
|
||||
fn queue_new_entry(
|
||||
&mut self,
|
||||
filename_new_queue: &Path,
|
||||
_filename_orig_queue: Option<&Path>,
|
||||
) -> bool {
|
||||
false
|
||||
}
|
||||
|
||||
@ -86,7 +90,6 @@ pub mod wrappers {
|
||||
|
||||
use std::{
|
||||
any::Any,
|
||||
convert::TryInto,
|
||||
ffi::{c_void, CStr, OsStr},
|
||||
mem::ManuallyDrop,
|
||||
os::{raw::c_char, unix::ffi::OsStrExt},
|
||||
@ -178,6 +181,10 @@ pub mod wrappers {
|
||||
}
|
||||
|
||||
/// Internal function used in the macro
|
||||
/// # Safety
|
||||
///
|
||||
/// May dereference all passed-in pointers.
|
||||
/// Should not be called manually, but will be called by `afl-fuzz`
|
||||
pub unsafe fn afl_custom_fuzz_<M: RawCustomMutator>(
|
||||
data: *mut c_void,
|
||||
buf: *mut u8,
|
||||
@ -201,13 +208,10 @@ pub mod wrappers {
|
||||
} else {
|
||||
Some(slice::from_raw_parts(add_buf, add_buf_size))
|
||||
};
|
||||
match context
|
||||
.mutator
|
||||
.fuzz(buff_slice, add_buff_slice, max_size.try_into().unwrap())
|
||||
{
|
||||
match context.mutator.fuzz(buff_slice, add_buff_slice, max_size) {
|
||||
Some(buffer) => {
|
||||
*out_buf = buffer.as_ptr();
|
||||
buffer.len().try_into().unwrap()
|
||||
buffer.len()
|
||||
}
|
||||
None => {
|
||||
// return the input buffer with 0-length to let AFL skip this mutation attempt
|
||||
@ -222,6 +226,10 @@ pub mod wrappers {
|
||||
}
|
||||
|
||||
/// Internal function used in the macro
|
||||
///
|
||||
/// # Safety
|
||||
/// Dereferences the passed-in pointers up to `buf_size` bytes.
|
||||
/// Should not be called directly.
|
||||
pub unsafe fn afl_custom_fuzz_count_<M: RawCustomMutator>(
|
||||
data: *mut c_void,
|
||||
buf: *const u8,
|
||||
@ -266,7 +274,7 @@ pub mod wrappers {
|
||||
};
|
||||
context
|
||||
.mutator
|
||||
.queue_new_entry(filename_new_queue, filename_orig_queue);
|
||||
.queue_new_entry(filename_new_queue, filename_orig_queue)
|
||||
}) {
|
||||
Ok(ret) => ret,
|
||||
Err(err) => panic_handler("afl_custom_queue_new_entry", err),
|
||||
@ -274,6 +282,10 @@ pub mod wrappers {
|
||||
}
|
||||
|
||||
/// Internal function used in the macro
|
||||
///
|
||||
/// # Safety
|
||||
/// May dereference the passed-in `data` pointer.
|
||||
/// Should not be called directly.
|
||||
pub unsafe fn afl_custom_deinit_<M: RawCustomMutator>(data: *mut c_void) {
|
||||
match catch_unwind(|| {
|
||||
// drop the context
|
||||
@ -346,6 +358,36 @@ pub mod wrappers {
|
||||
}
|
||||
}
|
||||
|
||||
/// An exported macro to define afl_custom_init, meant for internal usage
|
||||
#[cfg(feature = "afl_internals")]
|
||||
#[macro_export]
|
||||
macro_rules! _define_afl_custom_init {
|
||||
($mutator_type:ty) => {
|
||||
#[no_mangle]
|
||||
pub extern "C" fn afl_custom_init(
|
||||
afl: ::std::option::Option<&'static $crate::afl_state>,
|
||||
seed: ::std::os::raw::c_uint,
|
||||
) -> *const ::std::os::raw::c_void {
|
||||
$crate::wrappers::afl_custom_init_::<$mutator_type>(afl, seed as u32)
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
/// An exported macro to define afl_custom_init, meant for internal usage
|
||||
#[cfg(not(feature = "afl_internals"))]
|
||||
#[macro_export]
|
||||
macro_rules! _define_afl_custom_init {
|
||||
($mutator_type:ty) => {
|
||||
#[no_mangle]
|
||||
pub extern "C" fn afl_custom_init(
|
||||
_afl: *const ::std::os::raw::c_void,
|
||||
seed: ::std::os::raw::c_uint,
|
||||
) -> *const ::std::os::raw::c_void {
|
||||
$crate::wrappers::afl_custom_init_::<$mutator_type>(seed as u32)
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
/// exports the given Mutator as a custom mutator as the C interface that AFL++ expects.
|
||||
/// It is not possible to call this macro multiple times, because it would define the custom mutator symbols multiple times.
|
||||
/// # Example
|
||||
@ -369,37 +411,19 @@ pub mod wrappers {
|
||||
#[macro_export]
|
||||
macro_rules! export_mutator {
|
||||
($mutator_type:ty) => {
|
||||
#[cfg(feature = "afl_internals")]
|
||||
#[no_mangle]
|
||||
pub extern "C" fn afl_custom_init(
|
||||
afl: ::std::option::Option<&'static $crate::afl_state>,
|
||||
seed: ::std::os::raw::c_uint,
|
||||
) -> *const ::std::os::raw::c_void {
|
||||
$crate::wrappers::afl_custom_init_::<$mutator_type>(afl, seed as u32)
|
||||
}
|
||||
|
||||
#[cfg(not(feature = "afl_internals"))]
|
||||
#[no_mangle]
|
||||
pub extern "C" fn afl_custom_init(
|
||||
_afl: *const ::std::os::raw::c_void,
|
||||
seed: ::std::os::raw::c_uint,
|
||||
) -> *const ::std::os::raw::c_void {
|
||||
$crate::wrappers::afl_custom_init_::<$mutator_type>(seed as u32)
|
||||
}
|
||||
$crate::_define_afl_custom_init!($mutator_type);
|
||||
|
||||
#[no_mangle]
|
||||
pub extern "C" fn afl_custom_fuzz_count(
|
||||
pub unsafe extern "C" fn afl_custom_fuzz_count(
|
||||
data: *mut ::std::os::raw::c_void,
|
||||
buf: *const u8,
|
||||
buf_size: usize,
|
||||
) -> u32 {
|
||||
unsafe {
|
||||
$crate::wrappers::afl_custom_fuzz_count_::<$mutator_type>(data, buf, buf_size)
|
||||
}
|
||||
$crate::wrappers::afl_custom_fuzz_count_::<$mutator_type>(data, buf, buf_size)
|
||||
}
|
||||
|
||||
#[no_mangle]
|
||||
pub extern "C" fn afl_custom_fuzz(
|
||||
pub unsafe extern "C" fn afl_custom_fuzz(
|
||||
data: *mut ::std::os::raw::c_void,
|
||||
buf: *mut u8,
|
||||
buf_size: usize,
|
||||
@ -408,17 +432,15 @@ macro_rules! export_mutator {
|
||||
add_buf_size: usize,
|
||||
max_size: usize,
|
||||
) -> usize {
|
||||
unsafe {
|
||||
$crate::wrappers::afl_custom_fuzz_::<$mutator_type>(
|
||||
data,
|
||||
buf,
|
||||
buf_size,
|
||||
out_buf,
|
||||
add_buf,
|
||||
add_buf_size,
|
||||
max_size,
|
||||
)
|
||||
}
|
||||
$crate::wrappers::afl_custom_fuzz_::<$mutator_type>(
|
||||
data,
|
||||
buf,
|
||||
buf_size,
|
||||
out_buf,
|
||||
add_buf,
|
||||
add_buf_size,
|
||||
max_size,
|
||||
)
|
||||
}
|
||||
|
||||
#[no_mangle]
|
||||
@ -426,7 +448,7 @@ macro_rules! export_mutator {
|
||||
data: *mut ::std::os::raw::c_void,
|
||||
filename_new_queue: *const ::std::os::raw::c_char,
|
||||
filename_orig_queue: *const ::std::os::raw::c_char,
|
||||
) {
|
||||
) -> bool {
|
||||
$crate::wrappers::afl_custom_queue_new_entry_::<$mutator_type>(
|
||||
data,
|
||||
filename_new_queue,
|
||||
@ -458,8 +480,8 @@ macro_rules! export_mutator {
|
||||
}
|
||||
|
||||
#[no_mangle]
|
||||
pub extern "C" fn afl_custom_deinit(data: *mut ::std::os::raw::c_void) {
|
||||
unsafe { $crate::wrappers::afl_custom_deinit_::<$mutator_type>(data) }
|
||||
pub unsafe extern "C" fn afl_custom_deinit(data: *mut ::std::os::raw::c_void) {
|
||||
$crate::wrappers::afl_custom_deinit_::<$mutator_type>(data)
|
||||
}
|
||||
};
|
||||
}
|
||||
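A crate that invokes the `export_mutator!` macro above builds into a `cdylib` that afl-fuzz loads like any C custom mutator. A sketch of building and loading such a crate; the library name is a placeholder that depends on your crate name and cargo layout:

```bash
cargo build --release
AFL_CUSTOM_MUTATOR_LIBRARY=./target/release/libmy_mutator.so \
    afl-fuzz -i in -o out -- ./target_binary @@
```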
@ -544,8 +566,8 @@ pub trait CustomMutator {
|
||||
&mut self,
|
||||
filename_new_queue: &Path,
|
||||
filename_orig_queue: Option<&Path>,
|
||||
) -> Result<(), Self::Error> {
|
||||
Ok(())
|
||||
) -> Result<bool, Self::Error> {
|
||||
Ok(false)
|
||||
}
|
||||
|
||||
fn queue_get(&mut self, filename: &Path) -> Result<bool, Self::Error> {
|
||||
@ -619,11 +641,16 @@ where
|
||||
}
|
||||
}
|
||||
|
||||
fn queue_new_entry(&mut self, filename_new_queue: &Path, filename_orig_queue: Option<&Path>) -> bool {
|
||||
fn queue_new_entry(
|
||||
&mut self,
|
||||
filename_new_queue: &Path,
|
||||
filename_orig_queue: Option<&Path>,
|
||||
) -> bool {
|
||||
match self.queue_new_entry(filename_new_queue, filename_orig_queue) {
|
||||
Ok(r) => r,
|
||||
Err(e) => {
|
||||
Self::handle_error(e);
|
||||
false
|
||||
}
|
||||
}
|
||||
}
|
||||
@ -698,16 +725,14 @@ mod default_mutator_describe {
|
||||
fn truncate_str_unicode_safe(s: &str, max_len: usize) -> &str {
|
||||
if s.len() <= max_len {
|
||||
s
|
||||
} else if let Some((last_index, _)) = s
|
||||
.char_indices()
|
||||
.take_while(|(index, _)| *index <= max_len)
|
||||
.last()
|
||||
{
|
||||
&s[..last_index]
|
||||
} else {
|
||||
if let Some((last_index, _)) = s
|
||||
.char_indices()
|
||||
.take_while(|(index, _)| *index <= max_len)
|
||||
.last()
|
||||
{
|
||||
&s[..last_index]
|
||||
} else {
|
||||
""
|
||||
}
|
||||
""
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -1,6 +1,9 @@
|
||||
# custum mutator: symcc
|
||||
|
||||
This uses the excellent symcc to find new paths into the target.
|
||||
This uses symcc to find new paths into the target.
|
||||
|
||||
Note that this is a just a proof of concept example! It is better to use
|
||||
the fuzzing helpers of symcc, symqemu, Fuzzolic, etc. rather than this.
|
||||
|
||||
To use this custom mutator follow the steps in the symcc repository
|
||||
[https://github.com/eurecom-s3/symcc/](https://github.com/eurecom-s3/symcc/)
|
||||
|
@ -129,7 +129,7 @@ uint8_t afl_custom_queue_new_entry(my_mutator_t * data,
|
||||
|
||||
int pid = fork();
|
||||
|
||||
if (pid == -1) return;
|
||||
if (pid == -1) return 0;
|
||||
|
||||
if (pid) {
|
||||
|
||||
@ -147,7 +147,7 @@ uint8_t afl_custom_queue_new_entry(my_mutator_t * data,
|
||||
if (r <= 0) {
|
||||
|
||||
close(pipefd[1]);
|
||||
return;
|
||||
return 0;
|
||||
|
||||
}
|
||||
|
||||
|
@ -1,20 +1,18 @@
|
||||
# AFL dictionaries
|
||||
# AFL++ dictionaries
|
||||
|
||||
(See [../README.md](../README.md) for the general instruction manual.)
|
||||
For the general instruction manual, see [docs/README.md](../docs/README.md).
|
||||
|
||||
This subdirectory contains a set of dictionaries that can be used in
|
||||
conjunction with the -x option to allow the fuzzer to effortlessly explore the
|
||||
grammar of some of the more verbose data formats or languages. The basic
|
||||
principle behind the operation of fuzzer dictionaries is outlined in section 10
|
||||
of the "main" README.md for the project.
|
||||
This subdirectory contains a set of dictionaries that can be used in conjunction
|
||||
with the -x option to allow the fuzzer to effortlessly explore the grammar of
|
||||
some of the more verbose data formats or languages.
|
||||
|
||||
These sets were done by Michal Zalewski, various contributors, and imported
|
||||
from oss-fuzz, go-fuzz and libfuzzer.
|
||||
These sets were done by Michal Zalewski, various contributors, and imported from
|
||||
oss-fuzz, go-fuzz and libfuzzer.
|
||||
|
||||
Custom dictionaries can be added at will. They should consist of a
|
||||
reasonably-sized set of rudimentary syntax units that the fuzzer will then try
|
||||
to clobber together in various ways. Snippets between 2 and 16 bytes are
|
||||
usually the sweet spot.
|
||||
to clobber together in various ways. Snippets between 2 and 16 bytes are usually
|
||||
the sweet spot.
|
||||
|
||||
Custom dictionaries can be created in two ways:
|
||||
|
||||
@ -36,9 +34,9 @@ In the file mode, every name field can be optionally followed by @<num>, e.g.:
|
||||
`keyword_foo@1 = "foo"`
|
||||
|
||||
Such entries will be loaded only if the requested dictionary level is equal or
|
||||
higher than this number. The default level is zero; a higher value can be set
|
||||
by appending @<num> to the dictionary file name, like so:
|
||||
higher than this number. The default level is zero; a higher value can be set by
|
||||
appending @<num> to the dictionary file name, like so:
|
||||
|
||||
`-x path/to/dictionary.dct@2`
|
||||
|
||||
Good examples of dictionaries can be found in xml.dict and png.dict.
|
||||
Good examples of dictionaries can be found in xml.dict and png.dict.
|
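To make the `@<num>` level mechanism described above concrete, here is a hypothetical two-entry dictionary and a matching invocation; the file name and tokens are made up for illustration:

```bash
cat > demo.dict << 'EOF'
keyword_foo="foo"
keyword_bar@1="bar"
EOF

# keyword_bar is only loaded because the dictionary is requested at level 1.
afl-fuzz -x demo.dict@1 -i in -o out -- ./target @@
```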
@ -1,17 +1,142 @@
|
||||
# Changelog
|
||||
|
||||
This is the list of all noteworthy changes made in every public release of
|
||||
the tool. See README.md for the general instruction manual.
|
||||
This is the list of all noteworthy changes made in every public
|
||||
release of the tool. See README.md for the general instruction manual.
|
||||
|
||||
## Staying informed
|
||||
|
||||
Want to stay in the loop on major new features? Join our mailing list by
|
||||
sending a mail to <afl-users+subscribe@googlegroups.com>.
|
||||
|
||||
### Version ++3.15a (dev)
|
||||
- added the very good grammar mutator "GramaTron" to the custom_mutators
|
||||
### Version ++4.02c (release)
|
||||
- afl-cc:
|
||||
- important fix for the default pcguard mode when LLVM IR vector
|
||||
selects are produced, thanks to @juppytt for reporting!
|
||||
- gcc_plugin:
|
||||
- Adacore submitted CMPLOG support to the gcc_plugin! :-)
|
||||
- llvm_mode:
|
||||
- laf cmp splitting fixed for more comparison types
|
||||
- frida_mode:
|
||||
- now works on Android!
|
||||
- afl-fuzz:
|
||||
- change post_process hook to allow returning NULL and 0 length to
|
||||
tell afl-fuzz to skip this mutated input
|
||||
|
||||
|
||||
### Version ++4.01c (release)
|
||||
- fixed */build_...sh scripts to work outside of git
|
||||
- new custom_mutator: libafl with token fuzzing :)
|
||||
- afl-fuzz:
|
||||
- when you just want to compile once and set CMPLOG, then just
|
||||
set -c 0 to tell afl-fuzz that the fuzzing binary is also for
|
||||
CMPLOG.
|
||||
- new commandline options -g/G to set min/max length of generated
|
||||
fuzz inputs
|
||||
- you can now set the time for syncing to other fuzzers with
|
||||
AFL_SYNC_TIME
|
||||
- reintroduced AFL_PERSISTENT and AFL_DEFER_FORKSRV to allow
|
||||
persistent mode and manual forkserver support if these are not
|
||||
in the target binary (e.g. are in a shared library)
|
||||
- add AFL_EARLY_FORKSERVER to install the forkserver as early as
|
||||
possible in the target (for afl-gcc-fast/afl-clang-fast/
|
||||
afl-clang-lto)
|
||||
- "saved timeouts" was wrong information, timeouts are still thrown
|
||||
away by default even if they have new coverage (hangs are always
|
||||
kept), unless AFL_KEEP_TIMEOUTS is set
|
||||
- AFL never implemented auto token inserts (but user token inserts,
|
||||
user token overwrite and auto token overwrite), added now!
|
||||
- fixed a mutation type in havoc mode
|
||||
- Mopt fix to always select the correct algorithm
|
||||
- fix effector map calculation (deterministic mode)
|
||||
- fix custom mutator post_process functionality
|
||||
- document and auto-activate pizza mode on condition
|
||||
- afl-cc:
|
||||
- due to a bug in lld of llvm 15, LTO instrumentation won't work at the moment :-(
|
||||
- converted all passes to use the new llvm pass manager for llvm 11+
|
||||
- AFL++ PCGUARD mode is not available for 10.0.1 anymore (11+ only)
|
||||
- trying to stay on top on all these #$&§!! changes in llvm 15 ...
|
||||
- frida_mode:
|
||||
- update to new frida release, handles now c++ throw/catch
|
||||
- unicorn_mode:
|
||||
- update unicorn engine, fix C example
|
||||
- utils:
|
||||
- removed optimin because it loses coverage due to a bug and is
|
||||
unmaintained :-(
|
||||
|
||||
|
||||
### Version ++4.00c (release)
|
||||
- complete documentation restructuring, made possible by Google Season
|
||||
of Docs :) thank you Jana!
|
||||
- we renamed several UI and fuzzer_stat entries to be more precise,
|
||||
e.g. "unique crashes" -> "saved crashes", "total paths" ->
|
||||
"corpus count", "current path" -> "current item".
|
||||
This might need changing custom scripting!
|
||||
- Nyx mode (full system emulation with snapshot capability) has been
|
||||
added - thanks to @schumilo and @eqv!
|
||||
- unicorn_mode:
|
||||
- Moved to unicorn2! by Ziqiao Kong (@lazymio)
|
||||
- Faster, more accurate emulation (newer QEMU base), risc-v support
|
||||
- removed indirections in rust callbacks
|
||||
- new binary-only fuzzing mode: coresight_mode for aarch64 CPUs :)
|
||||
thanks to RICSecLab submitting!
|
||||
- if instrumented libraries are dlopen()'ed after the forkserver you
|
||||
will now see a crash. Before you would have colliding coverage.
|
||||
We changed this to force fixing a broken setup rather than allowing
|
||||
ineffective fuzzing.
|
||||
See docs/best_practices.md how to fix such setups.
|
||||
- afl-fuzz:
|
||||
- cmplog binaries will need to be recompiled for this version
|
||||
(it is better!)
|
||||
- fix a regression introduced in 3.10 that resulted in less
|
||||
coverage being detected. thanks to Collin May for reporting!
|
||||
- ensure all spawned targets are killed on exit
|
||||
- added AFL_IGNORE_PROBLEMS, plus checks to identify and abort on
|
||||
incorrect LTO usage setups and enhanced the READMEs for better
|
||||
information on how to deal with instrumenting libraries
|
||||
- fix -n dumb mode (nobody should use this mode though)
|
||||
- fix stability issue with LTO and cmplog
|
||||
- better banner
|
||||
- more effective cmplog mode
|
||||
- more often update the UI when in input2stage mode
|
||||
- qemu_mode/unicorn_mode: fixed OOB write when using libcompcov,
|
||||
thanks to kotee4ko for reporting!
|
||||
- frida_mode:
|
||||
- better performance, bug fixes
|
||||
- David Carlier added Android support :)
|
||||
- afl-showmap, afl-tmin and afl-analyze:
|
||||
- honor persistent mode for more speed. thanks to dloffre-snl
|
||||
for reporting!
|
||||
- fix bug where targets are not killed on timeouts
|
||||
- moved hidden afl-showmap -A option to -H to be used for
|
||||
coresight_mode
|
||||
- Prevent accidentally killing non-afl/fuzz services when aborting
|
||||
afl-showmap and other tools.
|
||||
- afl-cc:
|
||||
- detect overflow reads on initial input buffer for asan
|
||||
- new cmplog mode (incompatible with older afl++ versions)
|
||||
- support llvm IR select instrumentation for default PCGUARD and LTO
|
||||
- fix for shared linking on MacOS
|
||||
- better selective instrumentation AFL_LLVM_{ALLOW|DENY}LIST
|
||||
on filename matching (requires llvm 11 or newer)
|
||||
- fixed a potential crash in targets for LAF string handling
|
||||
- fixed a bad assert in LAF split switches
|
||||
- added AFL_USE_TSAN thread sanitizer support
|
||||
- llvm and LTO mode modified to work with new llvm 14-dev (again.)
|
||||
- fix for AFL_REAL_LD
|
||||
- more -z defs filtering
|
||||
- make -v without options work
|
||||
- added the very good grammar mutator "GramaTron" to the
|
||||
custom_mutators
|
||||
- added optimin, a faster and better corpus minimizer by
|
||||
Adrian Herrera. Thank you!
|
||||
- added afl-persistent-config script to perform permanent system
|
||||
configuration settings for fuzzing, for Linux and macOS.
|
||||
thanks to jhertz!
|
||||
- added xml, curl & exotic string functions to llvm dictionary feature
|
||||
- fix AFL_PRELOAD issues on MacOS
|
||||
- removed utils/afl_frida because frida_mode/ is now so much better
|
||||
- added uninstall target to makefile (todo: update new readme!)
|
||||
|
||||
### Version ++3.14c (release)
|
||||
- afl-fuzz:
|
||||
- fix -F when a '/' was part of the parameter
|
||||
@ -30,7 +155,7 @@ sending a mail to <afl-users+subscribe@googlegroups.com>.
|
||||
- Fix to instrument global namespace functions in c++
|
||||
- Fix for llvm 13
|
||||
- support partial linking
|
||||
- do honor AFL_LLVM_{ALLOW/DENY}LIST for LTO autodictionary and DICT2FILE
|
||||
- do honor AFL_LLVM_{ALLOW/DENY}LIST for LTO autodictionary and DICT2FILE
|
||||
- We do support llvm versions from 3.8 to 5.0 again
|
||||
- frida_mode:
|
||||
- several fixes for cmplog
|
||||
@ -74,7 +199,7 @@ sending a mail to <afl-users+subscribe@googlegroups.com>.
|
||||
- on a crashing seed potentially the wrong input was disabled
|
||||
- added AFL_EXIT_ON_SEED_ISSUES env that will exit if a seed in
|
||||
-i dir crashes the target or results in a timeout. By default
|
||||
afl++ ignores these and uses them for splicing instead.
|
||||
AFL++ ignores these and uses them for splicing instead.
|
||||
- added AFL_EXIT_ON_TIME env that will make afl-fuzz exit fuzzing
|
||||
after no new paths have been found for n seconds
|
||||
- when AFL_FAST_CAL is set a variable path will now be calibrated
|
||||
@ -228,7 +353,7 @@ sending a mail to <afl-users+subscribe@googlegroups.com>.
|
||||
- Updated utils/afl_frida to be 5% faster, 7% on x86_x64
|
||||
- Added `AFL_KILL_SIGNAL` env variable (thanks @v-p-b)
|
||||
- @Edznux added a nice documentation on how to use rpc.statsd with
|
||||
afl++ in docs/rpc_statsd.md, thanks!
|
||||
AFL++ in docs/rpc_statsd.md, thanks!
|
||||
|
||||
### Version ++3.00c (release)
|
||||
- llvm_mode/ and gcc_plugin/ moved to instrumentation/
|
||||
@ -284,7 +409,7 @@ sending a mail to <afl-users+subscribe@googlegroups.com>.
|
||||
- custom mutators
|
||||
- added a new custom mutator: symcc -> https://github.com/eurecom-s3/symcc/
|
||||
- added a new custom mutator: libfuzzer that integrates libfuzzer mutations
|
||||
- Our afl++ Grammar-Mutator is now better integrated into custom_mutators/
|
||||
- Our AFL++ Grammar-Mutator is now better integrated into custom_mutators/
|
||||
- added INTROSPECTION support for custom modules
|
||||
- python fuzz function was not optional, fixed
|
||||
- some python mutator speed improvements
|
||||
@ -295,7 +420,7 @@ sending a mail to <afl-users+subscribe@googlegroups.com>.
|
||||
|
||||
|
||||
### Version ++2.68c (release)
|
||||
- added the GSoC excellent afl++ grammar mutator by Shengtuo to our
|
||||
- added the GSoC excellent AFL++ grammar mutator by Shengtuo to our
|
||||
custom_mutators/ (see custom_mutators/README.md) - or get it here:
|
||||
https://github.com/AFLplusplus/Grammar-Mutator
|
||||
- a few QOL changes for Apple and its outdated gmake
|
||||
@ -318,12 +443,12 @@ sending a mail to <afl-users+subscribe@googlegroups.com>.
|
||||
- llvm_mode:
|
||||
- ported SanCov to LTO, and made it the default for LTO. better
|
||||
instrumentation locations
|
||||
- Further llvm 12 support (fast moving target like afl++ :-) )
|
||||
- Further llvm 12 support (fast moving target like AFL++ :-) )
|
||||
- deprecated LLVM SKIPSINGLEBLOCK env environment
|
||||
|
||||
|
||||
### Version ++2.67c (release)
|
||||
- Support for improved afl++ snapshot module:
|
||||
- Support for improved AFL++ snapshot module:
|
||||
https://github.com/AFLplusplus/AFL-Snapshot-LKM
|
||||
- Due to the instrumentation needing more memory, the initial memory sizes
|
||||
for -m have been increased
|
||||
@ -425,7 +550,7 @@ sending a mail to <afl-users+subscribe@googlegroups.com>.
|
||||
files/stdin) - 10-100% performance increase
|
||||
- General support for 64 bit PowerPC, RiscV, Sparc etc.
|
||||
- fix afl-cmin.bash
|
||||
- slightly better performance compilation options for afl++ and targets
|
||||
- slightly better performance compilation options for AFL++ and targets
|
||||
- fixed afl-gcc/afl-as that could break on fast systems reusing pids in
|
||||
the same second
|
||||
- added lots of dictionaries from oss-fuzz, go-fuzz and Jakub Wilk
|
||||
@ -438,7 +563,7 @@ sending a mail to <afl-users+subscribe@googlegroups.com>.
|
||||
- afl-fuzz:
|
||||
- AFL_MAP_SIZE was not working correctly
|
||||
- better python detection
|
||||
- an old, old bug in afl that would show negative stability in rare
|
||||
- an old, old bug in AFL that would show negative stability in rare
|
||||
circumstances is now hopefully fixed
|
||||
- AFL_POST_LIBRARY was deprecated, use AFL_CUSTOM_MUTATOR_LIBRARY
|
||||
instead (see docs/custom_mutators.md)
|
||||
@ -497,8 +622,8 @@ sending a mail to <afl-users+subscribe@googlegroups.com>.
|
||||
- extended forkserver: map_size and more information is communicated to
|
||||
afl-fuzz (and afl-fuzz acts accordingly)
|
||||
- new environment variable: AFL_MAP_SIZE to specify the size of the shared map
|
||||
- if AFL_CC/AFL_CXX is set but empty afl compilers did fail, fixed
|
||||
(this bug is in vanilla afl too)
|
||||
- if AFL_CC/AFL_CXX is set but empty AFL compilers did fail, fixed
|
||||
(this bug is in vanilla AFL too)
|
||||
- added NO_PYTHON flag to disable python support when building afl-fuzz
|
||||
- more refactoring
|
||||
|
||||
@ -512,7 +637,7 @@ sending a mail to <afl-users+subscribe@googlegroups.com>.
|
||||
- all:
|
||||
- big code changes to make afl-fuzz thread-safe so afl-fuzz can spawn
|
||||
multiple fuzzing threads in the future or even become a library
|
||||
- afl basic tools now report on the environment variables picked up
|
||||
- AFL basic tools now report on the environment variables picked up
|
||||
- more tools get environment variable usage info in the help output
|
||||
- force all output to stdout (some OK/SAY/WARN messages were sent to
|
||||
stdout, some to stderr)
|
||||
@ -661,7 +786,7 @@ sending a mail to <afl-users+subscribe@googlegroups.com>.
|
||||
- qemu and unicorn download scripts now try to download until the full
|
||||
download succeeded. f*ckin travis fails downloading 40% of the time!
|
||||
- more support for Android (please test!)
|
||||
- added the few Android stuff we didnt have already from Google afl repository
|
||||
- added the few Android stuff we didn't have already from Google AFL repository
|
||||
- removed unnecessary warnings
|
||||
|
||||
|
||||
@ -709,7 +834,7 @@ sending a mail to <afl-users+subscribe@googlegroups.com>.
|
||||
|
||||
- big code refactoring:
|
||||
* all includes are now in include/
|
||||
* all afl sources are now in src/ - see src/README.md
|
||||
* all AFL sources are now in src/ - see src/README.md
|
||||
* afl-fuzz was split up in various individual files for including
|
||||
functionality in other programs (e.g. forkserver, memory map, etc.)
|
||||
for better readability.
|
||||
@ -725,7 +850,7 @@ sending a mail to <afl-users+subscribe@googlegroups.com>.
|
||||
- fix building on *BSD (thanks to tobias.kortkamp for the patch)
|
||||
- fix for a few features to support map sizes other than 2^16
|
||||
- afl-showmap: new option -r now shows the real values in the buckets (stock
|
||||
afl never did), plus shows tuple content summary information now
|
||||
AFL never did), plus shows tuple content summary information now
|
||||
- small docu updates
|
||||
- NeverZero counters for QEMU
|
||||
- NeverZero counters for Unicorn
|
||||
@ -768,7 +893,7 @@ sending a mail to <afl-users+subscribe@googlegroups.com>.
|
||||
debugging
|
||||
- added -V time and -E execs options to better compare runs; runs afl-fuzz
|
||||
for a specific time/executions.
|
||||
- added a -s seed switch to allow afl run with a fixed initial
|
||||
- added a -s seed switch to allow AFL run with a fixed initial
|
||||
seed that is not updated. This is good for performance and path discovery
|
||||
tests as the random numbers are deterministic then
|
||||
- llvm_mode LAF_... env variables can now be specified as AFL_LLVM_LAF_...
|
||||
@ -1516,7 +1641,7 @@ sending a mail to <afl-users+subscribe@googlegroups.com>.
|
||||
- Fixed a bug with installed copies of AFL trying to use QEMU mode. Spotted
|
||||
by G.M. Lime.
|
||||
|
||||
- Added last path / crash / hang times to fuzzer_stats, suggested by
|
||||
- Added last find / crash / hang times to fuzzer_stats, suggested by
|
||||
Richard Hipp.
|
||||
|
||||
- Fixed a typo, thanks to Jakub Wilk.
|
||||
@ -1589,7 +1714,7 @@ sending a mail to <afl-users+subscribe@googlegroups.com>.
|
||||
### Version 1.63b:
|
||||
|
||||
- Updated cgroups_asan/ with a new version from Sam, made a couple changes
|
||||
to streamline it and keep parallel afl instances in separate groups.
|
||||
to streamline it and keep parallel AFL instances in separate groups.
|
||||
|
||||
- Fixed typos, thanks to Jakub Wilk.
|
||||
|
||||
@ -2387,7 +2512,7 @@ sending a mail to <afl-users+subscribe@googlegroups.com>.
|
||||
|
||||
- Added AFL_KEEP_ASSEMBLY for easier troubleshooting.
|
||||
|
||||
- Added an override for AFL_USE_ASAN if set at afl compile time. Requested by
|
||||
- Added an override for AFL_USE_ASAN if set at AFL compile time. Requested by
|
||||
Hanno Boeck.
|
||||
|
||||
### Version 0.79b:
|
||||
@ -2728,7 +2853,7 @@ sending a mail to <afl-users+subscribe@googlegroups.com>.
|
||||
- Updated the documentation and added notes_for_asan.txt. Based on feedback
|
||||
from Hanno Boeck, Ben Laurie, and others.
|
||||
|
||||
- Moved the project to http://lcamtuf.coredump.cx/afl/.
|
||||
- Moved the project to https://lcamtuf.coredump.cx/afl/.
|
||||
|
||||
### Version 0.46b:
|
||||
|
||||
|
410
docs/FAQ.md
@ -1,243 +1,257 @@
|
||||
# Frequently asked questions about afl++
|
||||
|
||||
## Contents
|
||||
|
||||
* [What is the difference between afl and afl++?](#what-is-the-difference-between-afl-and-afl)
|
||||
* [I got a weird compile error from clang](#i-got-a-weird-compile-error-from-clang)
|
||||
* [How to improve the fuzzing speed?](#how-to-improve-the-fuzzing-speed)
|
||||
* [How do I fuzz a network service?](#how-do-i-fuzz-a-network-service)
|
||||
* [How do I fuzz a GUI program?](#how-do-i-fuzz-a-gui-program)
|
||||
* [What is an edge?](#what-is-an-edge)
|
||||
* [Why is my stability below 100%?](#why-is-my-stability-below-100)
|
||||
* [How can I improve the stability value?](#how-can-i-improve-the-stability-value)
|
||||
# Frequently asked questions (FAQ)
|
||||
|
||||
If you find an interesting or important question missing, submit it via
|
||||
[https://github.com/AFLplusplus/AFLplusplus/issues](https://github.com/AFLplusplus/AFLplusplus/issues)
|
||||
[https://github.com/AFLplusplus/AFLplusplus/discussions](https://github.com/AFLplusplus/AFLplusplus/discussions).
|
||||
|
||||
## What is the difference between afl and afl++?
|
||||
## General
|
||||
|
||||
American Fuzzy Lop (AFL) was developed by Michał "lcamtuf" Zalewski starting in
|
||||
2013/2014, and when he left Google end of 2017 he stopped developing it.
|
||||
<details>
|
||||
<summary id="what-is-the-difference-between-afl-and-aflplusplus">What is the difference between AFL and AFL++?</summary><p>
|
||||
|
||||
At the end of 2019 the Google fuzzing team took over maintenance of AFL, however
|
||||
it is only accepting PRs from the community and is not developing enhancements
|
||||
anymore.
|
||||
AFL++ is a superior fork to Google's AFL - more speed, more and better
|
||||
mutations, more and better instrumentation, custom module support, etc.
|
||||
|
||||
In the second quarter of 2019, 1 1/2 year later when no further development of
|
||||
AFL had happened and it became clear there would none be coming, afl++
|
||||
was born, where initially community patches were collected and applied
|
||||
for bug fixes and enhancements. Then from various AFL spin-offs - mostly academic
|
||||
research - features were integrated. This already resulted in a much advanced
|
||||
AFL.
|
||||
American Fuzzy Lop (AFL) was developed by Michał "lcamtuf" Zalewski starting
|
||||
in 2013/2014, and when he left Google end of 2017 he stopped developing it.
|
||||
|
||||
Until the end of 2019 the afl++ team had grown to four active developers which
|
||||
then implemented their own research and features, making it now by far the most
|
||||
flexible and feature rich guided fuzzer available as open source.
|
||||
And in independent fuzzing benchmarks it is one of the best fuzzers available,
|
||||
e.g. [Fuzzbench Report](https://www.fuzzbench.com/reports/2020-08-03/index.html)
|
||||
At the end of 2019, the Google fuzzing team took over maintenance of AFL,
|
||||
however, it is only accepting PRs from the community and is not developing
|
||||
enhancements anymore.
|
||||
|
||||
## I got a weird compile error from clang
|
||||
In the second quarter of 2019, 1 1/2 years later, when no further development
|
||||
of AFL had happened and it became clear none would be coming, AFL++ was
|
||||
born, where initially community patches were collected and applied for bug
|
||||
fixes and enhancements. Then from various AFL spin-offs - mostly academic
|
||||
research - features were integrated. This already resulted in a much advanced
|
||||
AFL.
|
||||
|
||||
If you see this kind of error when trying to instrument a target with afl-cc/
|
||||
afl-clang-fast/afl-clang-lto:
|
||||
```
|
||||
/prg/tmp/llvm-project/build/bin/clang-13: symbol lookup error: /usr/local/bin/../lib/afl//cmplog-instructions-pass.so: undefined symbol: _ZNK4llvm8TypeSizecvmEv
|
||||
clang-13: error: unable to execute command: No such file or directory
|
||||
clang-13: error: clang frontend command failed due to signal (use -v to see invocation)
|
||||
clang version 13.0.0 (https://github.com/llvm/llvm-project 1d7cf550721c51030144f3cd295c5789d51c4aad)
|
||||
Target: x86_64-unknown-linux-gnu
|
||||
Thread model: posix
|
||||
InstalledDir: /prg/tmp/llvm-project/build/bin
|
||||
clang-13: note: diagnostic msg:
|
||||
********************
|
||||
```
|
||||
Then this means that your OS updated the clang installation from an upgrade
|
||||
package and because of that the afl++ llvm plugins do not match anymore.
|
||||
Until the end of 2019, the AFL++ team had grown to four active developers
|
||||
which then implemented their own research and features, making it now by far
|
||||
the most flexible and feature rich guided fuzzer available as open source. And
|
||||
in independent fuzzing benchmarks it is one of the best fuzzers available,
|
||||
e.g., [Fuzzbench
|
||||
Report](https://www.fuzzbench.com/reports/2020-08-03/index.html).
|
||||
</p></details>
|
||||
|
||||
Solution: `git pull ; make clean install` of afl++
|
||||
<details>
|
||||
<summary id="is-afl-a-whitebox-graybox-or-blackbox-fuzzer">Is AFL++ a whitebox, graybox, or blackbox fuzzer?</summary><p>
|
||||
|
||||
## How to improve the fuzzing speed?
|
||||
The definition of the terms whitebox, graybox, and blackbox fuzzing varies
|
||||
from one source to another. For example, "graybox fuzzing" could mean
|
||||
binary-only or source code fuzzing, or something completely different.
|
||||
Therefore, we try to avoid them.
|
||||
|
||||
1. Use [llvm_mode](docs/llvm_mode/README.md): afl-clang-lto (llvm >= 11) or afl-clang-fast (llvm >= 9 recommended)
|
||||
2. Use [persistent mode](llvm_mode/README.persistent_mode.md) (x2-x20 speed increase)
|
||||
3. Use the [afl++ snapshot module](https://github.com/AFLplusplus/AFL-Snapshot-LKM) (x2 speed increase)
|
||||
4. If you do not use shmem persistent mode, use `AFL_TMPDIR` to put the input file directory on a tempfs location, see [docs/env_variables.md](docs/env_variables.md)
|
||||
5. Improve Linux kernel performance: modify `/etc/default/grub`, set `GRUB_CMDLINE_LINUX_DEFAULT="ibpb=off ibrs=off kpti=off l1tf=off mds=off mitigations=off no_stf_barrier noibpb noibrs nopcid nopti nospec_store_bypass_disable nospectre_v1 nospectre_v2 pcid=off pti=off spec_store_bypass_disable=off spectre_v2=off stf_barrier=off"`; then `update-grub` and `reboot` (warning: makes the system less secure)
|
||||
6. Running on an `ext2` filesystem with `noatime` mount option will be a bit faster than on any other journaling filesystem
|
||||
7. Use your cores! [README.md:3.b) Using multiple cores/threads](../README.md#b-using-multiple-coresthreads)
|
||||
[The Fuzzing Book](https://www.fuzzingbook.org/html/GreyboxFuzzer.html#AFL:-An-Effective-Greybox-Fuzzer)
|
||||
describes the original AFL to be a graybox fuzzer. In that sense, AFL++ is
|
||||
also a graybox fuzzer.
|
||||
</p></details>
|
||||
|
||||
## How do I fuzz a network service?
|
||||
<details>
|
||||
<summary id="where-can-i-find-tutorials">Where can I find tutorials?</summary><p>
|
||||
|
||||
The short answer is - you cannot, at least not "out of the box".
|
||||
We compiled a list of tutorials and exercises, see
|
||||
[tutorials.md](tutorials.md).
|
||||
</p></details>
|
||||
|
||||
Using a network channel is inadequate for several reasons:
|
||||
- it has a slow-down of x10-20 on the fuzzing speed
|
||||
- it does not scale to fuzzing multiple instances easily,
|
||||
- instead of one initial data packet often a back-and-forth interplay of packets is needed for stateful protocols (which is totally unsupported by most coverage aware fuzzers).
|
||||
<details>
|
||||
<summary id="what-is-an-edge">What is an "edge"?</summary><p>
|
||||
|
||||
The established method to fuzz network services is to modify the source code
|
||||
to read from a file or stdin (fd 0) (or even faster via shared memory, combine
|
||||
this with persistent mode [llvm_mode/README.persistent_mode.md](llvm_mode/README.persistent_mode.md)
|
||||
and you have a performance gain of x10 instead of a performance loss of over
|
||||
x10 - that is a x100 difference!).
|
||||
A program contains `functions`, `functions` contain the compiled machine code.
|
||||
The compiled machine code in a `function` can be in a single or many `basic
|
||||
blocks`. A `basic block` is the **largest possible number of subsequent machine
|
||||
code instructions** that has **exactly one entry point** (which can be entered by
|
||||
multiple other basic blocks) and runs linearly **without branching or jumping to
|
||||
other addresses** (except at the end).
|
||||
|
||||
If modifying the source is not an option (e.g. because you only have a binary
|
||||
and perform binary fuzzing) you can also use a shared library with AFL_PRELOAD
|
||||
to emulate the network. This is also much faster than the real network would be.
|
||||
See [utils/socket_fuzzing/](../utils/socket_fuzzing/).
|
||||
```
|
||||
function() {
|
||||
A:
|
||||
some
|
||||
code
|
||||
B:
|
||||
if (x) goto C; else goto D;
|
||||
C:
|
||||
some code
|
||||
goto E
|
||||
D:
|
||||
some code
|
||||
goto B
|
||||
E:
|
||||
return
|
||||
}
|
||||
```
|
||||
|
||||
There is an outdated afl++ branch that implements networking if you are
|
||||
desperate though: [https://github.com/AFLplusplus/AFLplusplus/tree/networking](https://github.com/AFLplusplus/AFLplusplus/tree/networking) -
|
||||
however a better option is AFLnet ([https://github.com/aflnet/aflnet](https://github.com/aflnet/aflnet))
|
||||
which allows you to define network state with different type of data packets.
|
||||
Every code block between two jump locations is a `basic block`.
|
||||
|
||||
## How do I fuzz a GUI program?
|
||||
An `edge` is then the unique relationship between two directly connected
|
||||
`basic blocks` (from the code example above):
|
||||
|
||||
If the GUI program can read the fuzz data from a file (via the command line,
|
||||
a fixed location or via an environment variable) without needing any user
|
||||
interaction then it would be suitable for fuzzing.
|
||||
|
||||
Otherwise it is not possible without modifying the source code - which is a
|
||||
very good idea anyway as the GUI functionality is a huge CPU/time overhead
|
||||
for the fuzzing.
|
||||
|
||||
So create a new `main()` that just reads the test case and calls the
|
||||
functionality for processing the input that the GUI program is using.
|
||||
|
||||
## What is an "edge"?
|
||||
|
||||
A program contains `functions`, `functions` contain the compiled machine code.
|
||||
The compiled machine code in a `function` can be in a single or many `basic blocks`.
|
||||
A `basic block` is the largest possible number of subsequent machine code
|
||||
instructions that has exactly one entry point (which can be entered by multiple other basic blocks)
|
||||
and runs linearly without branching or jumping to other addresses (except at the end).
|
||||
```
|
||||
function() {
|
||||
A:
|
||||
some
|
||||
code
|
||||
B:
|
||||
if (x) goto C; else goto D;
|
||||
C:
|
||||
some code
|
||||
goto E
|
||||
D:
|
||||
some code
|
||||
goto B
|
||||
E:
|
||||
return
|
||||
}
|
||||
```
|
||||
Every code block between two jump locations is a `basic block`.
|
||||
|
||||
An `edge` is then the unique relationship between two directly connected `basic blocks` (from the
|
||||
code example above):
|
||||
```
|
||||
Block A
|
||||
|
|
||||
```
|
||||
Block A
|
||||
|
|
||||
v
|
||||
Block B <------+
|
||||
/ \ |
|
||||
v v |
|
||||
Block C Block D --+
|
||||
\
|
||||
v
|
||||
Block B <------+
|
||||
/ \ |
|
||||
v v |
|
||||
Block C Block D --+
|
||||
\
|
||||
v
|
||||
Block E
|
||||
```
|
||||
Every line between two blocks is an `edge`.
|
||||
Note that a few basic block loop to itself, this too would be an edge.
|
||||
Block E
|
||||
```
|
||||
|
||||
## Why is my stability below 100%?
|
||||
Every line between two blocks is an `edge`. Note that a basic block can loop
|
||||
to itself; this, too, would be an edge.
|
||||
</p></details>
|
||||
|
||||
Stability is measured by how many percent of the edges in the target are
|
||||
"stable". Sending the same input again and again should take the exact same
|
||||
path through the target every time. If that is the case, the stability is 100%.
|
||||
## Targets
|
||||
|
||||
If however randomness happens, e.g. a thread reading other external data,
|
||||
reaction to timing, etc. then in some of the re-executions with the same data
|
||||
the edge coverage result will be different accross runs.
|
||||
Those edges that change are then flagged "unstable".
|
||||
<details>
|
||||
<summary id="how-can-i-fuzz-a-binary-only-target">How can I fuzz a binary-only target?</summary><p>
|
||||
|
||||
The more "unstable" edges, the more difficult for afl++ to identify valid new
|
||||
paths.
|
||||
AFL++ is a great fuzzer if you have the source code available.
|
||||
|
||||
A value above 90% is usually fine and a value above 80% is also still ok, and
|
||||
even a value above 20% can still result in successful finds of bugs.
|
||||
However, it is recommended that for values below 90% or 80% you should take
|
||||
countermeasures to improve stability.
|
||||
However, if there is only the binary program and no source code available,
|
||||
then the standard non-instrumented mode is not effective.
|
||||
|
||||
## How can I improve the stability value?
|
||||
To learn how these binaries can be fuzzed, read
|
||||
[fuzzing_binary-only_targets.md](fuzzing_binary-only_targets.md).
|
||||
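If the binary runs on your host architecture, the bundled binary-only modes are
often just one flag away. A minimal sketch (target and corpus paths are
placeholders):

```
# QEMU mode: the binary is instrumented at runtime, no recompilation needed
afl-fuzz -Q -i input_corpus -o findings -- ./target_binary @@

# FRIDA mode (-O) is used the same way
afl-fuzz -O -i input_corpus -o findings -- ./target_binary @@
```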
</p></details>
|
||||
|
||||
For fuzzing a 100% stable target that covers all edges is the best case.
|
||||
A 90% stable target that covers all edges is however better than a 100% stable
|
||||
target that ignores 10% of the edges.
|
||||
<details>
|
||||
<summary id="how-can-i-fuzz-a-network-service">How can I fuzz a network service?</summary><p>
|
||||
|
||||
With instability you basically have a partial coverage loss on an edge, with
|
||||
ignored functions you have a full loss on that edges.
|
||||
The short answer is - you cannot, at least not "out of the box".
|
||||
|
||||
There are functions that are unstable, but also provide value to coverage, eg
|
||||
init functions that use fuzz data as input for example.
|
||||
If however a function that has nothing to do with the input data is the
|
||||
source of instability, e.g. checking jitter, or is a hash map function etc.
|
||||
then it should not be instrumented.
|
||||
For more information on fuzzing network services, see
|
||||
[best_practices.md#fuzzing-a-network-service](best_practices.md#fuzzing-a-network-service).
|
||||
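One common workaround is "desocketing": preloading a small helper library so
the target reads its input from stdin instead of a socket. A hedged sketch,
assuming the helper from `utils/socket_fuzzing/` has been built (the exact
library name depends on the build):

```
# the preloaded library emulates the network and redirects the socket to stdin
AFL_PRELOAD=./utils/socket_fuzzing/socketfuzz64.so \
  afl-fuzz -i input_corpus -o findings -- ./network_daemon
```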
</p></details>
|
||||
|
||||
To be able to exclude these functions (based on AFL++'s measured stability)
|
||||
the following process will allow to identify functions with variable edges.
|
||||
<details>
|
||||
<summary id="how-can-i-fuzz-a-gui-program">How can I fuzz a GUI program?</summary><p>
|
||||
|
||||
Four steps are required to do this and it also requires quite some knowledge
|
||||
of coding and/or disassembly and is effectively possible only with
|
||||
afl-clang-fast PCGUARD and afl-clang-lto LTO instrumentation.
|
||||
Not all GUI programs are suitable for fuzzing. If the GUI program can read the
|
||||
fuzz data from a file without needing any user interaction, then it would be
|
||||
suitable for fuzzing.
|
||||
|
||||
1. First step: Instrument to be able to find the responsible function(s).
|
||||
For more information on fuzzing GUI programs, see
|
||||
[best_practices.md#fuzzing-a-gui-program](best_practices.md#fuzzing-a-gui-program).
|
||||
</p></details>
|
||||
|
||||
a) For LTO instrumented binaries this can be documented during compile
|
||||
time, just set `export AFL_LLVM_DOCUMENT_IDS=/path/to/a/file`.
|
||||
This file will have one assigned edge ID and the corresponding
|
||||
function per line.
|
||||
## Performance
|
||||
|
||||
b) For PCGUARD instrumented binaries it is much more difficult. Here you
|
||||
can either modify the __sanitizer_cov_trace_pc_guard function in
|
||||
llvm_mode/afl-llvm-rt.o.c to write a backtrace to a file if the ID in
|
||||
__afl_area_ptr[*guard] is one of the unstable edge IDs.
|
||||
(Example code is already there).
|
||||
Then recompile and reinstall llvm_mode and rebuild your target.
|
||||
Run the recompiled target with afl-fuzz for a while and then check the
|
||||
file that you wrote with the backtrace information.
|
||||
Alternatively you can use `gdb` to hook __sanitizer_cov_trace_pc_guard_init
|
||||
on start, check to which memory address the edge ID value is written
|
||||
and set a write breakpoint to that address (`watch 0x.....`).
|
||||
<details>
|
||||
<summary id="what-makes-a-good-performance">What makes a good performance?</summary><p>
|
||||
|
||||
c) in all other instrumentation types this is not possible. So just
|
||||
recompile with the two mentioned above. This is just for
|
||||
identifying the functions that have unstable edges.
|
||||
Good performance generally means "making the fuzzing results better". This can
|
||||
be influenced by various factors, for example, speed (finding lots of paths
|
||||
quickly) or thoroughness (working with decreased speed, but finding better
|
||||
mutations).
|
||||
</p></details>
|
||||
|
||||
2. Second step: Identify which edge ID numbers are unstable
|
||||
<details>
|
||||
<summary id="how-can-i-improve-the-fuzzing-speed">How can I improve the fuzzing speed?</summary><p>
|
||||
|
||||
run the target with `export AFL_DEBUG=1` for a few minutes then terminate.
|
||||
The out/fuzzer_stats file will then show the edge IDs that were identified
|
||||
as unstable in the `var_bytes` entry. You can match these numbers
|
||||
directly to the data you created in the first step.
|
||||
Now you know which functions are responsible for the instability
|
||||
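A minimal sketch of this step (the run duration and output paths are
illustrative; on newer versions the stats file may live under `out/default/`):

```
export AFL_DEBUG=1
timeout 300 afl-fuzz -i input_corpus -o out -- ./target @@   # fuzz for a few minutes
grep var_bytes out/fuzzer_stats                              # lists the unstable edge IDs
```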
There are a few things you can do to improve the fuzzing speed, see
|
||||
[best_practices.md#improving-speed](best_practices.md#improving-speed).
|
||||
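One example of such a speed tip (a small sketch; the tmpfs path is just an
illustration): keep the current input file on a RAM-backed location via
`AFL_TMPDIR` so each execution avoids disk I/O:

```
mkdir -p /dev/shm/afl-tmp
export AFL_TMPDIR=/dev/shm/afl-tmp
afl-fuzz -i input_corpus -o findings -- ./target @@
```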
</p></details>
|
||||
|
||||
3. Third step: create a text file with the filenames/functions
|
||||
<details>
|
||||
<summary id="why-is-my-stability-below-100percent">Why is my stability below 100%?</summary><p>
|
||||
|
||||
Identify which source code files contain the functions that you need to
|
||||
remove from instrumentation, or just specify the functions you want to
|
||||
skip for instrumentation. Note that optimization might inline functions!
|
||||
Stability is measured by how many percent of the edges in the target are
|
||||
"stable". Sending the same input again and again should take the exact same
|
||||
path through the target every time. If that is the case, the stability is
|
||||
100%.
|
||||
|
||||
Simply follow this document on how to do this: [llvm_mode/README.instrument_list.md](llvm_mode/README.instrument_list.md)
|
||||
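As a rough sketch of such a list (the function and file names below are made
up; see README.instrument_list.md for the exact syntax), entries are written
one per line and activated at compile time:

```
# functions/files that only add noise (hash maps, jitter checks, ...)
cat > denylist.txt << 'EOF'
fun: hash_lookup
fun: jitter_check
src: threadpool.c
EOF

export AFL_LLVM_DENYLIST=$(pwd)/denylist.txt
afl-clang-fast -o target target.c   # recompile the target with the list active
```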
If PCGUARD is used, then you need to follow this guide (needs llvm 12+!):
|
||||
[http://clang.llvm.org/docs/SanitizerCoverage.html#partially-disabling-instrumentation](http://clang.llvm.org/docs/SanitizerCoverage.html#partially-disabling-instrumentation)
|
||||
If, however, randomness happens, e.g., a thread reading other external data,
|
||||
reaction to timing, etc., then in some of the re-executions with the same data
|
||||
the edge coverage result will be different across runs. Those edges that
|
||||
change are then flagged "unstable".
|
||||
|
||||
Only exclude those functions from instrumentation that provide no value
|
||||
for coverage - that is if it does not process any fuzz data directly
|
||||
or indirectly (e.g. hash maps, thread management etc.).
|
||||
If however a function directly or indirectly handles fuzz data then you
|
||||
should not put the function in a deny instrumentation list and rather
|
||||
live with the instability it comes with.
|
||||
The more "unstable" edges there are, the harder it is for AFL++ to identify
|
||||
valid new paths.
|
||||
|
||||
4. Fourth step: recompile the target
|
||||
A value above 90% is usually fine and a value above 80% is also still ok, and
|
||||
even a value above 20% can still result in successful finds of bugs. However,
|
||||
it is recommended that for values below 90% or 80% you should take
|
||||
countermeasures to improve stability.
|
||||
|
||||
Recompile, fuzz it, be happy :)
|
||||
For more information on stability and how to improve the stability value, see
|
||||
[best_practices.md#improving-stability](best_practices.md#improving-stability).
|
||||
</p></details>
|
||||
|
||||
This link explains this process for [Fuzzbench](https://github.com/google/fuzzbench/issues/677)
|
||||
<details>
|
||||
<summary id="what-are-power-schedules">What are power schedules?</summary><p>
|
||||
|
||||
Not every item in our queue/corpus is the same; some are more interesting,
|
||||
others provide little value.
|
||||
A power schedule measures how "interesting" a value is, and depending on
|
||||
the calculated value spends more or less time mutating it.
|
||||
|
||||
AFL++ comes with several power schedules, initially ported from
|
||||
[AFLFast](https://github.com/mboehme/aflfast), however, modified to be more
|
||||
effective and several more modes added.
|
||||
|
||||
The most effective modes are `-p fast` (default) and `-p explore`.
|
||||
|
||||
If you fuzz with several parallel afl-fuzz instances, then it is beneficial
|
||||
to assign a different schedule to each instance; however, the majority should
|
||||
be `fast` and `explore`.
|
||||
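A hedged sketch of such a parallel setup (instance names and the target are
placeholders; run each command in its own terminal):

```
afl-fuzz -p fast    -M main -i input_corpus -o sync_dir -- ./target @@
afl-fuzz -p explore -S sec1 -i input_corpus -o sync_dir -- ./target @@
afl-fuzz -p exploit -S sec2 -i input_corpus -o sync_dir -- ./target @@
```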
|
||||
It does not make sense to explain the details of the calculation and
|
||||
reasoning behind all of the schedules. If you are interested, read the source
|
||||
code and the AFLFast paper.
|
||||
</p></details>
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
<details>
|
||||
<summary id="fatal-forkserver-is-already-up-but-an-instrumented-dlopen-library-loaded-afterwards">FATAL: forkserver is already up but an instrumented dlopen library loaded afterwards</summary><p>
|
||||
|
||||
It can happen that you see this error on startup when fuzzing a target:
|
||||
|
||||
```
|
||||
[-] FATAL: forkserver is already up, but an instrumented dlopen() library
|
||||
loaded afterwards. You must AFL_PRELOAD such libraries to be able
|
||||
to fuzz them or LD_PRELOAD to run outside of afl-fuzz.
|
||||
To ignore this set AFL_IGNORE_PROBLEMS=1.
|
||||
```
|
||||
|
||||
As the error describes, a dlopen() call is happening in the target that is
|
||||
loading an instrumented library after the forkserver is already in place. This
|
||||
is a problem for afl-fuzz because when the forkserver is started, we must know
|
||||
the map size already and it can't be changed later.
|
||||
|
||||
The best solution is to simply set `AFL_PRELOAD=foo.so` to the libraries that
|
||||
are dlopen'ed (e.g., use `strace` to see which), or to set a manual forkserver
|
||||
after the final dlopen().
|
||||
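A short sketch of that workaround (library and target names are placeholders):

```
# find out which instrumented libraries get dlopen()'ed at runtime ...
strace -f -e trace=openat ./target input_file 2>&1 | grep '\.so'

# ... and preload them so their coverage is mapped before the forkserver starts
AFL_PRELOAD=/path/to/libplugin.so \
  afl-fuzz -i input_corpus -o findings -- ./target @@
```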
|
||||
If this is not a viable option, you can set `AFL_IGNORE_PROBLEMS=1` but then
|
||||
the existing map will be used also for the newly loaded libraries, which
|
||||
allows it to work; however, the efficiency of the fuzzing will be partially
|
||||
degraded.
|
||||
</p></details>
|
||||
|
||||
<details>
|
||||
<summary id="i-got-a-weird-compile-error-from-clang">I got a weird compile error from clang.</summary><p>
|
||||
|
||||
If you see this kind of error when trying to instrument a target with
|
||||
afl-cc/afl-clang-fast/afl-clang-lto:
|
||||
|
||||
```
|
||||
/prg/tmp/llvm-project/build/bin/clang-13: symbol lookup error: /usr/local/bin/../lib/afl//cmplog-instructions-pass.so: undefined symbol: _ZNK4llvm8TypeSizecvmEv
|
||||
clang-13: error: unable to execute command: No such file or directory
|
||||
clang-13: error: clang frontend command failed due to signal (use -v to see invocation)
|
||||
clang version 13.0.0 (https://github.com/llvm/llvm-project 1d7cf550721c51030144f3cd295c5789d51c4aad)
|
||||
Target: x86_64-unknown-linux-gnu
|
||||
Thread model: posix
|
||||
InstalledDir: /prg/tmp/llvm-project/build/bin
|
||||
clang-13: note: diagnostic msg:
|
||||
********************
|
||||
```
|
||||
|
||||
Then this means that your OS updated the clang installation from an upgrade
|
||||
package and because of that the AFL++ llvm plugins do not match anymore.
|
||||
|
||||
Solution: `git pull ; make clean install` of AFL++.
|
||||
</p></details>
|
||||
|
274
docs/INSTALL.md
@ -1,98 +1,141 @@
|
||||
# Installation instructions
|
||||
# Building and installing AFL++
|
||||
|
||||
This document provides basic installation instructions and discusses known
|
||||
issues for a variety of platforms. See README.md for the general instruction
|
||||
manual.
|
||||
## Linux on x86
|
||||
|
||||
## 1. Linux on x86
|
||||
---------------
|
||||
An easy way to install AFL++ with everything compiled is available via docker:
|
||||
You can use the [Dockerfile](../Dockerfile) (which has gcc-10 and clang-12 -
|
||||
hence afl-clang-lto is available) or just pull directly from the Docker Hub
|
||||
(for x86_64 and arm64):
|
||||
|
||||
This platform is expected to work well. Compile the program with:
|
||||
|
||||
```bash
|
||||
make
|
||||
```shell
|
||||
docker pull aflplusplus/aflplusplus:
|
||||
docker run -ti -v /location/of/your/target:/src aflplusplus/aflplusplus
|
||||
```
|
||||
|
||||
You can start using the fuzzer without installation, but it is also possible to
|
||||
install it with:
|
||||
This image is automatically generated when a push to the stable branch happens.
|
||||
You will find your target source code in `/src` in the container.
|
||||
|
||||
```bash
|
||||
Note: you can also pull `aflplusplus/aflplusplus:dev` which is the most current
|
||||
development state of AFL++.
|
||||
|
||||
If you want to build AFL++ yourself, you have many options. The easiest choice
|
||||
is to build and install everything:
|
||||
|
||||
NOTE: depending on your Debian/Ubuntu/Kali/... release, replace `-12` with
|
||||
whatever llvm version is available!
|
||||
|
||||
```shell
|
||||
sudo apt-get update
|
||||
sudo apt-get install -y build-essential python3-dev automake cmake git flex bison libglib2.0-dev libpixman-1-dev python3-setuptools cargo libgtk-3-dev
|
||||
# try to install llvm 12 and install the distro default if that fails
|
||||
sudo apt-get install -y lld-12 llvm-12 llvm-12-dev clang-12 || sudo apt-get install -y lld llvm llvm-dev clang
|
||||
sudo apt-get install -y gcc-$(gcc --version|head -n1|sed 's/\..*//'|sed 's/.* //')-plugin-dev libstdc++-$(gcc --version|head -n1|sed 's/\..*//'|sed 's/.* //')-dev
|
||||
sudo apt-get install -y ninja-build # for QEMU mode
|
||||
git clone https://github.com/AFLplusplus/AFLplusplus
|
||||
cd AFLplusplus
|
||||
make distrib
|
||||
sudo make install
|
||||
```
|
||||
|
||||
There are no special dependencies to speak of; you will need GNU make and a
|
||||
working compiler (gcc or clang). Some of the optional scripts bundled with the
|
||||
program may depend on bash, gdb, and similar basic tools.
|
||||
It is recommended to install the newest available gcc, clang and llvm-dev
|
||||
possible in your distribution!
|
||||
|
||||
If you are using clang, please review README.llvm.md; the LLVM
|
||||
integration mode can offer substantial performance gains compared to the
|
||||
traditional approach.
|
||||
Note that `make distrib` also builds FRIDA mode, QEMU mode, unicorn_mode, and
|
||||
more. If you just want plain AFL++, then do `make all`. If you want some
|
||||
assisting tooling compiled but are not interested in binary-only targets, then
|
||||
instead choose:
|
||||
|
||||
Likewise, if you are using GCC, please review instrumentation/README.gcc_plugin.md.
|
||||
|
||||
You may have to change several settings to get optimal results (most notably,
|
||||
disable crash reporting utilities and switch to a different CPU governor), but
|
||||
afl-fuzz will guide you through that if necessary.
|
||||
|
||||
## 2. OpenBSD, FreeBSD, NetBSD on x86
|
||||
|
||||
Similarly to Linux, these platforms are expected to work well and are
|
||||
regularly tested. Compile everything with GNU make:
|
||||
|
||||
```bash
|
||||
gmake
|
||||
```shell
|
||||
make source-only
|
||||
```
|
||||
|
||||
Note that BSD make will *not* work; if you do not have gmake on your system,
|
||||
please install it first. As on Linux, you can use the fuzzer itself without
|
||||
installation, or install it with:
|
||||
These build targets exist:
|
||||
|
||||
```
|
||||
sudo gmake install
|
||||
* all: the main afl++ binaries and llvm/gcc instrumentation
|
||||
* binary-only: everything for binary-only fuzzing: frida_mode, nyx_mode,
|
||||
qemu_mode, unicorn_mode, coresight_mode, libdislocator,
|
||||
libtokencap
|
||||
* source-only: everything for source code fuzzing: nyx_mode, libdislocator,
|
||||
libtokencap
|
||||
* distrib: everything (for both binary-only and source code fuzzing)
|
||||
* man: creates simple man pages from the help option of the programs
|
||||
* install: installs everything you have compiled with the build options above
|
||||
* clean: cleans everything compiled, not downloads (unless not on a checkout)
|
||||
* deepclean: cleans everything including downloads
|
||||
* code-format: format the code, do this before you commit and send a PR please!
|
||||
* tests: runs test cases to ensure that all features are still working as they
|
||||
should
|
||||
* unit: perform unit tests (based on cmocka)
|
||||
* help: shows these build options
|
||||
|
||||
[Unless you are on Mac OS X](https://developer.apple.com/library/archive/qa/qa1118/_index.html),
|
||||
you can also build statically linked versions of the AFL++ binaries by passing
|
||||
the `STATIC=1` argument to make:
|
||||
|
||||
```shell
|
||||
make STATIC=1
|
||||
```
|
||||
|
||||
Keep in mind that if you are using csh as your shell, the syntax of some of the
|
||||
shell commands given in the README.md and other docs will be different.
|
||||
These build options exist:
|
||||
|
||||
The `llvm` requires a dynamically linked, fully-operational installation of
|
||||
clang. At least on FreeBSD, the clang binaries are static and do not include
|
||||
some of the essential tools, so if you want to make it work, you may need to
|
||||
follow the instructions in README.llvm.md.
|
||||
* STATIC - compile AFL++ static
|
||||
* ASAN_BUILD - compiles AFL++ with memory sanitizer for debug purposes
|
||||
* UBSAN_BUILD - compiles AFL++ tools with undefined behaviour sanitizer for
|
||||
debug purposes
|
||||
* DEBUG - no optimization, -ggdb3, all warnings and -Werror
|
||||
* PROFILING - compile afl-fuzz with profiling information
|
||||
* INTROSPECTION - compile afl-fuzz with mutation introspection
|
||||
* NO_PYTHON - disable python support
|
||||
* NO_SPLICING - disables splicing mutation in afl-fuzz, not recommended for
|
||||
normal fuzzing
|
||||
* NO_NYX - disable building nyx mode dependencies
|
||||
* NO_CORESIGHT - disable building coresight (arm64 only)
|
||||
* NO_UNICORN_ARM64 - disable building unicorn on arm64
|
||||
* AFL_NO_X86 - if compiling on non-intel/amd platforms
|
||||
* LLVM_CONFIG - if your distro doesn't use the standard name for llvm-config
|
||||
(e.g., Debian)
|
||||
|
||||
Beyond that, everything should work as advertised.
|
||||
e.g.: `make LLVM_CONFIG=llvm-config-14`
|
||||
|
||||
The QEMU mode is currently supported only on Linux. I think it's just a QEMU
|
||||
problem, I couldn't get a vanilla copy of user-mode emulation support working
|
||||
correctly on BSD at all.
|
||||
## MacOS X on x86 and arm64 (M1)
|
||||
|
||||
## 3. MacOS X on x86 and arm64 (M1)
|
||||
|
||||
MacOS X should work, but there are some gotchas due to the idiosyncrasies of
|
||||
the platform. On top of this, I have limited release testing capabilities
|
||||
and depend mostly on user feedback.
|
||||
MacOS has some gotchas due to the idiosyncrasies of the platform.
|
||||
|
||||
To build AFL, install llvm (and perhaps gcc) from brew and follow the general
|
||||
instructions for Linux. If possible avoid Xcode at all cost.
|
||||
|
||||
`brew install wget git make llvm`
|
||||
|
||||
Be sure to setup PATH to point to the correct clang binaries and use gmake, e.g.:
|
||||
instructions for Linux. If possible, avoid Xcode at all cost.
|
||||
|
||||
```shell
|
||||
brew install wget git make cmake llvm gdb coreutils
|
||||
```
|
||||
export PATH="/usr/local/Cellar/llvm/12.0.1/bin/:$PATH"
|
||||
|
||||
Be sure to setup `PATH` to point to the correct clang binaries and use the
|
||||
freshly installed clang, clang++, llvm-config, gmake and coreutils, e.g.:
|
||||
|
||||
```shell
|
||||
# Depending on your MacOS system + brew version it is either
|
||||
export PATH="/opt/homebrew/opt/llvm/bin:$PATH"
|
||||
# or
|
||||
export PATH="/usr/local/opt/llvm/bin:$PATH"
|
||||
# you can check with "brew info llvm"
|
||||
|
||||
export PATH="/usr/local/opt/coreutils/libexec/gnubin:/usr/local/bin:$PATH"
|
||||
export CC=clang
|
||||
export CXX=clang++
|
||||
gmake
|
||||
cd frida_mode
|
||||
gmake
|
||||
cd ..
|
||||
gmake install
|
||||
sudo gmake install
|
||||
```
|
||||
|
||||
afl-gcc will fail unless you have GCC installed, but that is using outdated
|
||||
instrumentation anyway. You don't want that.
|
||||
Note that afl-clang-lto, afl-gcc-fast and qemu_mode are not working on MacOS.
|
||||
`afl-gcc` will fail unless you have GCC installed, but that is using outdated
|
||||
instrumentation anyway. `afl-clang` might fail too depending on your PATH setup.
|
||||
But you want neither; you want `afl-clang-fast` anyway :) Note that
|
||||
`afl-clang-lto`, `afl-gcc-fast` and `qemu_mode` are not working on MacOS.
|
||||
|
||||
The crash reporting daemon that comes by default with MacOS X will cause
|
||||
problems with fuzzing. You need to turn it off:
|
||||
|
||||
```
|
||||
launchctl unload -w /System/Library/LaunchAgents/com.apple.ReportCrash.plist
|
||||
sudo launchctl unload -w /System/Library/LaunchDaemons/com.apple.ReportCrash.Root.plist
|
||||
@ -104,17 +147,17 @@ and definitely don't look POSIX-compliant. This means two things:
|
||||
- Fuzzing will be probably slower than on Linux. In fact, some folks report
|
||||
considerable performance gains by running the jobs inside a Linux VM on
|
||||
MacOS X.
|
||||
- Some non-portable, platform-specific code may be incompatible with the
|
||||
AFL forkserver. If you run into any problems, set `AFL_NO_FORKSRV=1` in the
|
||||
- Some non-portable, platform-specific code may be incompatible with the AFL++
|
||||
forkserver. If you run into any problems, set `AFL_NO_FORKSRV=1` in the
|
||||
environment before starting afl-fuzz.
|
||||
|
||||
User emulation mode of QEMU does not appear to be supported on MacOS X, so
|
||||
black-box instrumentation mode (`-Q`) will not work.
|
||||
However Frida mode (`-O`) should work on x86 and arm64 MacOS boxes.
|
||||
black-box instrumentation mode (`-Q`) will not work. However, FRIDA mode (`-O`)
|
||||
works on both x86 and arm64 MacOS boxes.
|
||||
|
||||
MacOS X supports SYSV shared memory used by AFL's instrumentation, but the
|
||||
default settings aren't usable with AFL++. The default settings on 10.14 seem
|
||||
to be:
|
||||
default settings aren't usable with AFL++. The default settings on 10.14 seem to
|
||||
be:
|
||||
|
||||
```bash
|
||||
$ ipcs -M
|
||||
@ -135,8 +178,8 @@ sysctl kern.sysv.shmmax=8388608
|
||||
sysctl kern.sysv.shmall=4096
|
||||
```
|
||||
|
||||
If you're running more than one instance of AFL you likely want to make `shmall`
|
||||
bigger and increase `shmseg` as well:
|
||||
If you're running more than one instance of AFL, you likely want to make
|
||||
`shmall` bigger and increase `shmseg` as well:
|
||||
|
||||
```bash
|
||||
sysctl kern.sysv.shmmax=8388608
|
||||
@ -144,91 +187,6 @@ sysctl kern.sysv.shmseg=48
|
||||
sysctl kern.sysv.shmall=98304
|
||||
```
|
||||
|
||||
See http://www.spy-hill.com/help/apple/SharedMemory.html for documentation for
|
||||
these settings and how to make them permanent.
|
||||
|
||||
## 4. Linux or *BSD on non-x86 systems
|
||||
|
||||
Standard build will fail on non-x86 systems, but you should be able to
|
||||
leverage two other options:
|
||||
|
||||
- The LLVM mode (see README.llvm.md), which does not rely on
|
||||
x86-specific assembly shims. It's fast and robust, but requires a
|
||||
complete installation of clang.
|
||||
- The QEMU mode (see qemu_mode/README.md), which can be also used for
|
||||
fuzzing cross-platform binaries. It's slower and more fragile, but
|
||||
can be used even when you don't have the source for the tested app.
|
||||
|
||||
If you're not sure what you need, you need the LLVM mode, which is built by
|
||||
default.
|
||||
|
||||
...and compile your target program with afl-clang-fast or afl-clang-fast++
|
||||
instead of the traditional afl-gcc or afl-clang wrappers.
|
||||
|
||||
## 5. Solaris on x86
|
||||
|
||||
The fuzzer reportedly works on Solaris, but I have not tested this first-hand,
|
||||
and the user base is fairly small, so I don't have a lot of feedback.
|
||||
|
||||
To get the ball rolling, you will need to use GNU make and GCC or clang. I'm
|
||||
being told that the stock version of GCC that comes with the platform does not
|
||||
work properly due to its reliance on a hardcoded location for 'as' (completely
|
||||
ignoring the `-B` parameter or `$PATH`).
|
||||
|
||||
To fix this, you may want to build stock GCC from the source, like so:
|
||||
|
||||
```sh
|
||||
./configure --prefix=$HOME/gcc --with-gnu-as --with-gnu-ld \
|
||||
--with-gmp-include=/usr/include/gmp --with-mpfr-include=/usr/include/mpfr
|
||||
make
|
||||
sudo make install
|
||||
```
|
||||
|
||||
Do *not* specify `--with-as=/usr/gnu/bin/as` - this will produce a GCC binary that
|
||||
ignores the `-B` flag and you will be back to square one.
|
||||
|
||||
Note that Solaris reportedly comes with crash reporting enabled, which causes
|
||||
problems with crashes being misinterpreted as hangs, similarly to the gotchas
|
||||
for Linux and MacOS X. AFL does not auto-detect crash reporting on this
|
||||
particular platform, but you may need to run the following command:
|
||||
|
||||
```sh
|
||||
coreadm -d global -d global-setid -d process -d proc-setid \
|
||||
-d kzone -d log
|
||||
```
|
||||
|
||||
User emulation mode of QEMU is not available on Solaris, so black-box
|
||||
instrumentation mode (`-Q`) will not work.
|
||||
|
||||
## 6. Everything else
|
||||
|
||||
You're on your own. On POSIX-compliant systems, you may be able to compile and
|
||||
run the fuzzer; and the LLVM and GCC plugin modes may offer a way to instrument
|
||||
non-x86 code.
|
||||
|
||||
The fuzzer will run on Windows in WSL only. It will not work under Cygwin or in the normal Windows world. It
|
||||
could be ported to the latter platform fairly easily, but it's a pretty bad
|
||||
idea, because Cygwin is extremely slow. It makes much more sense to use
|
||||
VirtualBox or so to run a hardware-accelerated Linux VM; it will run around
|
||||
20x faster or so. If you have a *really* compelling use case for Cygwin, let
|
||||
me know.
|
||||
|
||||
Although Android on x86 should theoretically work, the stock kernel may have
|
||||
SHM support compiled out, and if so, you may have to address that issue first.
|
||||
It's possible that all you need is this workaround:
|
||||
|
||||
https://github.com/pelya/android-shmem
|
||||
|
||||
Joshua J. Drake notes that the Android linker adds a shim that automatically
|
||||
intercepts `SIGSEGV` and related signals. To fix this issue and be able to see
|
||||
crashes, you need to put this at the beginning of the fuzzed program:
|
||||
|
||||
```sh
|
||||
signal(SIGILL, SIG_DFL);
|
||||
signal(SIGABRT, SIG_DFL);
|
||||
signal(SIGBUS, SIG_DFL);
|
||||
signal(SIGFPE, SIG_DFL);
|
||||
signal(SIGSEGV, SIG_DFL);
|
||||
```
|
||||
|
||||
You may need to `#include <signal.h>` first.
|
||||
See
|
||||
[http://www.spy-hill.com/help/apple/SharedMemory.html](http://www.spy-hill.com/help/apple/SharedMemory.html)
|
||||
for documentation for these settings and how to make them permanent.
|
||||
|
@ -1,50 +0,0 @@
|
||||
# AFL quick start guide
|
||||
|
||||
You should read [README.md](../README.md) - it's pretty short. If you really can't, here's
|
||||
how to hit the ground running:
|
||||
|
||||
1) Compile AFL with 'make'. If build fails, see [INSTALL.md](INSTALL.md) for tips.
|
||||
|
||||
2) Find or write a reasonably fast and simple program that takes data from
|
||||
a file or stdin, processes it in a test-worthy way, then exits cleanly.
|
||||
If testing a network service, modify it to run in the foreground and read
|
||||
from stdin. When fuzzing a format that uses checksums, comment out the
|
||||
checksum verification code, too.
|
||||
|
||||
If this is not possible (e.g. in -Q(emu) mode) then use
|
||||
AFL_CUSTOM_MUTATOR_LIBRARY to calculate the values with your own library.
|
||||
|
||||
The program must crash properly when a fault is encountered. Watch out for
|
||||
custom SIGSEGV or SIGABRT handlers and background processes. For tips on
|
||||
detecting non-crashing flaws, see section 11 in [README.md](README.md) .
|
||||
|
||||
3) Compile the program / library to be fuzzed using afl-cc. A common way to
|
||||
do this would be:
|
||||
|
||||
CC=/path/to/afl-cc CXX=/path/to/afl-c++ ./configure --disable-shared
|
||||
make clean all
|
||||
|
||||
4) Get a small but valid input file that makes sense to the program. When
|
||||
fuzzing verbose syntax (SQL, HTTP, etc), create a dictionary as described in
|
||||
dictionaries/README.md, too.
|
||||
|
||||
5) If the program reads from stdin, run 'afl-fuzz' like so:
|
||||
|
||||
./afl-fuzz -i testcase_dir -o findings_dir -- \
|
||||
/path/to/tested/program [...program's cmdline...]
|
||||
|
||||
If the program takes input from a file, you can put @@ in the program's
|
||||
command line; AFL will put an auto-generated file name in there for you.
|
||||
|
||||
6) Investigate anything shown in red in the fuzzer UI by promptly consulting
|
||||
[status_screen.md](status_screen.md).
|
||||
|
||||
8) There is a basic docker build with 'docker build -t aflplusplus .'
|
||||
|
||||
That's it. Sit back, relax, and - time permitting - try to skim through the
|
||||
following files:
|
||||
|
||||
- README.md - A general introduction to AFL,
|
||||
- docs/perf_tips.md - Simple tips on how to fuzz more quickly,
|
||||
- docs/status_screen.md - An explanation of the tidbits shown in the UI,
|
||||
- docs/parallel_fuzzing.md - Advice on running AFL on multiple cores.
|
65
docs/README.md
Normal file
@ -0,0 +1,65 @@
|
||||
# AFL++ documentation
|
||||
|
||||
This is the overview of the AFL++ docs content.
|
||||
|
||||
For general information on AFL++, see the
|
||||
[README.md of the repository](../README.md).
|
||||
|
||||
Also take a look at our [FAQ.md](FAQ.md) and
|
||||
[best_practices.md](best_practices.md).
|
||||
|
||||
## Fuzzing targets with the source code available
|
||||
|
||||
You can find a quickstart for fuzzing targets with the source code available in
|
||||
the [README.md of the repository](../README.md#quick-start-fuzzing-with-afl).
|
||||
|
||||
For in-depth information on the steps of the fuzzing process, see
|
||||
[fuzzing_in_depth.md](fuzzing_in_depth.md) or click on the following
|
||||
image and select a step.
|
||||
|
||||

|
||||
|
||||
For further information on instrumentation, see the
|
||||
[READMEs in the instrumentation/ folder](../instrumentation/).
|
||||
|
||||
### Instrumenting the target
|
||||
|
||||
For more information, click on the following image and select a step.
|
||||
|
||||

|
||||
|
||||
### Preparing the fuzzing campaign
|
||||
|
||||
For more information, click on the following image and select a step.
|
||||
|
||||

|
||||
|
||||
### Fuzzing the target
|
||||
|
||||
For more information, click on the following image and select a step.
|
||||
|
||||

|
||||
|
||||
### Managing the fuzzing campaign
|
||||
|
||||
For more information, click on the following image and select a step.
|
||||
|
||||

|
||||
|
||||
## Fuzzing other targets
|
||||
|
||||
To learn about fuzzing other targets, see:
|
||||
|
||||
* Binary-only: [fuzzing_binary-only_targets.md](fuzzing_binary-only_targets.md)
|
||||
* GUI programs:
|
||||
[best_practices.md#fuzzing-a-gui-program](best_practices.md#fuzzing-a-gui-program)
|
||||
* Libraries: [frida_mode/README.md](../frida_mode/README.md)
|
||||
* Network services:
|
||||
[best_practices.md#fuzzing-a-network-service](best_practices.md#fuzzing-a-network-service)
|
||||
* Non-linux: [unicorn_mode/README.md](../unicorn_mode/README.md)
|
||||
|
||||
## Additional information
|
||||
|
||||
* Tools that help fuzzing with AFL++:
|
||||
[third_party_tools.md](third_party_tools.md)
|
||||
* Tutorials: [tutorials.md](tutorials.md)
|
543
docs/afl-fuzz_approach.md
Normal file
@ -0,0 +1,543 @@
|
||||
# The afl-fuzz approach
|
||||
|
||||
AFL++ is a brute-force fuzzer coupled with an exceedingly simple but rock-solid
|
||||
instrumentation-guided genetic algorithm. It uses a modified form of edge
|
||||
coverage to effortlessly pick up subtle, local-scale changes to program control
|
||||
flow.
|
||||
|
||||
Simplifying a bit, the overall algorithm can be summed up as:
|
||||
|
||||
1) Load user-supplied initial test cases into the queue.
|
||||
|
||||
2) Take the next input file from the queue.
|
||||
|
||||
3) Attempt to trim the test case to the smallest size that doesn't alter the
|
||||
measured behavior of the program.
|
||||
|
||||
4) Repeatedly mutate the file using a balanced and well-researched variety of
|
||||
traditional fuzzing strategies.
|
||||
|
||||
5) If any of the generated mutations resulted in a new state transition recorded
|
||||
by the instrumentation, add mutated output as a new entry in the queue.
|
||||
|
||||
6) Go to 2.
|
||||
|
||||
The discovered test cases are also periodically culled to eliminate ones that
|
||||
have been obsoleted by newer, higher-coverage finds; and undergo several other
|
||||
instrumentation-driven effort minimization steps.
|
||||
|
||||
As a side result of the fuzzing process, the tool creates a small,
|
||||
self-contained corpus of interesting test cases. These are extremely useful for
|
||||
seeding other, labor- or resource-intensive testing regimes - for example, for
|
||||
stress-testing browsers, office applications, graphics suites, or closed-source
|
||||
tools.
|
||||
|
||||
The fuzzer is thoroughly tested to deliver out-of-the-box performance far
|
||||
superior to blind fuzzing or coverage-only tools.
|
||||
|
||||
## Understanding the status screen
|
||||
|
||||
This section provides an overview of the status screen - plus tips for
|
||||
troubleshooting any warnings and red text shown in the UI.
|
||||
|
||||
For the general instruction manual, see [README.md](README.md).
|
||||
|
||||
### A note about colors
|
||||
|
||||
The status screen and error messages use colors to keep things readable and
|
||||
attract your attention to the most important details. For example, red almost
|
||||
always means "consult this doc" :-)
|
||||
|
||||
Unfortunately, the UI will only render correctly if your terminal is using
|
||||
the traditional un*x palette (white text on black background) or something close to
|
||||
that.
|
||||
|
||||
If you are using inverse video, you may want to change your settings, say:
|
||||
|
||||
- For GNOME Terminal, go to `Edit > Profile` preferences, select the "colors"
|
||||
tab, and from the list of built-in schemes, choose "white on black".
|
||||
- For the macOS Terminal app, open a new window using the "Pro" scheme via the
|
||||
`Shell > New Window` menu (or make "Pro" your default).
|
||||
|
||||
Alternatively, if you really like your current colors, you can edit config.h to
|
||||
comment out USE_COLORS, then do `make clean all`.
|
||||
|
||||
We are not aware of any other simple way to make this work without causing other
|
||||
side effects - sorry about that.
|
||||
|
||||
With that out of the way, let's talk about what's actually on the screen...
|
||||
|
||||
### The status bar
|
||||
|
||||
```
|
||||
american fuzzy lop ++3.01a (default) [fast] {0}
|
||||
```
|
||||
|
||||
The top line shows you which mode afl-fuzz is running in (normal: "american
|
||||
fuzzy lop", crash exploration mode: "peruvian rabbit mode") and the version of
|
||||
AFL++. Next to the version is the banner, which, if not set with -T by hand,
|
||||
will either show the binary name being fuzzed, or the -M/-S main/secondary name
|
||||
for parallel fuzzing. Second to last is the power schedule mode being run
|
||||
(default: fast). Finally, the last item is the CPU id.
|
||||
|
||||
### Process timing
|
||||
|
||||
```
|
||||
+----------------------------------------------------+
|
||||
| run time : 0 days, 8 hrs, 32 min, 43 sec |
|
||||
| last new find : 0 days, 0 hrs, 6 min, 40 sec |
|
||||
| last uniq crash : none seen yet |
|
||||
| last uniq hang : 0 days, 1 hrs, 24 min, 32 sec |
|
||||
+----------------------------------------------------+
|
||||
```
|
||||
|
||||
This section is fairly self-explanatory: it tells you how long the fuzzer has
|
||||
been running and how much time has elapsed since its most recent finds. This is
|
||||
broken down into "paths" (a shorthand for test cases that trigger new execution
|
||||
patterns), crashes, and hangs.
|
||||
|
||||
When it comes to timing: there is no hard rule, but most fuzzing jobs should be
|
||||
expected to run for days or weeks; in fact, for a moderately complex project,
|
||||
the first pass will probably take a day or so. Every now and then, some jobs
|
||||
will be allowed to run for months.
|
||||
|
||||
There's one important thing to watch out for: if the tool is not finding new
|
||||
paths within several minutes of starting, you're probably not invoking the
|
||||
target binary correctly and it never gets to parse the input files that are
|
||||
thrown at it; other possible explanations are that the default memory limit
|
||||
(`-m`) is too restrictive and the program exits after failing to allocate a
|
||||
buffer very early on; or that the input files are patently invalid and always
|
||||
fail a basic header check.
|
||||
|
||||
If there are no new paths showing up for a while, you will eventually see a big
|
||||
red warning in this section, too :-)
|
||||
|
||||
### Overall results
|
||||
|
||||
```
|
||||
+-----------------------+
|
||||
| cycles done : 0 |
|
||||
| total paths : 2095 |
|
||||
| uniq crashes : 0 |
|
||||
| uniq hangs : 19 |
|
||||
+-----------------------+
|
||||
```
|
||||
|
||||
The first field in this section gives you the count of queue passes done so far
|
||||
- that is, the number of times the fuzzer went over all the interesting test
|
||||
cases discovered so far, fuzzed them, and looped back to the very beginning.
|
||||
Every fuzzing session should be allowed to complete at least one cycle; and
|
||||
ideally, should run much longer than that.
|
||||
|
||||
As noted earlier, the first pass can take a day or longer, so sit back and
|
||||
relax.
|
||||
|
||||
To help make the call on when to hit `Ctrl-C`, the cycle counter is color-coded.
|
||||
It is shown in magenta during the first pass, progresses to yellow if new finds
|
||||
are still being made in subsequent rounds, then blue when that ends - and
|
||||
finally, turns green after the fuzzer hasn't been seeing any action for a longer
|
||||
while.
|
||||
|
||||
The remaining fields in this part of the screen should be pretty obvious:
|
||||
there's the number of test cases ("paths") discovered so far, and the number of
|
||||
unique faults. The test cases, crashes, and hangs can be explored in real-time
|
||||
by browsing the output directory, see
|
||||
[#interpreting-output](#interpreting-output).
|
||||
|
||||
### Cycle progress
|
||||
|
||||
```
|
||||
+-------------------------------------+
|
||||
| now processing : 1296 (61.86%) |
|
||||
| paths timed out : 0 (0.00%) |
|
||||
+-------------------------------------+
|
||||
```
|
||||
|
||||
This box tells you how far along the fuzzer is with the current queue cycle: it
|
||||
shows the ID of the test case it is currently working on, plus the number of
|
||||
inputs it decided to ditch because they were persistently timing out.
|
||||
|
||||
The "*" suffix sometimes shown in the first line means that the currently
|
||||
processed path is not "favored" (a property discussed later on).
|
||||
|
||||
### Map coverage
|
||||
|
||||
```
|
||||
+--------------------------------------+
|
||||
| map density : 10.15% / 29.07% |
|
||||
| count coverage : 4.03 bits/tuple |
|
||||
+--------------------------------------+
|
||||
```
|
||||
|
||||
This section provides some trivia about the coverage observed by the
|
||||
instrumentation embedded in the target binary.
|
||||
|
||||
The first line in the box tells you how many branch tuples were already hit, in
|
||||
proportion to how much the bitmap can hold. The number on the left describes the
|
||||
current input; the one on the right is the value for the entire input corpus.
|
||||
|
||||
Be wary of extremes:
|
||||
|
||||
- Absolute numbers below 200 or so suggest one of three things: that the program
|
||||
is extremely simple; that it is not instrumented properly (e.g., due to being
|
||||
linked against a non-instrumented copy of the target library); or that it is
|
||||
bailing out prematurely on your input test cases. The fuzzer will try to mark
|
||||
this in pink, just to make you aware.
|
||||
- Percentages over 70% may very rarely happen with very complex programs that
|
||||
make heavy use of template-generated code. Because high bitmap density makes
|
||||
it harder for the fuzzer to reliably discern new program states, we recommend
|
||||
recompiling the binary with `AFL_INST_RATIO=10` or so and trying again, as sketched after this list (see
|
||||
[env_variables.md](env_variables.md)). The fuzzer will flag high percentages
|
||||
in red. Chances are, you will never see that unless you're fuzzing extremely
|
||||
hairy software (say, v8, perl, ffmpeg).
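As a rough sketch of that recompile step (the source file, output name, and ratio below are placeholder examples, not taken from the original doc):

```shell
# Hypothetical rebuild: instrument only ~10% of the edges to lower bitmap density
AFL_INST_RATIO=10 afl-clang-fast -O2 -o target_sparse target.c
```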
|
||||
|
||||
The other line deals with the variability in tuple hit counts seen in the
|
||||
binary. In essence, if every taken branch is always taken a fixed number of
|
||||
times for all the inputs that were tried, this will read `1.00`. As we manage to
|
||||
trigger other hit counts for every branch, the needle will start to move toward
|
||||
`8.00` (every bit in the 8-bit map hit), but will probably never reach that
|
||||
extreme.
|
||||
|
||||
Together, the values can be useful for comparing the coverage of several
|
||||
different fuzzing jobs that rely on the same instrumented binary.
|
||||
|
||||
### Stage progress
|
||||
|
||||
```
|
||||
+-------------------------------------+
|
||||
| now trying : interest 32/8 |
|
||||
| stage execs : 3996/34.4k (11.62%) |
|
||||
| total execs : 27.4M |
|
||||
| exec speed : 891.7/sec |
|
||||
+-------------------------------------+
|
||||
```
|
||||
|
||||
This part gives you an in-depth peek at what the fuzzer is actually doing right
|
||||
now. It tells you about the current stage, which can be any of:
|
||||
|
||||
- calibration - a pre-fuzzing stage where the execution path is examined to
|
||||
detect anomalies, establish baseline execution speed, and so on. Executed very
|
||||
briefly whenever a new find is being made.
|
||||
- trim L/S - another pre-fuzzing stage where the test case is trimmed to the
|
||||
shortest form that still produces the same execution path. The length (L) and
|
||||
stepover (S) are chosen in general relationship to file size.
|
||||
- bitflip L/S - deterministic bit flips. There are L bits toggled at any given
|
||||
time, walking the input file with S-bit increments. The current L/S variants
|
||||
are: `1/1`, `2/1`, `4/1`, `8/8`, `16/8`, `32/8`.
|
||||
- arith L/8 - deterministic arithmetics. The fuzzer tries to subtract or add
|
||||
small integers to 8-, 16-, and 32-bit values. The stepover is always 8 bits.
|
||||
- interest L/8 - deterministic value overwrite. The fuzzer has a list of known
|
||||
"interesting" 8-, 16-, and 32-bit values to try. The stepover is 8 bits.
|
||||
- extras - deterministic injection of dictionary terms. This can be shown as
|
||||
"user" or "auto", depending on whether the fuzzer is using a user-supplied
|
||||
dictionary (`-x`) or an auto-created one. You will also see "over" or
|
||||
"insert", depending on whether the dictionary words overwrite existing data or
|
||||
are inserted by offsetting the remaining data to accommodate their length.
|
||||
- havoc - a sort-of-fixed-length cycle with stacked random tweaks. The
|
||||
operations attempted during this stage include bit flips, overwrites with
|
||||
random and "interesting" integers, block deletion, block duplication, plus
|
||||
assorted dictionary-related operations (if a dictionary is supplied in the
|
||||
first place).
|
||||
- splice - a last-resort strategy that kicks in after the first full queue cycle
|
||||
with no new paths. It is equivalent to 'havoc', except that it first splices
|
||||
together two random inputs from the queue at some arbitrarily selected
|
||||
midpoint.
|
||||
- sync - a stage used only when `-M` or `-S` is set (see
|
||||
[fuzzing_in_depth.md:3c) Using multiple cores](fuzzing_in_depth.md#c-using-multiple-cores)).
|
||||
No real fuzzing is involved, but the tool scans the output from other fuzzers
|
||||
and imports test cases as necessary. The first time this is done, it may take
|
||||
several minutes or so.
|
||||
|
||||
The remaining fields should be fairly self-evident: there's the exec count
|
||||
progress indicator for the current stage, a global exec counter, and a benchmark
|
||||
for the current program execution speed. This may fluctuate from one test case
|
||||
to another, but the benchmark should be ideally over 500 execs/sec most of the
|
||||
time - and if it stays below 100, the job will probably take very long.
|
||||
|
||||
The fuzzer will explicitly warn you about slow targets, too. If this happens,
|
||||
see [best_practices.md#improving-speed](best_practices.md#improving-speed)
|
||||
for ideas on how to speed things up.
|
||||
|
||||
### Findings in depth
|
||||
|
||||
```
|
||||
+--------------------------------------+
|
||||
| favored paths : 879 (41.96%) |
|
||||
| new edges on : 423 (20.19%) |
|
||||
| total crashes : 0 (0 unique) |
|
||||
| total tmouts : 24 (19 unique) |
|
||||
+--------------------------------------+
|
||||
```
|
||||
|
||||
This gives you several metrics that are of interest mostly to complete nerds.
|
||||
The section includes the number of paths that the fuzzer likes the most based on
|
||||
a minimization algorithm baked into the code (these will get considerably more
|
||||
air time), and the number of test cases that actually resulted in better edge
|
||||
coverage (versus just pushing the branch hit counters up). There are also
|
||||
additional, more detailed counters for crashes and timeouts.
|
||||
|
||||
Note that the timeout counter is somewhat different from the hang counter; this
|
||||
one includes all test cases that exceeded the timeout, even if they did not
|
||||
exceed it by a margin sufficient to be classified as hangs.
|
||||
|
||||
### Fuzzing strategy yields
|
||||
|
||||
```
|
||||
+-----------------------------------------------------+
|
||||
| bit flips : 57/289k, 18/289k, 18/288k |
|
||||
| byte flips : 0/36.2k, 4/35.7k, 7/34.6k |
|
||||
| arithmetics : 53/2.54M, 0/537k, 0/55.2k |
|
||||
| known ints : 8/322k, 12/1.32M, 10/1.70M |
|
||||
| dictionary : 9/52k, 1/53k, 1/24k |
|
||||
|havoc/splice : 1903/20.0M, 0/0 |
|
||||
|py/custom/rq : unused, 53/2.54M, unused |
|
||||
| trim/eff : 20.31%/9201, 17.05% |
|
||||
+-----------------------------------------------------+
|
||||
```
|
||||
|
||||
This is just another nerd-targeted section keeping track of how many paths were
|
||||
netted, in proportion to the number of execs attempted, for each of the fuzzing
|
||||
strategies discussed earlier on. This serves to convincingly validate
|
||||
assumptions about the usefulness of the various approaches taken by afl-fuzz.
|
||||
|
||||
The trim strategy stats in this section are a bit different than the rest. The
|
||||
first number in this line shows the ratio of bytes removed from the input files;
|
||||
the second one corresponds to the number of execs needed to achieve this goal.
|
||||
Finally, the third number shows the proportion of bytes that, although not
|
||||
possible to remove, were deemed to have no effect and were excluded from some of
|
||||
the more expensive deterministic fuzzing steps.
|
||||
|
||||
Note that when deterministic mutation mode is off (which is the default because
|
||||
it is not very efficient) the first five lines display "disabled (default,
|
||||
enable with -D)".
|
||||
|
||||
Only the strategies that are activated will have their counters shown.
|
||||
|
||||
### Path geometry
|
||||
|
||||
```
|
||||
+---------------------+
|
||||
| levels : 5 |
|
||||
| pending : 1570 |
|
||||
| pend fav : 583 |
|
||||
| own finds : 0 |
|
||||
| imported : 0 |
|
||||
| stability : 100.00% |
|
||||
+---------------------+
|
||||
```
|
||||
|
||||
The first field in this section tracks the path depth reached through the guided
|
||||
fuzzing process. In essence: the initial test cases supplied by the user are
|
||||
considered "level 1". The test cases that can be derived from that through
|
||||
traditional fuzzing are considered "level 2"; the ones derived by using these as
|
||||
inputs to subsequent fuzzing rounds are "level 3"; and so forth. The maximum
|
||||
depth is therefore a rough proxy for how much value you're getting out of the
|
||||
instrumentation-guided approach taken by afl-fuzz.
|
||||
|
||||
The next field shows you the number of inputs that have not gone through any
|
||||
fuzzing yet. The same stat is also given for "favored" entries that the fuzzer
|
||||
really wants to get to in this queue cycle (the non-favored entries may have to
|
||||
wait a couple of cycles to get their chance).
|
||||
|
||||
Next is the number of new paths found during this fuzzing session and imported
|
||||
from other fuzzer instances when doing parallelized fuzzing; and the extent to
|
||||
which identical inputs appear to sometimes produce variable behavior in the
|
||||
tested binary.
|
||||
|
||||
That last bit is actually fairly interesting: it measures the consistency of
|
||||
observed traces. If a program always behaves the same for the same input data,
|
||||
it will earn a score of 100%. When the value is lower but still shown in purple,
|
||||
the fuzzing process is unlikely to be negatively affected. If it goes into red,
|
||||
you may be in trouble, since AFL++ will have difficulty discerning between
|
||||
meaningful and "phantom" effects of tweaking the input file.
|
||||
|
||||
Now, most targets will just get a 100% score, but when you see lower figures,
|
||||
there are several things to look at:
|
||||
|
||||
- The use of uninitialized memory in conjunction with some intrinsic sources of
|
||||
entropy in the tested binary. Harmless to AFL, but could be indicative of a
|
||||
security bug.
|
||||
- Attempts to manipulate persistent resources, such as left over temporary files
|
||||
or shared memory objects. This is usually harmless, but you may want to
|
||||
double-check to make sure the program isn't bailing out prematurely. Running
|
||||
out of disk space, SHM handles, or other global resources can trigger this,
|
||||
too.
|
||||
- Hitting some functionality that is actually designed to behave randomly.
|
||||
Generally harmless. For example, when fuzzing sqlite, an input like `select
|
||||
random();` will trigger a variable execution path.
|
||||
- Multiple threads executing at once in semi-random order. This is harmless when
|
||||
the 'stability' metric stays over 90% or so, but can become an issue if not.
|
||||
Here's what to try:
|
||||
* Use afl-clang-fast from [instrumentation](../instrumentation/) - it uses a
|
||||
thread-local tracking model that is less prone to concurrency issues,
|
||||
* See if the target can be compiled or run without threads. Common
|
||||
`./configure` options include `--without-threads`, `--disable-pthreads`, or
|
||||
`--disable-openmp`.
|
||||
* Replace pthreads with GNU Pth (https://www.gnu.org/software/pth/), which
|
||||
allows you to use a deterministic scheduler.
|
||||
- In persistent mode, minor drops in the "stability" metric can be normal,
|
||||
because not all the code behaves identically when re-entered; but major dips
|
||||
may signify that the code within `__AFL_LOOP()` is not behaving correctly on
|
||||
subsequent iterations (e.g., due to incomplete clean-up or reinitialization of
|
||||
the state) and that most of the fuzzing effort goes to waste.
|
||||
|
||||
The paths where variable behavior is detected are marked with a matching entry
|
||||
in the `<out_dir>/queue/.state/variable_behavior/` directory, so you can look
|
||||
them up easily.
|
||||
|
||||
### CPU load
|
||||
|
||||
```
|
||||
[cpu: 25%]
|
||||
```
|
||||
|
||||
This tiny widget shows the apparent CPU utilization on the local system. It is
|
||||
calculated by taking the number of processes in the "runnable" state, and then
|
||||
comparing it to the number of logical cores on the system.
|
||||
|
||||
If the value is shown in green, you are using fewer CPU cores than available on
|
||||
your system and can probably parallelize to improve performance; for tips on how
|
||||
to do that, see
|
||||
[fuzzing_in_depth.md:3c) Using multiple cores](fuzzing_in_depth.md#c-using-multiple-cores).
|
||||
|
||||
If the value is shown in red, your CPU is *possibly* oversubscribed, and running
|
||||
additional fuzzers may not give you any benefits.
|
||||
|
||||
Of course, this benchmark is very simplistic; it tells you how many processes
|
||||
are ready to run, but not how resource-hungry they may be. It also doesn't
|
||||
distinguish between physical cores, logical cores, and virtualized CPUs; the
|
||||
performance characteristics of each of these will differ quite a bit.
|
||||
|
||||
If you want a more accurate measurement, you can run the `afl-gotcpu` utility
|
||||
from the command line.
|
||||
|
||||
## Interpreting output
|
||||
|
||||
See [#understanding-the-status-screen](#understanding-the-status-screen) for
|
||||
information on how to interpret the displayed stats and monitor the health of
|
||||
the process. Be sure to consult this file especially if any UI elements are
|
||||
highlighted in red.
|
||||
|
||||
The fuzzing process will continue until you press Ctrl-C. At a minimum, you want
|
||||
to allow the fuzzer to complete one queue cycle, which may take anywhere from a
|
||||
couple of hours to a week or so.
|
||||
|
||||
There are three subdirectories created within the output directory and updated
|
||||
in real-time:
|
||||
|
||||
- queue/ - test cases for every distinctive execution path, plus all the
|
||||
starting files given by the user. This is the synthesized corpus.
|
||||
|
||||
Before using this corpus for any other purposes, you can shrink
|
||||
it to a smaller size using the afl-cmin tool (see the example after this list). The tool will find
|
||||
a smaller subset of files offering equivalent edge coverage.
|
||||
|
||||
- crashes/ - unique test cases that cause the tested program to receive a fatal
|
||||
signal (e.g., SIGSEGV, SIGILL, SIGABRT). The entries are grouped by
|
||||
the received signal.
|
||||
|
||||
- hangs/ - unique test cases that cause the tested program to time out. The
|
||||
default time limit before something is classified as a hang is the
|
||||
larger of 1 second and the value of the -t parameter. The value can
|
||||
be fine-tuned by setting AFL_HANG_TMOUT, but this is rarely
|
||||
necessary.
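For the corpus shrinking mentioned in the queue/ entry above, a minimal sketch could look like this (the directory names assume the default `-o out` layout and a file-input target; adjust to your setup):

```shell
# Hypothetical paths: reduce the synthesized corpus to a coverage-equivalent subset
afl-cmin -i out/default/queue -o corpus_min -- ./target @@
```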
|
||||
|
||||
Crashes and hangs are considered "unique" if the associated execution paths
|
||||
involve any state transitions not seen in previously-recorded faults. If a
|
||||
single bug can be reached in multiple ways, there will be some count inflation
|
||||
early in the process, but this should quickly taper off.
|
||||
|
||||
The file names for crashes and hangs are correlated with the parent,
|
||||
non-faulting queue entries. This should help with debugging.
|
||||
|
||||
## Visualizing
|
||||
|
||||
If you have gnuplot installed, you can also generate some pretty graphs for any
|
||||
active fuzzing task using afl-plot. For an example of what this looks like, see
|
||||
[https://lcamtuf.coredump.cx/afl/plot/](https://lcamtuf.coredump.cx/afl/plot/).
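A minimal invocation sketch, assuming the default instance name and a graph output directory of your choice:

```shell
# Hypothetical paths: render gnuplot-based progress graphs for one instance
afl-plot out/default /tmp/afl_graphs
```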
|
||||
|
||||
You can also manually build and install afl-plot-ui, which is a helper utility
|
||||
for showing the graphs generated by afl-plot in a graphical window using GTK.
|
||||
You can build and install it as follows:
|
||||
|
||||
```shell
|
||||
sudo apt install libgtk-3-0 libgtk-3-dev pkg-config
|
||||
cd utils/plot_ui
|
||||
make
|
||||
cd ../../
|
||||
sudo make install
|
||||
```
|
||||
|
||||
To learn more about remote monitoring and metrics visualization with StatsD, see
|
||||
[rpc_statsd.md](rpc_statsd.md).
|
||||
|
||||
### Addendum: status and plot files
|
||||
|
||||
For unattended operation, some of the key status screen information can be also
|
||||
found in a machine-readable format in the fuzzer_stats file in the output
|
||||
directory. This includes:
|
||||
|
||||
- `start_time` - unix time indicating the start time of afl-fuzz
|
||||
- `last_update` - unix time corresponding to the last update of this file
|
||||
- `run_time` - run time in seconds to the last update of this file
|
||||
- `fuzzer_pid` - PID of the fuzzer process
|
||||
- `cycles_done` - queue cycles completed so far
|
||||
- `cycles_wo_finds` - number of cycles without any new paths found
|
||||
- `execs_done` - number of execve() calls attempted
|
||||
- `execs_per_sec` - overall number of execs per second
|
||||
- `corpus_count` - total number of entries in the queue
|
||||
- `corpus_favored` - number of queue entries that are favored
|
||||
- `corpus_found` - number of entries discovered through local fuzzing
|
||||
- `corpus_imported` - number of entries imported from other instances
|
||||
- `max_depth` - number of levels in the generated data set
|
||||
- `cur_item` - currently processed entry number
|
||||
- `pending_favs` - number of favored entries still waiting to be fuzzed
|
||||
- `pending_total` - number of all entries waiting to be fuzzed
|
||||
- `corpus_variable` - number of test cases showing variable behavior
|
||||
- `stability` - percentage of bitmap bytes that behave consistently
|
||||
- `bitmap_cvg` - percentage of edge coverage found in the map so far
|
||||
- `saved_crashes` - number of unique crashes recorded
|
||||
- `saved_hangs` - number of unique hangs encountered
|
||||
- `last_find` - seconds since the last find was found
|
||||
- `last_crash` - seconds since the last crash was found
|
||||
- `last_hang` - seconds since the last hang was found
|
||||
- `execs_since_crash` - execs since the last crash was found
|
||||
- `exec_timeout` - the -t command line value
|
||||
- `slowest_exec_ms` - real time of the slowest execution in ms
|
||||
- `peak_rss_mb` - max rss usage reached during fuzzing in MB
|
||||
- `edges_found` - how many edges have been found
|
||||
- `var_byte_count` - how many edges are non-deterministic
|
||||
- `afl_banner` - banner text (e.g., the target name)
|
||||
- `afl_version` - the version of AFL++ used
|
||||
- `target_mode` - default, persistent, qemu, unicorn, non-instrumented
|
||||
- `command_line` - full command line used for the fuzzing session
|
||||
|
||||
Most of these map directly to the UI elements discussed earlier on.
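For unattended monitoring, a small sketch like the following may be handy (the path assumes the default `-o out` directory and instance name):

```shell
# Hypothetical path: print a few key fields from the machine-readable stats
grep -E '^(run_time|execs_per_sec|corpus_count|saved_crashes|stability)' \
  out/default/fuzzer_stats
```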
|
||||
|
||||
On top of that, you can also find an entry called `plot_data`, containing a
|
||||
plottable history for most of these fields. If you have gnuplot installed, you
|
||||
can turn this into a nice progress report with the included `afl-plot` tool.
|
||||
|
||||
### Addendum: automatically sending metrics with StatsD
|
||||
|
||||
In a CI environment or when running multiple fuzzers, it can be tedious to log
|
||||
into each of them or deploy scripts to read the fuzzer statistics. Using
|
||||
`AFL_STATSD` (and the other related environment variables `AFL_STATSD_HOST`,
|
||||
`AFL_STATSD_PORT`, `AFL_STATSD_TAGS_FLAVOR`) you can automatically send metrics
|
||||
to your favorite StatsD server. Depending on your StatsD server, you will be
|
||||
able to monitor, trigger alerts, or perform actions based on these metrics
|
||||
(e.g.: alert on slow exec/s for a new build, threshold of crashes, time since
|
||||
last crash > X, etc.).
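A hedged configuration sketch (host, port, and flavor are examples; pick the values matching your own StatsD setup):

```shell
# Hypothetical setup: emit metrics to a local StatsD daemon with Datadog-style tags
export AFL_STATSD=1
export AFL_STATSD_HOST=127.0.0.1
export AFL_STATSD_PORT=8125
export AFL_STATSD_TAGS_FLAVOR=dogstatsd
afl-fuzz -i input -o output -- ./target @@
```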
|
||||
|
||||
The selected metrics are a subset of all the metrics found in the status and in
|
||||
the plot file. The list is the following: `cycle_done`, `cycles_wo_finds`,
|
||||
`execs_done`,`execs_per_sec`, `corpus_count`, `corpus_favored`, `corpus_found`,
|
||||
`corpus_imported`, `max_depth`, `cur_item`, `pending_favs`, `pending_total`,
|
||||
`corpus_variable`, `saved_crashes`, `saved_hangs`, `total_crashes`,
|
||||
`slowest_exec_ms`, `edges_found`, `var_byte_count`, `havoc_expansion`. Their
|
||||
definitions can be found in the addendum above.
|
||||
|
||||
When using multiple fuzzer instances with StatsD, it is *strongly* recommended
|
||||
to set up the flavor (`AFL_STATSD_TAGS_FLAVOR`) to match your StatsD server. This
|
||||
will allow you to see individual fuzzer performance, detect bad ones, see the
|
||||
progress of each strategy...
|
192
docs/best_practices.md
Normal file
@ -0,0 +1,192 @@
|
||||
# Best practices
|
||||
|
||||
## Contents
|
||||
|
||||
### Targets
|
||||
|
||||
* [Fuzzing a target with source code available](#fuzzing-a-target-with-source-code-available)
|
||||
* [Fuzzing a target with dlopen() instrumented libraries](#fuzzing-a-target-with-dlopen-instrumented-libraries)
|
||||
* [Fuzzing a binary-only target](#fuzzing-a-binary-only-target)
|
||||
* [Fuzzing a GUI program](#fuzzing-a-gui-program)
|
||||
* [Fuzzing a network service](#fuzzing-a-network-service)
|
||||
|
||||
### Improvements
|
||||
|
||||
* [Improving speed](#improving-speed)
|
||||
* [Improving stability](#improving-stability)
|
||||
|
||||
## Targets
|
||||
|
||||
### Fuzzing a target with source code available
|
||||
|
||||
To learn how to fuzz a target if source code is available, see
|
||||
[fuzzing_in_depth.md](fuzzing_in_depth.md).
|
||||
|
||||
### Fuzzing a target with dlopen instrumented libraries
|
||||
|
||||
If a source code based fuzzing target loads instrumented libraries with
|
||||
dlopen() after the forkserver has been activated and non-colliding coverage
|
||||
instrumentation is used (PCGUARD, which is the default, or LTO), then this is
an issue, because it would enlarge the coverage map, but afl-fuzz doesn't
|
||||
know about it.
|
||||
|
||||
The solution is to use `AFL_PRELOAD` for all dlopen()'ed libraries to
|
||||
ensure that all coverage targets are present on startup in the target,
|
||||
even if accessed only later with dlopen().
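A minimal sketch, assuming two hypothetical plugin libraries that the target otherwise loads with dlopen() at runtime:

```shell
# Hypothetical library paths: force-load the instrumented plugins at startup
AFL_PRELOAD=/path/to/libplugin_a.so:/path/to/libplugin_b.so \
  afl-fuzz -i input -o output -- ./target @@
```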
|
||||
|
||||
For PCGUARD instrumentation `abort()` is called if this is detected, for LTO
|
||||
there will either be no coverage for the instrumented dlopen()'ed libraries or
|
||||
you will see lots of crashes in the UI.
|
||||
|
||||
Note that this is not an issue if you use the inferior `afl-gcc-fast`,
|
||||
`afl-gcc`, or `AFL_LLVM_INSTRUMENT=CLASSIC/NGRAM/CTX afl-clang-fast`
|
||||
instrumentation.
|
||||
|
||||
### Fuzzing a binary-only target
|
||||
|
||||
For a comprehensive guide, see
|
||||
[fuzzing_binary-only_targets.md](fuzzing_binary-only_targets.md).
|
||||
|
||||
### Fuzzing a GUI program
|
||||
|
||||
If the GUI program can read the fuzz data from a file (via the command line, a
|
||||
fixed location or via an environment variable) without needing any user
|
||||
interaction, then it would be suitable for fuzzing.
|
||||
|
||||
Otherwise, it is not possible without modifying the source code - which is a
|
||||
very good idea anyway as the GUI functionality is a huge CPU/time overhead for
|
||||
the fuzzing.
|
||||
|
||||
So create a new `main()` that just reads the test case and calls the
|
||||
functionality for processing the input that the GUI program is using.
|
||||
|
||||
### Fuzzing a network service
|
||||
|
||||
Fuzzing a network service does not work "out of the box".
|
||||
|
||||
Using a network channel is inadequate for several reasons:
|
||||
- it slows down the fuzzing speed by a factor of 10-20,
- it does not scale easily to fuzzing multiple instances,
- instead of a single initial data packet, stateful protocols often need a
  back-and-forth interplay of packets (which is totally unsupported by most
  coverage-aware fuzzers).
|
||||
|
||||
The established method to fuzz network services is to modify the source code to
read from a file or stdin (fd 0) - or, even faster, via shared memory. Combine
this with persistent mode
[instrumentation/README.persistent_mode.md](../instrumentation/README.persistent_mode.md)
and you get a performance gain of x10 instead of a performance loss of over x10
- that is a x100 difference!
|
||||
|
||||
If modifying the source is not an option (e.g., because you only have a binary
|
||||
and perform binary fuzzing) you can also use a shared library with AFL_PRELOAD
|
||||
to emulate the network. This is also much faster than the real network would be.
|
||||
See [utils/socket_fuzzing/](../utils/socket_fuzzing/).
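A hedged sketch of the preload approach (the shim name and path depend on how utils/socket_fuzzing/ is built on your system, so check its README; the exact target invocation also depends on your harness):

```shell
# Hypothetical path: emulate the network socket via a preloaded shim
AFL_PRELOAD=/path/to/AFLplusplus/utils/socket_fuzzing/socketfuzz64.so \
  afl-fuzz -i input -o output -- ./network_server
```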
|
||||
|
||||
There is an outdated AFL++ branch that implements networking if you are
|
||||
desperate though:
|
||||
[https://github.com/AFLplusplus/AFLplusplus/tree/networking](https://github.com/AFLplusplus/AFLplusplus/tree/networking)
|
||||
- however, a better option is AFLnet
|
||||
([https://github.com/aflnet/aflnet](https://github.com/aflnet/aflnet)) which
|
||||
allows you to define network state with different types of data packets.
|
||||
|
||||
## Improvements
|
||||
|
||||
### Improving speed
|
||||
|
||||
1. Use [llvm_mode](../instrumentation/README.llvm.md): afl-clang-lto (llvm >=
|
||||
11) or afl-clang-fast (llvm >= 9 recommended).
|
||||
2. Use [persistent mode](../instrumentation/README.persistent_mode.md) (x2-x20
|
||||
speed increase).
|
||||
3. Instrument just what you are interested in, see
|
||||
[instrumentation/README.instrument_list.md](../instrumentation/README.instrument_list.md).
|
||||
4. If you do not use shmem persistent mode, use `AFL_TMPDIR` to put the input
|
||||
file directory on a tmpfs location (a mount sketch follows this list), see
|
||||
[env_variables.md](env_variables.md).
|
||||
5. Improve Linux kernel performance: modify `/etc/default/grub`, set
|
||||
`GRUB_CMDLINE_LINUX_DEFAULT="ibpb=off ibrs=off kpti=off l1tf=off mds=off
|
||||
mitigations=off no_stf_barrier noibpb noibrs nopcid nopti
|
||||
nospec_store_bypass_disable nospectre_v1 nospectre_v2 pcid=off pti=off
|
||||
spec_store_bypass_disable=off spectre_v2=off stf_barrier=off"`; then
|
||||
`update-grub` and `reboot` (warning: makes the system less secure).
|
||||
6. Running on an `ext2` filesystem with `noatime` mount option will be a bit
|
||||
faster than on any journaling filesystem.
|
||||
7. Use your cores
|
||||
([fuzzing_in_depth.md:3c) Using multiple cores](fuzzing_in_depth.md#c-using-multiple-cores))!
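As referenced in item 4, a sketch of putting the input file directory on a RAM-backed filesystem (mount point and size are placeholder examples):

```shell
# Hypothetical mount point: keep afl-fuzz's temporary input file off the disk
sudo mkdir -p /mnt/afl-ramdisk
sudo mount -t tmpfs -o size=512M tmpfs /mnt/afl-ramdisk
export AFL_TMPDIR=/mnt/afl-ramdisk
afl-fuzz -i input -o output -- ./target @@
```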
|
||||
|
||||
### Improving stability
|
||||
|
||||
For fuzzing, a 100% stable target that covers all edges is the best case. A 90%
|
||||
stable target that covers all edges is, however, better than a 100% stable
|
||||
target that ignores 10% of the edges.
|
||||
|
||||
With instability, you basically have a partial coverage loss on an edge; with
|
||||
ignored functions, you have a full loss on those edges.
|
||||
|
||||
There are functions that are unstable, but also provide value to coverage, e.g.,
|
||||
init functions that use fuzz data as input. If, however, a function that has
|
||||
nothing to do with the input data is the source of instability, e.g., checking
|
||||
jitter, or is a hash map function etc., then it should not be instrumented.
|
||||
|
||||
To be able to exclude these functions (based on AFL++'s measured stability), the
|
||||
following process will allow you to identify the functions with variable edges.
|
||||
|
||||
Four steps are required. The process needs quite some knowledge of coding and/or
disassembly and is effectively possible only with `afl-clang-fast` `PCGUARD` and
`afl-clang-lto` `LTO` instrumentation. A condensed command-line sketch follows
the four steps below.
|
||||
|
||||
1. Instrument to be able to find the responsible function(s):
|
||||
|
||||
a) For LTO instrumented binaries, this can be documented during compile
|
||||
time, just set `export AFL_LLVM_DOCUMENT_IDS=/path/to/a/file`. This file
|
||||
will have one assigned edge ID and the corresponding function per line.
|
||||
|
||||
b) For PCGUARD instrumented binaries, it is much more difficult. Here you
|
||||
can either modify the `__sanitizer_cov_trace_pc_guard` function in
|
||||
`instrumentation/afl-llvm-rt.o.c` to write a backtrace to a file if the
|
||||
ID in `__afl_area_ptr[*guard]` is one of the unstable edge IDs. (Example
|
||||
code is already there). Then recompile and reinstall `llvm_mode` and
|
||||
rebuild your target. Run the recompiled target with `afl-fuzz` for a
|
||||
while and then check the file that you wrote with the backtrace
|
||||
information. Alternatively, you can use `gdb` to hook
|
||||
`__sanitizer_cov_trace_pc_guard_init` on start, check to which memory
|
||||
address the edge ID value is written, and set a write breakpoint to that
|
||||
address (`watch 0x.....`).
|
||||
|
||||
c) In other instrumentation types, this is not possible. So just recompile
|
||||
with the two mentioned above. This is just for identifying the functions
|
||||
that have unstable edges.
|
||||
|
||||
2. Identify which edge ID numbers are unstable.
|
||||
|
||||
Run the target with `export AFL_DEBUG=1` for a few minutes, then terminate it.
|
||||
The out/fuzzer_stats file will then show the edge IDs that were identified
|
||||
as unstable in the `var_bytes` entry. You can match these numbers directly
|
||||
to the data you created in the first step. Now you know which functions are
|
||||
responsible for the instability.
|
||||
|
||||
3. Create a text file with the filenames/functions
|
||||
|
||||
Identify which source code files contain the functions that you need to
|
||||
remove from instrumentation, or just specify the functions you want to skip
|
||||
for instrumentation. Note that optimization might inline functions!
|
||||
|
||||
Follow this document on how to do this:
|
||||
[instrumentation/README.instrument_list.md](../instrumentation/README.instrument_list.md).
|
||||
|
||||
If `PCGUARD` is used, then you need to follow this guide (needs llvm 12+!):
|
||||
[https://clang.llvm.org/docs/SanitizerCoverage.html#partially-disabling-instrumentation](https://clang.llvm.org/docs/SanitizerCoverage.html#partially-disabling-instrumentation)
|
||||
|
||||
Only exclude those functions from instrumentation that provide no value for
|
||||
coverage - that is, if they do not process any fuzz data directly or
|
||||
indirectly (e.g., hash maps, thread management etc.). If, however, a
|
||||
function directly or indirectly handles fuzz data, then you should not put
|
||||
the function in a deny instrumentation list and rather live with the
|
||||
instability it comes with.
|
||||
|
||||
4. Recompile the target
|
||||
|
||||
Recompile, fuzz it, be happy :)
|
||||
|
||||
This link explains this process for
|
||||
[Fuzzbench](https://github.com/google/fuzzbench/issues/677).
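As announced above, here is a condensed command-line sketch of the process for an LTO build (all paths, file names, and the single-file compile are placeholder assumptions; the fuzzer_stats location depends on your `-o` directory and instance name):

```shell
# Step 1a (hypothetical paths): record the edge-ID-to-function mapping at compile time
export AFL_LLVM_DOCUMENT_IDS=/tmp/edge_ids.txt
afl-clang-lto -o target_lto target.c

# Step 2: fuzz briefly with debug output, then read the unstable edge IDs
AFL_DEBUG=1 afl-fuzz -i input -o output -- ./target_lto @@   # stop after a few minutes
grep var_bytes output/default/fuzzer_stats

# Step 3: match those IDs against /tmp/edge_ids.txt to decide which functions
#         to put on the instrumentation deny list, then recompile (step 4)
```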
|
@ -1,223 +0,0 @@
|
||||
# Fuzzing binary-only programs with afl++
|
||||
|
||||
afl++, libfuzzer and others are great if you have the source code, and
|
||||
they allow for very fast, coverage-guided fuzzing.
|
||||
|
||||
However, if there is only the binary program and no source code available,
|
||||
then standard `afl-fuzz -n` (non-instrumented mode) is not effective.
|
||||
|
||||
The following is a description of how these binaries can be fuzzed with afl++.
|
||||
|
||||
|
||||
## TL;DR:
|
||||
|
||||
qemu_mode in persistent mode is the fastest - if the stability is
|
||||
high enough. Otherwise, try retrowrite or afl-dyninst, and if these
|
||||
fail too, then try standard qemu_mode with AFL_ENTRYPOINT set to where you need it.
|
||||
|
||||
If your target is a library, use utils/afl_frida/.
|
||||
|
||||
If your target is non-Linux, then use unicorn_mode/.
|
||||
|
||||
|
||||
## QEMU
|
||||
|
||||
QEMU is the "native" solution for binary-only programs in afl++.
|
||||
It is available in the ./qemu_mode/ directory and once compiled it can
|
||||
be accessed by the afl-fuzz -Q command line option.
|
||||
It is the easiest alternative to use and even works for cross-platform binaries.
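A minimal invocation sketch (binary name and corpus directories are placeholders):

```shell
# Hypothetical example: fuzz an uninstrumented binary through QEMU mode
afl-fuzz -Q -i input -o output -- ./closed_source_binary @@
```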
|
||||
|
||||
The speed decrease is about 50%.
|
||||
However, various options exist to increase the speed:
|
||||
- using AFL_ENTRYPOINT to move the forkserver entry to a later basic block in
|
||||
the binary (+5-10% speed)
|
||||
- using persistent mode [qemu_mode/README.persistent.md](../qemu_mode/README.persistent.md)
|
||||
this will result in 150-300% overall speed increase - so 3-8x the original
|
||||
qemu_mode speed!
|
||||
- using AFL_CODE_START/AFL_CODE_END to only instrument specific parts
|
||||
|
||||
Note that there is also honggfuzz: [https://github.com/google/honggfuzz](https://github.com/google/honggfuzz)
|
||||
which now has a qemu_mode, but its performance is just 1.5% ...
|
||||
|
||||
As it is included in afl++ this needs no URL.
|
||||
|
||||
If you would like to code a customized fuzzer without much work, we highly
|
||||
recommend checking out our sister project LibAFL, which will support QEMU
|
||||
too:
|
||||
[https://github.com/AFLplusplus/LibAFL](https://github.com/AFLplusplus/LibAFL)
|
||||
|
||||
|
||||
## AFL FRIDA
|
||||
|
||||
In frida_mode, you can fuzz binary-only targets as easily as with QEMU,
|
||||
with the advantage that frida_mode also works on macOS (both Intel and M1).
|
||||
|
||||
If you want to fuzz a binary-only library, then you can fuzz it with
frida-gum via utils/afl_frida/. You will have to write a harness to
call the target function in the library; use afl-frida.c as a template.
|
||||
|
||||
Both come with afl++ so this needs no URL.
|
||||
|
||||
You can also perform remote fuzzing with Frida, e.g., if you want to fuzz
on iPhone or Android devices. For this, you can use
[https://github.com/ttdennis/fpicker/](https://github.com/ttdennis/fpicker/)
as an intermediate that uses afl++ for fuzzing.
|
||||
|
||||
If you would like to code a customized fuzzer without much work, we highly
|
||||
recommend checking out our sister project LibAFL, which supports Frida too:
|
||||
[https://github.com/AFLplusplus/LibAFL](https://github.com/AFLplusplus/LibAFL)
|
||||
Working examples already exist :-)
|
||||
|
||||
|
||||
## WINE+QEMU
|
||||
|
||||
Wine mode can run Win32 PE binaries with the QEMU instrumentation.
|
||||
It needs Wine, python3 and the pefile python package installed.
|
||||
|
||||
As it is included in afl++ this needs no URL.
|
||||
|
||||
|
||||
## UNICORN
|
||||
|
||||
Unicorn is a fork of QEMU. The instrumentation is, therefore, very similar.
|
||||
In contrast to QEMU, Unicorn does not offer a full system or even userland
|
||||
emulation. Runtime environment and/or loaders have to be written from scratch,
|
||||
if needed. On top, block chaining has been removed. This means the speed boost
|
||||
introduced in the patched QEMU Mode of afl++ cannot simply be ported over to
|
||||
Unicorn. For further information, check out [unicorn_mode/README.md](../unicorn_mode/README.md).
|
||||
|
||||
As it is included in afl++ this needs no URL.
|
||||
|
||||
|
||||
## AFL UNTRACER
|
||||
|
||||
If you want to fuzz a binary-only shared library, then you can fuzz it with
|
||||
utils/afl_untracer/; use afl-untracer.c as a template.
|
||||
It is slower than AFL FRIDA (see above).
|
||||
|
||||
|
||||
## DYNINST
|
||||
|
||||
Dyninst is a binary instrumentation framework similar to Pintool and
|
||||
Dynamorio (see far below). However, whereas Pintool and Dynamorio work at
runtime, dyninst instruments the target at load time and then lets it run -
or saves the binary with the changes.
|
||||
This is great for some things, e.g. fuzzing, and not so effective for others,
|
||||
e.g. malware analysis.
|
||||
|
||||
So what we can do with dyninst is take every basic block, put afl's
instrumentation code in there, and then save the binary.
Afterwards, we can just fuzz the newly saved target binary with afl-fuzz.
|
||||
Sounds great? It is. The issue, though: it is a non-trivial problem to
insert instructions, which change addresses in the process space, in a way
that everything still works afterwards. Hence, more often than not, binaries
crash when they are run.
|
||||
|
||||
The speed decrease is about 15-35%, depending on the optimization options
|
||||
used with afl-dyninst.
|
||||
|
||||
So if Dyninst works, it is the best option available. Otherwise it just
|
||||
doesn't work well.
|
||||
|
||||
[https://github.com/vanhauser-thc/afl-dyninst](https://github.com/vanhauser-thc/afl-dyninst)
|
||||
|
||||
|
||||
## RETROWRITE, ZAFL, ... other binary rewriter
|
||||
|
||||
If you have an x86/x86_64 binary that still has its symbols, is compiled
|
||||
with position independent code (PIC/PIE), and does not use most of the C++
|
||||
features, then the retrowrite solution might be for you.
|
||||
It decompiles to ASM files which can then be instrumented with afl-gcc.
|
||||
|
||||
It is at about 80-85% performance.
|
||||
|
||||
[https://git.zephyr-software.com/opensrc/zafl](https://git.zephyr-software.com/opensrc/zafl)
|
||||
[https://github.com/HexHive/retrowrite](https://github.com/HexHive/retrowrite)
|
||||
|
||||
|
||||
## MCSEMA
|
||||
|
||||
Theoretically you can also decompile to llvm IR with mcsema, and then
|
||||
use llvm_mode to instrument the binary.
|
||||
Good luck with that.
|
||||
|
||||
[https://github.com/lifting-bits/mcsema](https://github.com/lifting-bits/mcsema)
|
||||
|
||||
|
||||
## INTEL-PT
|
||||
|
||||
If you have a newer Intel CPU, you can make use of Intel's processor trace (PT).
|
||||
The big issue with Intel's PT is the small buffer size and the complex
|
||||
encoding of the debug information collected through PT.
|
||||
This makes the decoding very CPU-intensive and hence slow.
|
||||
As a result, the overall speed decrease is about 70-90% (depending on
|
||||
the implementation and other factors).
|
||||
|
||||
There are two afl intel-pt implementations:
|
||||
|
||||
1. [https://github.com/junxzm1990/afl-pt](https://github.com/junxzm1990/afl-pt)
|
||||
=> this needs Ubuntu 14.04.05 without any updates and the 4.4 kernel.
|
||||
|
||||
2. [https://github.com/hunter-ht-2018/ptfuzzer](https://github.com/hunter-ht-2018/ptfuzzer)
|
||||
=> this needs a 4.14 or 4.15 kernel. The "nopti" kernel boot option must
|
||||
be used. This one is faster than the other.
|
||||
|
||||
Note that there is also honggfuzz: https://github.com/google/honggfuzz
|
||||
But its IPT performance is just 6%!
|
||||
|
||||
|
||||
## CORESIGHT
|
||||
|
||||
Coresight is ARM's answer to Intel's PT.
|
||||
There is no implementation so far which handles Coresight, and getting it
working on an ARM Linux system is very difficult because building a custom
kernel for an embedded system is hard. Finding a chip that has Coresight is
difficult too.
|
||||
My guess is that it is slower than Qemu, but faster than Intel PT.
|
||||
|
||||
If anyone finds any coresight implementation for afl please ping me: vh@thc.org
|
||||
|
||||
|
||||
## PIN & DYNAMORIO
|
||||
|
||||
Pintool and Dynamorio are dynamic instrumentation engines, and they can be
|
||||
used for getting basic block information at runtime.
|
||||
Pintool is only available for Intel x32/x64 on Linux, Mac OS and Windows,
|
||||
whereas Dynamorio is additionally available for ARM and AARCH64.
|
||||
Dynamorio is also 10x faster than Pintool.
|
||||
|
||||
The big issue with Dynamorio (and therefore Pintool too) is speed.
|
||||
Dynamorio has a speed decrease of 98-99%, and
|
||||
Pintool has a speed decrease of 99.5%.
|
||||
|
||||
Hence Dynamorio is the option to go for if everything else fails, and Pintool
|
||||
only if Dynamorio fails too.
|
||||
|
||||
Dynamorio solutions:
|
||||
* [https://github.com/vanhauser-thc/afl-dynamorio](https://github.com/vanhauser-thc/afl-dynamorio)
|
||||
* [https://github.com/mxmssh/drAFL](https://github.com/mxmssh/drAFL)
|
||||
* [https://github.com/googleprojectzero/winafl/](https://github.com/googleprojectzero/winafl/) <= very good but windows only
|
||||
|
||||
Pintool solutions:
|
||||
* [https://github.com/vanhauser-thc/afl-pin](https://github.com/vanhauser-thc/afl-pin)
|
||||
* [https://github.com/mothran/aflpin](https://github.com/mothran/aflpin)
|
||||
* [https://github.com/spinpx/afl_pin_mode](https://github.com/spinpx/afl_pin_mode) <= only old Pintool version supported
|
||||
|
||||
|
||||
## Non-AFL solutions
|
||||
|
||||
There are many binary-only fuzzing frameworks.
|
||||
Some are great for CTFs but don't work with large binaries, others are very
|
||||
slow but have good path discovery, and some are very hard to set up ...
|
||||
|
||||
* QSYM: [https://github.com/sslab-gatech/qsym](https://github.com/sslab-gatech/qsym)
|
||||
* Manticore: [https://github.com/trailofbits/manticore](https://github.com/trailofbits/manticore)
|
||||
* S2E: [https://github.com/S2E](https://github.com/S2E)
|
||||
* Tinyinst: [https://github.com/googleprojectzero/TinyInst](https://github.com/googleprojectzero/TinyInst) (Mac/Windows only)
|
||||
* Jackalope: [https://github.com/googleprojectzero/Jackalope](https://github.com/googleprojectzero/Jackalope)
|
||||
* ... please send me any good ones that are missing
|
||||
|
||||
|
||||
## Closing words
|
||||
|
||||
That's it! News, corrections, updates? Send an email to vh@thc.org
|
@ -1,16 +1,16 @@
|
||||
# Custom Mutators in AFL++
|
||||
|
||||
This file describes how you can implement custom mutations to be used in AFL.
|
||||
For now, we support C/C++ library and Python module, collectivelly named as the
|
||||
For now, we support C/C++ library and Python module, collectively named as the
|
||||
custom mutator.
|
||||
|
||||
There is also experimental support for Rust in `custom_mutators/rust`.
|
||||
Please refer to that directory for documentation.
|
||||
Run ```cargo doc -p custom_mutator --open``` in that directory to view the
|
||||
documentation in your web browser.
|
||||
There is also experimental support for Rust in `custom_mutators/rust`. For
|
||||
documentation, refer to that directory. Run `cargo doc -p custom_mutator --open`
|
||||
in that directory to view the documentation in your web browser.
|
||||
|
||||
Implemented by
|
||||
- C/C++ library (`*.so`): Khaled Yakdan from Code Intelligence (<yakdan@code-intelligence.de>)
|
||||
- C/C++ library (`*.so`): Khaled Yakdan from Code Intelligence
|
||||
(<yakdan@code-intelligence.de>)
|
||||
- Python module: Christian Holler from Mozilla (<choller@mozilla.com>)
|
||||
|
||||
## 1) Introduction
|
||||
@ -21,20 +21,30 @@ fuzzing by using libraries that perform mutations according to a given grammar.
|
||||
|
||||
The custom mutator is passed to `afl-fuzz` via the `AFL_CUSTOM_MUTATOR_LIBRARY`
|
||||
or `AFL_PYTHON_MODULE` environment variable, and must export a fuzz function.
|
||||
Now afl also supports multiple custom mutators which can be specified in the same `AFL_CUSTOM_MUTATOR_LIBRARY` environment variable like this.
|
||||
Now AFL++ also supports multiple custom mutators which can be specified in the
|
||||
same `AFL_CUSTOM_MUTATOR_LIBRARY` environment variable like this.
|
||||
|
||||
```bash
|
||||
export AFL_CUSTOM_MUTATOR_LIBRARY="full/path/to/mutator_first.so;full/path/to/mutator_second.so"
|
||||
```
|
||||
Please see [APIs](#2-apis) and [Usage](#3-usage) for detail.
|
||||
|
||||
The custom mutation stage is set to be the first non-deterministic stage (right before the havoc stage).
|
||||
For details, see [APIs](#2-apis) and [Usage](#3-usage).
|
||||
|
||||
The custom mutation stage is set to be the first non-deterministic stage (right
|
||||
before the havoc stage).
|
||||
|
||||
Note: If `AFL_CUSTOM_MUTATOR_ONLY` is set, all mutations will solely be
|
||||
performed with the custom mutator.
|
||||
|
||||
## 2) APIs
|
||||
|
||||
**IMPORTANT NOTE**: If you use our C/C++ API and you want to increase the size
|
||||
of an `**out_buf` buffer, you have to use `afl_realloc()` for this, so include
|
||||
`include/alloc-inl.h` - otherwise afl-fuzz will crash when trying to free
|
||||
your buffers.
|
||||
|
||||
C/C++:
|
||||
|
||||
```c
|
||||
void *afl_custom_init(afl_state_t *afl, unsigned int seed);
|
||||
unsigned int afl_custom_fuzz_count(void *data, const unsigned char *buf, size_t buf_size);
|
||||
@ -53,6 +63,7 @@ void afl_custom_deinit(void *data);
|
||||
```
|
||||
|
||||
Python:
|
||||
|
||||
```python
|
||||
def init(seed):
|
||||
pass
|
||||
@ -101,7 +112,8 @@ def deinit(): # optional for Python
|
||||
|
||||
- `init`:
|
||||
|
||||
This method is called when AFL++ starts up and is used to seed RNG and set up buffers and state.
|
||||
This method is called when AFL++ starts up and is used to seed RNG and set
|
||||
up buffers and state.
|
||||
|
||||
- `queue_get` (optional):
|
||||
|
||||
@ -110,27 +122,26 @@ def deinit(): # optional for Python
|
||||
|
||||
- `fuzz_count` (optional):
|
||||
|
||||
When a queue entry is selected to be fuzzed, afl-fuzz selects the number
|
||||
of fuzzing attempts with this input based on a few factors.
|
||||
If however the custom mutator wants to set this number instead on how often
|
||||
it is called for a specific queue entry, use this function.
|
||||
This function is most useful if `AFL_CUSTOM_MUTATOR_ONLY` is **not** used.
|
||||
When a queue entry is selected to be fuzzed, afl-fuzz selects the number of
|
||||
fuzzing attempts with this input based on a few factors. If, however, the
|
||||
custom mutator instead wants to set the number of times it is called
|
||||
for a specific queue entry, use this function. This function is most useful
|
||||
if `AFL_CUSTOM_MUTATOR_ONLY` is **not** used.
|
||||
|
||||
- `fuzz` (optional):
|
||||
|
||||
This method performs custom mutations on a given input. It also accepts an
|
||||
additional test case.
|
||||
Note that this function is optional - but it makes sense to use it.
|
||||
You would only skip this if `post_process` is used to fix checksums etc.
|
||||
so if you are using it e.g. as a post processing library.
|
||||
additional test case. Note that this function is optional - but it makes
|
||||
sense to use it. You would only skip this if `post_process` is used to fix
|
||||
checksums etc. so if you are using it, e.g., as a post processing library.
|
||||
Note that a length > 0 *must* be returned!
|
||||
|
||||
- `describe` (optional):
|
||||
|
||||
When this function is called, it shall describe the current testcase,
|
||||
generated by the last mutation. This will be called, for example,
|
||||
to name the written testcase file after a crash occurred.
|
||||
Using it can help to reproduce crashing mutations.
|
||||
When this function is called, it shall describe the current test case,
|
||||
generated by the last mutation. This will be called, for example, to name
|
||||
the written test case file after a crash occurred. Using it can help to
|
||||
reproduce crashing mutations.
|
||||
|
||||
- `havoc_mutation` and `havoc_mutation_probability` (optional):
|
||||
|
||||
@ -142,21 +153,25 @@ def deinit(): # optional for Python
|
||||
- `post_process` (optional):
|
||||
|
||||
For some cases, the format of the mutated data returned from the custom
|
||||
mutator is not suitable to directly execute the target with this input.
|
||||
For example, when using libprotobuf-mutator, the data returned is in a
|
||||
protobuf format which corresponds to a given grammar. In order to execute
|
||||
the target, the protobuf data must be converted to the plain-text format
|
||||
expected by the target. In such scenarios, the user can define the
|
||||
`post_process` function. This function is then transforming the data into the
|
||||
format expected by the API before executing the target.
|
||||
mutator is not suitable to directly execute the target with this input. For
|
||||
example, when using libprotobuf-mutator, the data returned is in a protobuf
|
||||
format which corresponds to a given grammar. In order to execute the target,
|
||||
the protobuf data must be converted to the plain-text format expected by the
|
||||
target. In such scenarios, the user can define the `post_process` function.
|
||||
This function is then transforming the data into the format expected by the
|
||||
API before executing the target.
|
||||
|
||||
This can return any python object that implements the buffer protocol and
|
||||
supports PyBUF_SIMPLE. These include bytes, bytearray, etc.
|
||||
|
||||
You can decide in the post_process mutator to not send the mutated data
|
||||
to the target, e.g. if it is too short, too corrupted, etc. If so,
|
||||
return a NULL buffer and zero length (or a 0 length string in Python).
|
||||
|
||||
- `queue_new_entry` (optional):
|
||||
|
||||
This methods is called after adding a new test case to the queue.
|
||||
If the contents of the file was changed return True, False otherwise.
|
||||
This method is called after adding a new test case to the queue. If the
contents of the file were changed, return True, otherwise False.
|
||||
|
||||
- `introspection` (optional):
|
||||
|
||||
@ -168,8 +183,8 @@ def deinit(): # optional for Python
|
||||
|
||||
The last method to be called, deinitializing the state.
|
||||
|
||||
Note that there are also three functions for trimming as described in the
|
||||
next section.
|
||||
Note that there are also three functions for trimming as described in the next
|
||||
section.
|
||||
|
||||
### Trimming Support
|
||||
|
||||
@ -177,8 +192,8 @@ The generic trimming routines implemented in AFL++ can easily destroy the
|
||||
structure of complex formats, possibly leading to a point where you have a lot
|
||||
of test cases in the queue that your Python module cannot process anymore but
|
||||
your target application still accepts. This is especially the case when your
|
||||
target can process a part of the input (causing coverage) and then errors out
|
||||
on the remaining input.
|
||||
target can process a part of the input (causing coverage) and then errors out on
|
||||
the remaining input.
|
||||
|
||||
In such cases, it makes sense to implement a custom trimming routine. The API
|
||||
consists of multiple methods because after each trimming step, we have to go
|
||||
@ -189,8 +204,9 @@ trimmed input. Here's a quick API description:
|
||||
|
||||
This method is called at the start of each trimming operation and receives
|
||||
the initial buffer. It should return the amount of iteration steps possible
|
||||
on this input (e.g. if your input has n elements and you want to remove them
|
||||
one by one, return n, if you do a binary search, return log(n), and so on).
|
||||
on this input (e.g., if your input has n elements and you want to remove
|
||||
them one by one, return n, if you do a binary search, return log(n), and so
|
||||
on).
|
||||
|
||||
If your trimming algorithm doesn't allow you to determine the number of
|
||||
(remaining) steps easily (esp. while running), then you can alternatively
|
||||
@ -202,21 +218,21 @@ trimmed input. Here's a quick API description:
|
||||
- `trim` (optional)
|
||||
|
||||
This method is called for each trimming operation. It doesn't have any
|
||||
arguments because we already have the initial buffer from `init_trim` and we
|
||||
can memorize the current state in the data variables. This can also save
|
||||
arguments because there is already the initial buffer from `init_trim` and
|
||||
we can memorize the current state in the data variables. This can also save
|
||||
reparsing steps for each iteration. It should return the trimmed input
|
||||
buffer.
|
||||
|
||||
- `post_trim` (optional)
|
||||
|
||||
This method is called after each trim operation to inform you if your
|
||||
trimming step was successful or not (in terms of coverage). If you receive
|
||||
a failure here, you should reset your input to the last known good state.
|
||||
In any case, this method must return the next trim iteration index (from 0
|
||||
to the maximum amount of steps you returned in `init_trim`).
|
||||
trimming step was successful or not (in terms of coverage). If you receive a
|
||||
failure here, you should reset your input to the last known good state. In
|
||||
any case, this method must return the next trim iteration index (from 0 to
|
||||
the maximum amount of steps you returned in `init_trim`).
|
||||
|
||||
Omitting any of the three trimming methods will cause trimming to be disabled
|
||||
and trigger a fallback to the builtin default trimming routine.
|
||||
and trigger a fallback to the built-in default trimming routine.
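
A minimal sketch of how the three trimming methods can interact, assuming a
simple line-based input format where each step tries to drop one line (this is
illustrative only, not the strategy of the bundled example mutator):

```python
# Module-level state shared between the three trimming callbacks.
trim_lines = []
trim_index = 0

def init_trim(buf):
    global trim_lines, trim_index
    trim_lines = bytes(buf).split(b"\n")
    trim_index = 0
    # One step per line that we will try to remove.
    return len(trim_lines)

def trim():
    # Propose the input with the line at 'trim_index' removed.
    candidate = trim_lines[:trim_index] + trim_lines[trim_index + 1:]
    return bytearray(b"\n".join(candidate))

def post_trim(success):
    global trim_lines, trim_index
    if success:
        # The shorter input kept the same coverage: commit the removal.
        trim_lines = trim_lines[:trim_index] + trim_lines[trim_index + 1:]
    # Always advance so the iteration stays within the step count returned
    # by init_trim.
    trim_index += 1
    return trim_index
```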
|
||||
|
||||
### Environment Variables
|
||||
|
||||
@ -224,11 +240,10 @@ Optionally, the following environment variables are supported:
|
||||
|
||||
- `AFL_CUSTOM_MUTATOR_ONLY`
|
||||
|
||||
Disable all other mutation stages. This can prevent broken testcases
|
||||
(those that your Python module can't work with anymore) to fill up your
|
||||
queue. Best combined with a custom trimming routine (see below) because
|
||||
trimming can cause the same test breakage like havoc and splice.
|
||||
|
||||
Disable all other mutation stages. This can prevent broken test cases (those
|
||||
that your Python module can't work with anymore) from filling up your queue. Best
|
||||
combined with a custom trimming routine (see below) because trimming can
|
||||
cause the same test breakage as havoc and splice.
|
||||
|
||||
- `AFL_PYTHON_ONLY`
|
||||
|
||||
@ -264,22 +279,27 @@ In case your setup is different, set the necessary variables like this:
|
||||
### Custom Mutator Preparation
|
||||
|
||||
For C/C++ mutators, the source code must be compiled as a shared object:
|
||||
|
||||
```bash
|
||||
gcc -shared -Wall -O3 example.c -o example.so
|
||||
```
|
||||
Note that if you specify multiple custom mutators, the corresponding functions will
|
||||
be called in the order in which they are specified. e.g first `post_process` function of
|
||||
`example_first.so` will be called and then that of `example_second.so`.
|
||||
|
||||
Note that if you specify multiple custom mutators, the corresponding functions
|
||||
will be called in the order in which they are specified. E.g., the first
|
||||
`post_process` function of `example_first.so` will be called and then that of
|
||||
`example_second.so`.
|
||||
|
||||
### Run
|
||||
|
||||
C/C++
|
||||
|
||||
```bash
|
||||
export AFL_CUSTOM_MUTATOR_LIBRARY="/full/path/to/example_first.so;/full/path/to/example_second.so"
|
||||
afl-fuzz /path/to/program
|
||||
```
|
||||
|
||||
Python
|
||||
|
||||
```bash
|
||||
export PYTHONPATH=`dirname /full/path/to/example.py`
|
||||
export AFL_PYTHON_MODULE=example
|
||||
@ -288,8 +308,8 @@ afl-fuzz /path/to/program
|
||||
|
||||
## 4) Example
|
||||
|
||||
Please see [example.c](../custom_mutators/examples/example.c) and
|
||||
[example.py](../custom_mutators/examples/example.py)
|
||||
See [example.c](../custom_mutators/examples/example.c) and
|
||||
[example.py](../custom_mutators/examples/example.py).
|
||||
|
||||
## 5) Other Resources
|
||||
|
||||
@ -297,4 +317,4 @@ Please see [example.c](../custom_mutators/examples/example.c) and
|
||||
- [bruce30262/libprotobuf-mutator_fuzzing_learning](https://github.com/bruce30262/libprotobuf-mutator_fuzzing_learning/tree/master/4_libprotobuf_aflpp_custom_mutator)
|
||||
- [thebabush/afl-libprotobuf-mutator](https://github.com/thebabush/afl-libprotobuf-mutator)
|
||||
- [XML Fuzzing@NullCon 2017](https://www.agarri.fr/docs/XML_Fuzzing-NullCon2017-PUBLIC.pdf)
|
||||
- [A bug detected by AFL + XML-aware mutators](https://bugs.chromium.org/p/chromium/issues/detail?id=930663)
|
||||
- [A bug detected by AFL + XML-aware mutators](https://bugs.chromium.org/p/chromium/issues/detail?id=930663)
|
122
docs/docs.md
@ -1,122 +0,0 @@
|
||||
# Restructure afl++'s documentation
|
||||
|
||||
## About us
|
||||
|
||||
We are dedicated to everything around fuzzing, our main and most well known
|
||||
contribution is the fuzzer `afl++` which is part of all major Unix
|
||||
distributions (e.g. Debian, Arch, FreeBSD, etc.) and is deployed on Google's
|
||||
oss-fuzz and clusterfuzz. It is rated the top fuzzer on Google's fuzzbench.
|
||||
|
||||
We are four individuals from Europe supported by a large community.
|
||||
|
||||
All our tools are open source.
|
||||
|
||||
## About the afl++ fuzzer project
|
||||
|
||||
afl++ inherited its documentation from the original Google afl project.
|
||||
Since then it has been massively improved - feature and performance wise -
|
||||
and although the documentation has likewise been continued, it has grown out
|
||||
of proportion.
|
||||
The documentation is done by non-natives to the English language, plus
|
||||
none of us has a writer background.
|
||||
|
||||
We see questions on afl++ usage on mailing lists (e.g. afl-users), discord
|
||||
channels, web forums and as issues in our repository.
|
||||
|
||||
This only increases as afl++ has been on the top of Google's fuzzbench
|
||||
statistics (which measures the performance of fuzzers) and is now being
|
||||
integrated in Google's oss-fuzz and clusterfuzz - and is in many Unix
|
||||
packaging repositories, e.g. Debian, FreeBSD, etc.
|
||||
|
||||
afl++ now has 44 (!) documentation files with 13k total lines of content.
|
||||
This is way too much.
|
||||
|
||||
Hence afl++ needs a complete overhaul of its documentation, both on an
|
||||
organisation/structural level as well as the content.
|
||||
|
||||
Overall the following actions have to be performed:
|
||||
* Create a better structure of documentation so it is easier to find the
|
||||
information that is being looked for, combining and/or splitting up the
|
||||
existing documents as needed.
|
||||
* Rewrite some documentation to remove duplication. Some information is
|
||||
present several times in the documentation. These should be removed to
|
||||
where needed so that we have as little bloat as possible.
|
||||
* The documents have been written and modified by a lot of different people,
|
||||
most of them non-native English speakers. Hence an overall review of which
|
||||
parts should be rewritten has to be performed and then the rewrite done.
|
||||
* Create a cheat-sheet for a very short best-setup build and run of afl++
|
||||
* Pictures explain more than 1000 words. We need at least 4 images that
|
||||
explain the workflow with afl++:
|
||||
- the build workflow
|
||||
- the fuzzing workflow
|
||||
- the fuzzing campaign management workflow
|
||||
- the overall workflow that is an overview of the above
|
||||
- maybe more? where the technical writes seems it necessary for
|
||||
understanding.
|
||||
|
||||
Requirements:
|
||||
* Documentation has to be in Markdown format
|
||||
* Images have to be either in SVG or PNG format.
|
||||
* All documentation should be (moved) in(to) docs/
|
||||
|
||||
The project does not require writing new documentation or tutorials beside the
|
||||
cheat sheet. The technical information for the cheat sheet will be provided by
|
||||
us.
|
||||
|
||||
## Metrics
|
||||
|
||||
afl++ is the highest performing fuzzer publicly available - but it is also the
|
||||
most feature rich and complex. With the publicity of afl++' success and
|
||||
deployment in Google projects internally and externally and availability as
|
||||
a package on most Linux distributions we see more and more issues being
|
||||
created and help requests on our Discord channel that would not be
|
||||
necessary if people would have read through all our documentation - which
|
||||
is unrealistic.
|
||||
|
||||
We expect the new documentation after this project to be cleaner, more easily
|
||||
accessible and lighter to digest by our users, resulting in much less
|
||||
help requests. On the other hand the amount of users using afl++ should
|
||||
increase as well as it will be more accessible which would also increase
|
||||
questions again - but overall resulting in a reduction of help requests.
|
||||
|
||||
In numbers: we currently have per week on average 5 issues on Github,
|
||||
10 questions on discord and 1 on mailing lists that would not be necessary
|
||||
with perfect documentation and perfect people.
|
||||
|
||||
We would consider this project a success if afterwards we only have
|
||||
2 issues on Github and 3 questions on discord anymore that would be answered
|
||||
by reading the documentation. The mailing list is usually used by the most
|
||||
novice users and we don't expect any less questions there.
|
||||
|
||||
## Project Budget
|
||||
|
||||
We have zero experience with technical writers, so this is very hard for us
|
||||
to calculate. We expect it to be a lot of work though because of the amount
|
||||
of documentation we have that needs to be restructured and partially rewritten
|
||||
(44 documents with 13k total lines of content).
|
||||
|
||||
We assume the daily rate of a very good and experienced technical writer in
|
||||
times of a pandemic to be ~500$ (according to web research), and calculate
|
||||
the overall amount of work to be around 20 days for everything incl. the
|
||||
graphics (but again - this is basically just guessing).
|
||||
|
||||
Technical Writer 10000$
|
||||
Volunteer stipends 0$ (waved)
|
||||
T-Shirts for the top 10 contributors and helpers to this documentation project:
|
||||
10 afl++ logo t-shirts 20$ each 200$
|
||||
10 shipping cost of t-shirts 10$ each 100$
|
||||
|
||||
Total: 10.300$
|
||||
(in the submission form 10.280$ was entered)
|
||||
|
||||
## Additional Information
|
||||
|
||||
We have participated in Google Summer of Code in 2020 and hope to be selected
|
||||
again in 2021.
|
||||
|
||||
We have no experience with a technical writer, but we will support that person
|
||||
with video calls, chats, emails and messaging, provide all necessary information
|
||||
and write technical contents that is required for the success of this project.
|
||||
It is clear to us that a technical writer knows how to write, but cannot know
|
||||
the technical details in a complex tooling like in afl++. This guidance, input,
|
||||
etc. has to come from us.
|
118
docs/features.md
Normal file
@ -0,0 +1,118 @@
|
||||
# Important features of AFL++
|
||||
|
||||
AFL++ supports llvm from 3.8 up to version 12, very fast binary fuzzing with
|
||||
QEMU 5.1 with laf-intel and Redqueen, FRIDA mode, unicorn mode, gcc plugin, full
|
||||
*BSD, Mac OS, Solaris and Android support and much, much, much more.
|
||||
|
||||
## Features and instrumentation
|
||||
|
||||
| Feature/Instrumentation | afl-gcc | llvm | gcc_plugin | FRIDA mode(9) | QEMU mode(10) | unicorn_mode(10) | nyx_mode(12) | coresight_mode(11) |
|
||||
| ------------------------------|:--------:|:---------:|:----------:|:--------------:|:----------------:|:----------------:|:------------:|:------------------:|
|
||||
| Threadsafe counters [A] | | x(3) | | | | | x | |
|
||||
| NeverZero [B] | x86[_64] | x(1) | x | x | x | x | | |
|
||||
| Persistent Mode [C] | | x | x | x86[_64]/arm64 | x86[_64]/arm[64] | x | | |
|
||||
| LAF-Intel / CompCov [D] | | x | | | x86[_64]/arm[64] | x86[_64]/arm[64] | x86[_64] | |
|
||||
| CmpLog [E] | | x | x | x86[_64]/arm64 | x86[_64]/arm[64] | | | |
|
||||
| Selective Instrumentation [F] | | x | x | x | x | | | |
|
||||
| Non-Colliding Coverage [G] | | x(4) | | | (x)(5) | | | |
|
||||
| Ngram prev_loc Coverage [H] | | x(6) | | | | | | |
|
||||
| Context Coverage [I] | | x(6) | | | | | | |
|
||||
| Auto Dictionary [J] | | x(7) | | | | | | |
|
||||
| Snapshot Support [K] | | (x)(8) | (x)(8) | | (x)(5) | | x | |
|
||||
| Shared Memory Test cases [L] | | x | x | x86[_64]/arm64 | x | x | x | |
|
||||
|
||||
## More information about features
|
||||
|
||||
A. The default is non-thread-safe coverage counter updates for better performance;
|
||||
see [instrumentation/README.llvm.md](../instrumentation/README.llvm.md)
|
||||
|
||||
B. On wrapping coverage counters (255 + 1), skip the 0 value and jump to 1
|
||||
instead. This has shown to give better coverage data and is the default; see
|
||||
[instrumentation/README.llvm.md](../instrumentation/README.llvm.md).
|
||||
|
||||
C. Instead of forking, reiterate the fuzz target function in a loop (like
|
||||
`LLVMFuzzerTestOneInput`). Great speed increase, but it only works with target
|
||||
functions that do not keep state, leak memory, or exit; see
|
||||
[instrumentation/README.persistent_mode.md](../instrumentation/README.persistent_mode.md)
|
||||
|
||||
D. Split any non-8-bit comparison to 8-bit comparison; see
|
||||
[instrumentation/README.laf-intel.md](../instrumentation/README.laf-intel.md)
|
||||
|
||||
E. CmpLog is our enhanced
|
||||
[Redqueen](https://www.ndss-symposium.org/ndss-paper/redqueen-fuzzing-with-input-to-state-correspondence/)
|
||||
implementation, see
|
||||
[instrumentation/README.cmplog.md](../instrumentation/README.cmplog.md)
|
||||
|
||||
F. Similar and compatible to clang 13+ sancov sanitize-coverage-allow/deny but
|
||||
for all llvm versions and all our compile modes, only instrument what should
|
||||
be instrumented, for more speed, directed fuzzing and less instability; see
|
||||
[instrumentation/README.instrument_list.md](../instrumentation/README.instrument_list.md)
|
||||
|
||||
G. Vanilla AFL uses coverage where edges can collide onto the same coverage
|
||||
bytes, which is more likely the larger the target is. Our default instrumentation in LTO and
|
||||
afl-clang-fast (PCGUARD) uses non-colliding coverage that also makes it
|
||||
faster. Vanilla AFL style is available with `AFL_LLVM_INSTRUMENT=AFL`; see
|
||||
[instrumentation/README.llvm.md](../instrumentation/README.llvm.md).
|
||||
|
||||
H.+I. Alternative coverage based on previous edges (NGRAM) or depending on the
|
||||
caller (CTX), based on
|
||||
[https://www.usenix.org/system/files/raid2019-wang-jinghan.pdf](https://www.usenix.org/system/files/raid2019-wang-jinghan.pdf);
|
||||
see [instrumentation/README.llvm.md](../instrumentation/README.llvm.md).
|
||||
|
||||
J. An LTO feature that creates a fuzzing dictionary based on comparisons found
|
||||
during compilation/instrumentation. Automatic feature :) See
|
||||
[instrumentation/README.lto.md](../instrumentation/README.lto.md)
|
||||
|
||||
K. The snapshot feature requires a kernel module that was a lot of work to get
|
||||
right and maintain, so it is no longer supported. We have
|
||||
[nyx_mode](../nyx_mode/README.md) instead.
|
||||
|
||||
L. Faster fuzzing and less kernel syscall overhead by in-memory fuzz testcase
|
||||
delivery, see
|
||||
[instrumentation/README.persistent_mode.md](../instrumentation/README.persistent_mode.md)
|
||||
|
||||
## More information about instrumentation
|
||||
|
||||
1. Default for LLVM >= 9.0, environment variable for older versions due to an
|
||||
efficiency bug in previous llvm versions
|
||||
2. GCC creates non-performant code, hence it is disabled in gcc_plugin
|
||||
3. With `AFL_LLVM_THREADSAFE_INST`, disables NeverZero
|
||||
4. With pcguard mode and LTO mode for LLVM 11 and newer
|
||||
5. Upcoming, development in the branch
|
||||
6. Not compatible with LTO instrumentation and needs at least LLVM v4.1
|
||||
7. Automatic in LTO mode with LLVM 11 and newer, an extra pass for all LLVM
|
||||
versions that writes to a file to use with afl-fuzz' `-x`
|
||||
8. The snapshot LKM is currently unmaintained due to too many kernel changes
|
||||
coming too fast :-(
|
||||
9. FRIDA mode is supported on Linux and MacOS for Intel and ARM
|
||||
10. QEMU/Unicorn is only supported on Linux
|
||||
11. Coresight mode is only available on AARCH64 Linux with a CPU with Coresight
|
||||
extension
|
||||
12. Nyx mode is only supported on Linux and currently restricted to x86_64
|
||||
|
||||
## Integrated features and patches
|
||||
|
||||
Among others, the following features and patches have been integrated:
|
||||
|
||||
* NeverZero patch for afl-gcc, instrumentation, QEMU mode and unicorn_mode which
|
||||
prevents a map value from wrapping to zero, which increases coverage
|
||||
* Persistent mode, deferred forkserver and in-memory fuzzing for QEMU mode
|
||||
* Unicorn mode which allows fuzzing of binaries from completely different
|
||||
platforms (integration provided by domenukk)
|
||||
* The new CmpLog instrumentation for LLVM and QEMU inspired by
|
||||
[Redqueen](https://github.com/RUB-SysSec/redqueen)
|
||||
* Win32 PE binary-only fuzzing with QEMU and Wine
|
||||
* AFLfast's power schedules by Marcel Böhme:
|
||||
[https://github.com/mboehme/aflfast](https://github.com/mboehme/aflfast)
|
||||
* The MOpt mutator:
|
||||
[https://github.com/puppet-meteor/MOpt-AFL](https://github.com/puppet-meteor/MOpt-AFL)
|
||||
* LLVM mode Ngram coverage by Adrian Herrera
|
||||
[https://github.com/adrianherrera/afl-ngram-pass](https://github.com/adrianherrera/afl-ngram-pass)
|
||||
* LAF-Intel/CompCov support for instrumentation, QEMU mode and unicorn_mode
|
||||
(with enhanced capabilities)
|
||||
* Radamsa and honggfuzz mutators (as custom mutators).
|
||||
* QBDI mode to fuzz android native libraries via Quarkslab's
|
||||
[QBDI](https://github.com/QBDI/QBDI) framework
|
||||
* Frida and ptrace mode to fuzz binary-only libraries, etc.
|
||||
|
||||
So all in all this is the best-of AFL that is out there :-)
|
310
docs/fuzzing_binary-only_targets.md
Normal file
@ -0,0 +1,310 @@
|
||||
# Fuzzing binary-only targets
|
||||
|
||||
AFL++, libfuzzer, and other fuzzers are great if you have the source code of the
|
||||
target. This allows for very fast and coverage-guided fuzzing.
|
||||
|
||||
However, if there is only the binary program and no source code available, then
|
||||
standard `afl-fuzz -n` (non-instrumented mode) is not effective.
|
||||
|
||||
For fast, on-the-fly instrumentation of black-box binaries, AFL++ still offers
|
||||
various support. The following is a description of how these binaries can be
|
||||
fuzzed with AFL++.
|
||||
|
||||
## TL;DR:
|
||||
|
||||
FRIDA mode and QEMU mode in persistent mode are the fastest - if persistent mode
|
||||
is possible and the stability is high enough.
|
||||
|
||||
Otherwise, try Zafl, RetroWrite, Dyninst, and if these fail, too, then try
|
||||
standard FRIDA/QEMU mode with `AFL_ENTRYPOINT` set to where you need it.
|
||||
|
||||
If your target is non-linux, then use unicorn_mode.
|
||||
|
||||
## Fuzzing binary-only targets with AFL++
|
||||
|
||||
### QEMU mode
|
||||
|
||||
QEMU mode is the "native" solution to the program. It is available in the
|
||||
./qemu_mode/ directory and, once compiled, it can be accessed by the afl-fuzz -Q
|
||||
command line option. It is the easiest to use alternative and even works for
|
||||
cross-platform binaries.
|
||||
|
||||
For Linux programs and their libraries, this is accomplished with a version of
|
||||
QEMU running in the lesser-known "user space emulation" mode. QEMU is a project
|
||||
separate from AFL++, but you can conveniently build the feature by doing:
|
||||
|
||||
```shell
|
||||
cd qemu_mode
|
||||
./build_qemu_support.sh
|
||||
```
|
||||
|
||||
The following setup to use QEMU mode is recommended:
|
||||
|
||||
* run 1 afl-fuzz -Q instance with CMPLOG (`-c 0` + `AFL_COMPCOV_LEVEL=2`)
|
||||
* run 1 afl-fuzz -Q instance with QASAN (`AFL_USE_QASAN=1`)
|
||||
* run 1 afl-fuzz -Q instance with LAF (`AFL_PRELOAD=libcmpcov.so` +
|
||||
`AFL_COMPCOV_LEVEL=2`), alternatively you can use FRIDA mode, just switch `-Q`
|
||||
with `-O` and remove the LAF instance
|
||||
|
||||
Then run as many instances as you have cores left with either -Q mode or - even
|
||||
better - use a binary rewriter like Dyninst, RetroWrite, ZAFL, etc.
|
||||
The binary rewriters all have their own advantages and caveats.
|
||||
ZAFL is the best but cannot be used in a business/commercial context.
|
||||
|
||||
If a binary rewriter works for your target then you can use afl-fuzz normally
|
||||
and it will have twice the speed compared to QEMU mode (but slower than QEMU
|
||||
persistent mode).
|
||||
|
||||
The speed decrease of QEMU mode is about 50%. However, various options exist
|
||||
to increase the speed:
|
||||
- using AFL_ENTRYPOINT to move the forkserver entry to a later basic block in
|
||||
the binary (+5-10% speed)
|
||||
- using persistent mode
|
||||
([qemu_mode/README.persistent.md](../qemu_mode/README.persistent.md)), which will
|
||||
result in a 150-300% overall speed increase - so 3-8x the original QEMU mode
|
||||
speed!
|
||||
- using AFL_CODE_START/AFL_CODE_END to only instrument specific parts
|
||||
|
||||
For additional instructions and caveats, see
|
||||
[qemu_mode/README.md](../qemu_mode/README.md). If possible, you should use the
|
||||
persistent mode, see
|
||||
[qemu_mode/README.persistent.md](../qemu_mode/README.persistent.md). The mode is
|
||||
approximately 2-5x slower than compile-time instrumentation, and is less
|
||||
conducive to parallelization.
|
||||
|
||||
Note that there is also honggfuzz:
|
||||
[https://github.com/google/honggfuzz](https://github.com/google/honggfuzz) which
|
||||
now has a QEMU mode, but its performance is just 1.5% ...
|
||||
|
||||
If you like to code a customized fuzzer without much work, we highly recommend
|
||||
checking out our sister project libafl, which supports QEMU, too:
|
||||
[https://github.com/AFLplusplus/LibAFL](https://github.com/AFLplusplus/LibAFL)
|
||||
|
||||
### WINE+QEMU
|
||||
|
||||
Wine mode can run Win32 PE binaries with the QEMU instrumentation. It needs
|
||||
Wine, python3, and the pefile python package installed.
|
||||
|
||||
It is included in AFL++.
|
||||
|
||||
For more information, see
|
||||
[qemu_mode/README.wine.md](../qemu_mode/README.wine.md).
|
||||
|
||||
### FRIDA mode
|
||||
|
||||
In FRIDA mode, you can fuzz binary-only targets as easily as with QEMU mode.
|
||||
FRIDA mode is most of the time slightly faster than QEMU mode. It is also
|
||||
newer, lacks COMPCOV, and has the advantage that it works on MacOS (both intel
|
||||
and M1).
|
||||
|
||||
To build FRIDA mode:
|
||||
|
||||
```shell
|
||||
cd frida_mode
|
||||
gmake
|
||||
```
|
||||
|
||||
For additional instructions and caveats, see
|
||||
[frida_mode/README.md](../frida_mode/README.md).
|
||||
|
||||
If possible, you should use the persistent mode, see
|
||||
[instrumentation/README.persistent_mode.md](../instrumentation/README.persistent_mode.md).
|
||||
The mode is approximately 2-5x slower than compile-time instrumentation, and is
|
||||
less conducive to parallelization. But for binary-only fuzzing, it gives a huge
|
||||
speed improvement if it is possible to use.
|
||||
|
||||
If you want to fuzz a binary-only library, then you can fuzz it with frida-gum
|
||||
via frida_mode/. You will have to write a harness to call the target function in
|
||||
the library; use afl-frida.c as a template.
|
||||
|
||||
You can also perform remote fuzzing with frida, e.g., if you want to fuzz on
|
||||
iPhone or Android devices. For this, you can use
|
||||
[https://github.com/ttdennis/fpicker/](https://github.com/ttdennis/fpicker/) as
|
||||
an intermediate that uses AFL++ for fuzzing.
|
||||
|
||||
If you like to code a customized fuzzer without much work, we highly recommend
|
||||
checking out our sister project libafl, which supports Frida, too:
|
||||
[https://github.com/AFLplusplus/LibAFL](https://github.com/AFLplusplus/LibAFL).
|
||||
Working examples already exist :-)
|
||||
|
||||
### Nyx mode
|
||||
|
||||
Nyx is a full system emulation fuzzing environment with snapshot support that is
|
||||
built upon KVM and QEMU. It is only available on Linux and currently restricted
|
||||
to x86_64.
|
||||
|
||||
For binary-only fuzzing, a special 5.10 kernel is required.
|
||||
|
||||
See [nyx_mode/README.md](../nyx_mode/README.md).
|
||||
|
||||
### Unicorn
|
||||
|
||||
Unicorn is a fork of QEMU. The instrumentation is, therefore, very similar. In
|
||||
contrast to QEMU, Unicorn does not offer a full system or even userland
|
||||
emulation. Runtime environment and/or loaders have to be written from scratch,
|
||||
if needed. On top, block chaining has been removed. This means the speed boost
|
||||
introduced in the patched QEMU Mode of AFL++ cannot be ported over to Unicorn.
|
||||
|
||||
For non-Linux binaries, you can use AFL++'s unicorn_mode which can emulate
|
||||
anything you want - at the price of speed and user-written scripts.
|
||||
|
||||
To build unicorn_mode:
|
||||
|
||||
```shell
|
||||
cd unicorn_mode
|
||||
./build_unicorn_support.sh
|
||||
```
|
||||
|
||||
For further information, check out
|
||||
[unicorn_mode/README.md](../unicorn_mode/README.md).
|
||||
|
||||
### Shared libraries
|
||||
|
||||
If the goal is to fuzz a dynamic library, then there are two options available.
|
||||
For both, you need to write a small harness that loads and calls the library.
|
||||
Then you fuzz this with either FRIDA mode or QEMU mode and either use
|
||||
`AFL_INST_LIBS=1` or `AFL_QEMU/FRIDA_INST_RANGES`.
|
||||
|
||||
Another, less precise and slower option is to fuzz it with utils/afl_untracer/
|
||||
and use afl-untracer.c as a template. It is slower than FRIDA mode.
|
||||
|
||||
For more information, see
|
||||
[utils/afl_untracer/README.md](../utils/afl_untracer/README.md).
|
||||
|
||||
### Coresight
|
||||
|
||||
Coresight is ARM's answer to Intel's PT. With AFL++ v3.15, there is a coresight
|
||||
tracer implementation available in `coresight_mode/` which is faster than QEMU,
|
||||
but cannot run in parallel. Currently, only one process can be traced; it
|
||||
is WIP.
|
||||
|
||||
For more information, see
|
||||
[coresight_mode/README.md](../coresight_mode/README.md).
|
||||
|
||||
## Binary rewriters
|
||||
|
||||
Binary rewriters are an alternative solution. They are faster than the solutions
|
||||
native to AFL++ but don't always work.
|
||||
|
||||
### ZAFL
|
||||
|
||||
ZAFL is a static rewriting platform supporting x86-64 C/C++,
|
||||
stripped/unstripped, and PIE/non-PIE binaries. Beyond conventional
|
||||
instrumentation, ZAFL's API enables transformation passes (e.g., laf-Intel,
|
||||
context sensitivity, InsTrim, etc.).
|
||||
|
||||
Its baseline instrumentation speed typically averages 90-95% of
|
||||
afl-clang-fast's.
|
||||
|
||||
[https://git.zephyr-software.com/opensrc/zafl](https://git.zephyr-software.com/opensrc/zafl)
|
||||
|
||||
### RetroWrite
|
||||
|
||||
RetroWrite is a static binary rewriter that can be combined with AFL++. If you
|
||||
have an x86_64 binary that still has its symbols (i.e., not a stripped binary), is
|
||||
compiled with position independent code (PIC/PIE), and does not contain C++
|
||||
exceptions, then the RetroWrite solution might be for you. It decompiles to ASM
|
||||
files which can then be instrumented with afl-gcc.
|
||||
|
||||
Binaries that are statically instrumented for fuzzing using RetroWrite are close
|
||||
in performance to compiler-instrumented binaries and outperform the QEMU-based
|
||||
instrumentation.
|
||||
|
||||
[https://github.com/HexHive/retrowrite](https://github.com/HexHive/retrowrite)
|
||||
|
||||
### Dyninst
|
||||
|
||||
Dyninst is a binary instrumentation framework similar to Pintool and DynamoRIO.
|
||||
However, whereas Pintool and DynamoRIO work at runtime, Dyninst instruments the
|
||||
target at load time and then lets it run - or saves the binary with the changes.
|
||||
This is great for some things, e.g., fuzzing, and not so effective for others,
|
||||
e.g., malware analysis.
|
||||
|
||||
So, what you can do with Dyninst is take every basic block and put AFL++'s
|
||||
instrumentation code in there - and then save the binary. Afterwards, just fuzz
|
||||
the newly saved target binary with afl-fuzz. Sounds great? It is. The issue
|
||||
though - it is a non-trivial problem to insert instructions, which change
|
||||
addresses in the process space, so that everything is still working afterwards.
|
||||
Hence, more often than not binaries crash when they are run.
|
||||
|
||||
The speed decrease is about 15-35%, depending on the optimization options used
|
||||
with afl-dyninst.
|
||||
|
||||
[https://github.com/vanhauser-thc/afl-dyninst](https://github.com/vanhauser-thc/afl-dyninst)
|
||||
|
||||
### Mcsema
|
||||
|
||||
Theoretically, you can also decompile to llvm IR with mcsema, and then use
|
||||
llvm_mode to instrument the binary. Good luck with that.
|
||||
|
||||
[https://github.com/lifting-bits/mcsema](https://github.com/lifting-bits/mcsema)
|
||||
|
||||
## Binary tracers
|
||||
|
||||
### Pintool & DynamoRIO
|
||||
|
||||
Pintool and DynamoRIO are dynamic instrumentation engines. They can be used for
|
||||
getting basic block information at runtime. Pintool is only available for Intel
|
||||
x32/x64 on Linux, Mac OS, and Windows, whereas DynamoRIO is additionally
|
||||
available for ARM and AARCH64. DynamoRIO is also 10x faster than Pintool.
|
||||
|
||||
The big issue with DynamoRIO (and therefore Pintool, too) is speed. DynamoRIO
|
||||
has a speed decrease of 98-99%, Pintool has a speed decrease of 99.5%.
|
||||
|
||||
Hence, DynamoRIO is the option to go for if everything else fails and Pintool
|
||||
only if DynamoRIO fails, too.
|
||||
|
||||
DynamoRIO solutions:
|
||||
* [https://github.com/vanhauser-thc/afl-dynamorio](https://github.com/vanhauser-thc/afl-dynamorio)
|
||||
* [https://github.com/mxmssh/drAFL](https://github.com/mxmssh/drAFL)
|
||||
* [https://github.com/googleprojectzero/winafl/](https://github.com/googleprojectzero/winafl/)
|
||||
<= very good but windows only
|
||||
|
||||
Pintool solutions:
|
||||
* [https://github.com/vanhauser-thc/afl-pin](https://github.com/vanhauser-thc/afl-pin)
|
||||
* [https://github.com/mothran/aflpin](https://github.com/mothran/aflpin)
|
||||
* [https://github.com/spinpx/afl_pin_mode](https://github.com/spinpx/afl_pin_mode)
|
||||
<= only old Pintool version supported
|
||||
|
||||
### Intel PT
|
||||
|
||||
If you have a newer Intel CPU, you can make use of Intel's processor trace. The
|
||||
big issue with Intel's PT is the small buffer size and the complex encoding of
|
||||
the debug information collected through PT. This makes the decoding very CPU
|
||||
intensive and hence slow. As a result, the overall speed decrease is about
|
||||
70-90% (depending on the implementation and other factors).
|
||||
|
||||
There are two AFL intel-pt implementations:
|
||||
|
||||
1. [https://github.com/junxzm1990/afl-pt](https://github.com/junxzm1990/afl-pt)
|
||||
=> This needs Ubuntu 14.04.05 without any updates and the 4.4 kernel.
|
||||
|
||||
2. [https://github.com/hunter-ht-2018/ptfuzzer](https://github.com/hunter-ht-2018/ptfuzzer)
|
||||
=> This needs a 4.14 or 4.15 kernel. The "nopti" kernel boot option must be
|
||||
used. This one is faster than the other.
|
||||
|
||||
Note that there is also honggfuzz:
|
||||
[https://github.com/google/honggfuzz](https://github.com/google/honggfuzz). But
|
||||
its IPT performance is just 6%!
|
||||
|
||||
## Non-AFL++ solutions
|
||||
|
||||
There are many binary-only fuzzing frameworks. Some are great for CTFs but don't
|
||||
work with large binaries, others are very slow but have good path discovery,
|
||||
some are very hard to set up...
|
||||
|
||||
* Jackalope:
|
||||
[https://github.com/googleprojectzero/Jackalope](https://github.com/googleprojectzero/Jackalope)
|
||||
* Manticore:
|
||||
[https://github.com/trailofbits/manticore](https://github.com/trailofbits/manticore)
|
||||
* QSYM:
|
||||
[https://github.com/sslab-gatech/qsym](https://github.com/sslab-gatech/qsym)
|
||||
* S2E: [https://github.com/S2E](https://github.com/S2E)
|
||||
* TinyInst:
|
||||
[https://github.com/googleprojectzero/TinyInst](https://github.com/googleprojectzero/TinyInst)
|
||||
(Mac/Windows only)
|
||||
* ... please send me any missing that are good
|
||||
|
||||
## Closing words
|
||||
|
||||
That's it! News, corrections, updates? Send an email to vh@thc.org.
|
950
docs/fuzzing_in_depth.md
Normal file
@ -0,0 +1,950 @@
|
||||
# Fuzzing with AFL++
|
||||
|
||||
The following describes how to fuzz with a target if source code is available.
|
||||
If you have a binary-only target, go to
|
||||
[fuzzing_binary-only_targets.md](fuzzing_binary-only_targets.md).
|
||||
|
||||
Fuzzing source code is a three-step process:
|
||||
|
||||
1. Compile the target with a special compiler that prepares the target to be
|
||||
fuzzed efficiently. This step is called "instrumenting a target".
|
||||
2. Prepare the fuzzing by selecting and optimizing the input corpus for the
|
||||
target.
|
||||
3. Perform the fuzzing of the target by randomly mutating input and assessing if
|
||||
that input was processed on a new path in the target binary.
|
||||
|
||||
## 0. Common sense risks
|
||||
|
||||
Please keep in mind that, similarly to many other computationally-intensive
|
||||
tasks, fuzzing may put a strain on your hardware and on the OS. In particular:
|
||||
|
||||
- Your CPU will run hot and will need adequate cooling. In most cases, if
|
||||
cooling is insufficient or stops working properly, CPU speeds will be
|
||||
automatically throttled. That said, especially when fuzzing on less suitable
|
||||
hardware (laptops, smartphones, etc.), it's not entirely impossible for
|
||||
something to blow up.
|
||||
|
||||
- Targeted programs may end up erratically grabbing gigabytes of memory or
|
||||
filling up disk space with junk files. AFL++ tries to enforce basic memory
|
||||
limits, but can't prevent each and every possible mishap. The bottom line is
|
||||
that you shouldn't be fuzzing on systems where the prospect of data loss is
|
||||
not an acceptable risk.
|
||||
|
||||
- Fuzzing involves billions of reads and writes to the filesystem. On modern
|
||||
systems, this will be usually heavily cached, resulting in fairly modest
|
||||
"physical" I/O - but there are many factors that may alter this equation. It
|
||||
is your responsibility to monitor for potential trouble; with very heavy I/O,
|
||||
the lifespan of many HDDs and SSDs may be reduced.
|
||||
|
||||
A good way to monitor disk I/O on Linux is the `iostat` command:
|
||||
|
||||
```shell
|
||||
$ iostat -d 3 -x -k [...optional disk ID...]
|
||||
```
|
||||
|
||||
Using the `AFL_TMPDIR` environment variable and a RAM-disk, you can have the
|
||||
heavy writing done in RAM to prevent the aforementioned wear and tear. For
|
||||
example, the following line will run a Docker container with all this preset:
|
||||
|
||||
```shell
|
||||
# docker run -ti --mount type=tmpfs,destination=/ramdisk -e AFL_TMPDIR=/ramdisk aflplusplus/aflplusplus
|
||||
```
|
||||
|
||||
## 1. Instrumenting the target
|
||||
|
||||
### a) Selecting the best AFL++ compiler for instrumenting the target
|
||||
|
||||
AFL++ comes with a central compiler `afl-cc` that incorporates various
|
||||
kinds of compiler targets and instrumentation options. The following
|
||||
evaluation flow will help you to select the best possible one.
|
||||
|
||||
It is highly recommended to have the newest llvm version possible installed;
|
||||
anything below 9 is not recommended.
|
||||
|
||||
```
|
||||
+--------------------------------+
|
||||
| clang/clang++ 11+ is available | --> use LTO mode (afl-clang-lto/afl-clang-lto++)
|
||||
+--------------------------------+ see [instrumentation/README.lto.md](instrumentation/README.lto.md)
|
||||
|
|
||||
| if not, or if the target fails with LTO afl-clang-lto/++
|
||||
|
|
||||
v
|
||||
+---------------------------------+
|
||||
| clang/clang++ 3.8+ is available | --> use LLVM mode (afl-clang-fast/afl-clang-fast++)
|
||||
+---------------------------------+ see [instrumentation/README.llvm.md](instrumentation/README.llvm.md)
|
||||
|
|
||||
| if not, or if the target fails with LLVM afl-clang-fast/++
|
||||
|
|
||||
v
|
||||
+--------------------------------+
|
||||
| gcc 5+ is available | -> use GCC_PLUGIN mode (afl-gcc-fast/afl-g++-fast)
|
||||
+--------------------------------+ see [instrumentation/README.gcc_plugin.md](instrumentation/README.gcc_plugin.md) and
|
||||
[instrumentation/README.instrument_list.md](instrumentation/README.instrument_list.md)
|
||||
|
|
||||
| if not, or if you do not have a gcc with plugin support
|
||||
|
|
||||
v
|
||||
use GCC mode (afl-gcc/afl-g++) (or afl-clang/afl-clang++ for clang)
|
||||
```
|
||||
|
||||
Clickable README links for the chosen compiler:
|
||||
|
||||
* [LTO mode - afl-clang-lto](../instrumentation/README.lto.md)
|
||||
* [LLVM mode - afl-clang-fast](../instrumentation/README.llvm.md)
|
||||
* [GCC_PLUGIN mode - afl-gcc-fast](../instrumentation/README.gcc_plugin.md)
|
||||
* GCC/CLANG modes (afl-gcc/afl-clang) have no README as they have no own
|
||||
features
|
||||
|
||||
You can select the mode for the afl-cc compiler by one of the following methods:
|
||||
|
||||
* Using a symlink to afl-cc: afl-gcc, afl-g++, afl-clang, afl-clang++,
|
||||
afl-clang-fast, afl-clang-fast++, afl-clang-lto, afl-clang-lto++,
|
||||
afl-gcc-fast, afl-g++-fast (recommended!).
|
||||
* Using the environment variable `AFL_CC_COMPILER` with `MODE`.
|
||||
* Passing --afl-`MODE` command line options to the compiler via
|
||||
`CFLAGS`/`CXXFLAGS`/`CPPFLAGS`.
|
||||
|
||||
`MODE` can be one of the following:
|
||||
|
||||
* LTO (afl-clang-lto*)
|
||||
* LLVM (afl-clang-fast*)
|
||||
* GCC_PLUGIN (afl-g*-fast) or GCC (afl-gcc/afl-g++)
|
||||
* CLANG (afl-clang/afl-clang++)
|
||||
|
||||
Because no AFL++ specific command-line options are accepted (beside the
|
||||
--afl-MODE command), the compile-time tools make fairly broad use of environment
|
||||
variables, which can be listed with `afl-cc -hh` or looked up in
|
||||
[env_variables.md](env_variables.md).
|
||||
|
||||
### b) Selecting instrumentation options
|
||||
|
||||
If you instrument with LTO mode (afl-clang-fast/afl-clang-lto), the following
|
||||
options are available:
|
||||
|
||||
* Splitting integer, string, float, and switch comparisons so AFL++ can
|
||||
solve these more easily. This is an important option if you do not have a very good and
|
||||
large input corpus. This technique is called laf-intel or COMPCOV. To use
|
||||
this, set the following environment variable before compiling the target:
|
||||
`export AFL_LLVM_LAF_ALL=1`. You can read more about this in
|
||||
[instrumentation/README.laf-intel.md](../instrumentation/README.laf-intel.md).
|
||||
* A different technique (and usually a better one than laf-intel) is to
|
||||
instrument the target so that any compare values in the target are sent to
|
||||
AFL++ which then tries to put these values into the fuzzing data at different
|
||||
locations. This technique is very fast and good - if the target does not
|
||||
transform input data before comparison. Therefore, this technique is called
|
||||
`input to state` or `redqueen`. If you want to use this technique, then you
|
||||
have to compile the target twice, once specifically with/for this mode by
|
||||
setting `AFL_LLVM_CMPLOG=1`, and pass this binary to afl-fuzz via the `-c`
|
||||
parameter. Note that you can also compile just a cmplog binary and use that
|
||||
for both, however, there will be a performance penalty. You can read more
|
||||
about this in
|
||||
[instrumentation/README.cmplog.md](../instrumentation/README.cmplog.md).
|
||||
|
||||
If you use LTO, LLVM, or GCC_PLUGIN mode
|
||||
(afl-clang-fast/afl-clang-lto/afl-gcc-fast), you have the option to selectively
|
||||
instrument _parts_ of the target that you are interested in. For afl-clang-fast,
|
||||
you have to use an llvm version newer than 10.0.0 or a mode other than
|
||||
DEFAULT/PCGUARD.
|
||||
|
||||
This step can be done either by explicitly including parts to be instrumented or
|
||||
by explicitly excluding parts from instrumentation.
|
||||
|
||||
* To instrument _only specified parts_, create a file (e.g., `allowlist.txt`)
|
||||
with all the filenames and/or functions of the source code that should be
|
||||
instrumented and then:
|
||||
|
||||
1. Just put one filename or function (prefixing with `fun: `) per line (no
|
||||
directory information necessary for filenames) in the file `allowlist.txt`.
|
||||
|
||||
Example:
|
||||
|
||||
```
|
||||
foo.cpp # will match foo/foo.cpp, bar/foo.cpp, barfoo.cpp etc.
|
||||
fun: foo_func # will match the function foo_func
|
||||
```
|
||||
|
||||
2. Set `export AFL_LLVM_ALLOWLIST=allowlist.txt` to enable selective positive
|
||||
instrumentation.
|
||||
|
||||
* Similarly to _exclude_ specified parts from instrumentation, create a file
|
||||
(e.g., `denylist.txt`) with all the filenames of the source code that should
|
||||
be skipped during instrumentation and then:
|
||||
|
||||
1. Same as above. Just put one filename or function per line in the file
|
||||
`denylist.txt`.
|
||||
|
||||
2. Set `export AFL_LLVM_DENYLIST=denylist.txt` to enable selective negative
|
||||
instrumentation.
|
||||
|
||||
**NOTE:** During optimization, functions might be
|
||||
inlined and then would not match the list! See
|
||||
[instrumentation/README.instrument_list.md](../instrumentation/README.instrument_list.md).
|
||||
|
||||
There are many more options and modes available; however, these are most of the
|
||||
time less effective. See:
|
||||
|
||||
* [instrumentation/README.llvm.md#6) AFL++ Context Sensitive Branch Coverage](../instrumentation/README.llvm.md#6-afl-context-sensitive-branch-coverage)
|
||||
* [instrumentation/README.llvm.md#7) AFL++ N-Gram Branch Coverage](../instrumentation/README.llvm.md#7-afl-n-gram-branch-coverage)
|
||||
|
||||
AFL++ performs "never zero" counting in its bitmap. You can read more about this
|
||||
here:
|
||||
* [instrumentation/README.llvm.md#8-neverzero-counters](../instrumentation/README.llvm.md#8-neverzero-counters)
|
||||
|
||||
### c) Selecting sanitizers
|
||||
|
||||
It is possible to use sanitizers when instrumenting targets for fuzzing, which
|
||||
allows you to find bugs that would not necessarily result in a crash.
|
||||
|
||||
Note that sanitizers have a huge impact on CPU (= fewer executions per second)
|
||||
and RAM usage. Also, you should only run one afl-fuzz instance per sanitizer
|
||||
type. This is enough because e.g. a use-after-free bug will be picked up by ASAN
|
||||
(address sanitizer) anyway after syncing test cases from other fuzzing
|
||||
instances, so running more than one address sanitized target would be a waste.
|
||||
|
||||
The following sanitizers have built-in support in AFL++:
|
||||
|
||||
* ASAN = Address SANitizer, finds memory corruption vulnerabilities like
|
||||
use-after-free, NULL pointer dereference, buffer overruns, etc. Enabled with
|
||||
`export AFL_USE_ASAN=1` before compiling.
|
||||
* MSAN = Memory SANitizer, finds read accesses to uninitialized memory, e.g., a
|
||||
local variable that is defined and read before it is even set. Enabled with
|
||||
`export AFL_USE_MSAN=1` before compiling.
|
||||
* UBSAN = Undefined Behavior SANitizer, finds instances where - by the C and C++
|
||||
standards - undefined behavior happens, e.g., adding two signed integers where
|
||||
the result is larger than what a signed integer can hold. Enabled with `export
|
||||
AFL_USE_UBSAN=1` before compiling.
|
||||
* CFISAN = Control Flow Integrity SANitizer, finds instances where the control
|
||||
flow is found to be illegal. Originally this was rather to prevent return
|
||||
oriented programming (ROP) exploit chains from functioning. In fuzzing, this
|
||||
is mostly reduced to detecting type confusion vulnerabilities - which is,
|
||||
however, one of the most important and dangerous C++ memory corruption
|
||||
classes! Enabled with `export AFL_USE_CFISAN=1` before compiling.
|
||||
* TSAN = Thread SANitizer, finds thread race conditions. Enabled with `export
|
||||
AFL_USE_TSAN=1` before compiling.
|
||||
* LSAN = Leak SANitizer, finds memory leaks in a program. This is not really a
|
||||
security issue, but for developers this can be very valuable. Note that unlike
|
||||
the other sanitizers above this needs `__AFL_LEAK_CHECK();` added to all areas
|
||||
of the target source code where you find a leak check necessary! Enabled with
|
||||
`export AFL_USE_LSAN=1` before compiling. To ignore the memory-leaking check
|
||||
for certain allocations, `__AFL_LSAN_OFF();` can be used before memory is
|
||||
allocated, and `__AFL_LSAN_ON();` afterwards. Memory allocated between these
|
||||
two macros will not be checked for memory leaks.
|
||||
|
||||
It is possible to further modify the behavior of the sanitizers at run-time by
|
||||
setting `ASAN_OPTIONS=...`, `LSAN_OPTIONS` etc. - the available parameters can
|
||||
be looked up in the sanitizer documentation of llvm/clang. afl-fuzz, however,
|
||||
requires some specific parameters important for fuzzing to be set. If you want
|
||||
to set your own, it might bail and report what it is missing.
|
||||
|
||||
Note that some sanitizers cannot be used together, e.g., ASAN and MSAN, and
|
||||
others often cannot work together because of target weirdness, e.g., ASAN and
|
||||
CFISAN. You might need to experiment which sanitizers you can combine in a
|
||||
target (which means more instances can be run without a sanitized target, which
|
||||
is more effective).
|
||||
|
||||
### d) Modifying the target
|
||||
|
||||
If the target has features that make fuzzing more difficult, e.g., checksums,
|
||||
HMAC, etc., then modify the source code so that checks for these values are
|
||||
removed. This can even be done safely for source code used in operational
|
||||
products by eliminating these checks within these AFL++ specific blocks:
|
||||
|
||||
```
|
||||
#ifdef FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION
|
||||
// say that the checksum or HMAC was fine - or whatever is required
|
||||
// to eliminate the need for the fuzzer to guess the right checksum
|
||||
return 0;
|
||||
#endif
|
||||
```
|
||||
|
||||
All AFL++ compilers will set this preprocessor definition automatically.
|
||||
|
||||
### e) Instrumenting the target
|
||||
|
||||
In this step, the target source code is compiled so that it can be fuzzed.
|
||||
|
||||
Basically, you have to tell the target build system that the selected AFL++
|
||||
compiler is used. Also - if possible - you should always configure the build
|
||||
system in such a way that the target is compiled statically and not dynamically.
|
||||
How to do this is described below.
|
||||
|
||||
The #1 rule when instrumenting a target is: avoid instrumenting shared libraries
|
||||
at all cost. You would need to set `LD_LIBRARY_PATH` to point to these, you
|
||||
could accidentally type "make install" and install them system wide - so don't.
|
||||
Really don't. **Always compile libraries you want to have instrumented as static
|
||||
and link these to the target program!**
|
||||
|
||||
Then build the target. (Usually with `make`.)
|
||||
|
||||
**NOTES**
|
||||
|
||||
1. Sometimes configure and build systems are fickle and do not like stderr
|
||||
output (and think this means a test failure) - which is something AFL++ likes
|
||||
to do to show statistics. It is recommended to disable AFL++ instrumentation
|
||||
reporting via `export AFL_QUIET=1`.
|
||||
|
||||
2. Sometimes configure and build systems error on warnings - these should be
|
||||
disabled (e.g., `--disable-werror` for some configure scripts).
|
||||
|
||||
3. In case the configure/build system complains about AFL++'s compiler and
|
||||
aborts, then set `export AFL_NOOPT=1` (which makes afl-cc behave like the
|
||||
real compiler) and run the configure step separately. For building the target
|
||||
afterwards this option has to be unset again!
|
||||
|
||||
#### configure
|
||||
|
||||
For `configure` build systems, this is usually done by:
|
||||
|
||||
```
|
||||
CC=afl-clang-fast CXX=afl-clang-fast++ ./configure --disable-shared
|
||||
```
|
||||
|
||||
Note that if you are using the (better) afl-clang-lto compiler, you also have to
|
||||
set `AR` to llvm-ar[-VERSION] and `RANLIB` to llvm-ranlib[-VERSION] - as is
|
||||
described in [instrumentation/README.lto.md](../instrumentation/README.lto.md).
|
||||
|
||||
#### CMake
|
||||
|
||||
For CMake build systems, this is usually done by:
|
||||
|
||||
```
|
||||
mkdir build; cd build; cmake -DCMAKE_C_COMPILER=afl-cc -DCMAKE_CXX_COMPILER=afl-c++ ..
|
||||
```
|
||||
|
||||
Note that if you are using the (better) afl-clang-lto compiler you also have to
|
||||
set AR to llvm-ar[-VERSION] and RANLIB to llvm-ranlib[-VERSION] - as is
|
||||
described in [instrumentation/README.lto.md](../instrumentation/README.lto.md).
|
||||
|
||||
#### Meson Build System
|
||||
|
||||
For the Meson Build System, you have to set the AFL++ compiler with the very
|
||||
first command!
|
||||
|
||||
```
|
||||
CC=afl-cc CXX=afl-c++ meson
|
||||
```
|
||||
|
||||
#### Other build systems or if configure/cmake didn't work
|
||||
|
||||
Sometimes `cmake` and `configure` do not pick up the AFL++ compiler or the
|
||||
`RANLIB`/`AR` that is needed - because this was just not foreseen by the
|
||||
developer of the target. Or they have non-standard options. Figure out if there
|
||||
is a non-standard way to set this, otherwise set up the build normally and edit
|
||||
the generated build environment afterwards manually to point it to the right
|
||||
compiler (and/or `RANLIB` and `AR`).
|
||||
|
||||
In complex, weird, alien build systems you can try this neat project:
|
||||
[https://github.com/fuzzah/exeptor](https://github.com/fuzzah/exeptor)
|
||||
|
||||
#### Linker scripts
|
||||
|
||||
If the project uses linker scripts to hide the symbols exported by the
|
||||
binary, then you may see errors such as:
|
||||
|
||||
```
|
||||
undefined symbol: __afl_area_ptr
|
||||
```
|
||||
|
||||
The solution is to modify the linker script to add:
|
||||
|
||||
```
|
||||
{
|
||||
global:
|
||||
__afl_*;
|
||||
}
|
||||
```
|
||||
|
||||
### f) Better instrumentation
|
||||
|
||||
If you just fuzz a target program as-is, you are wasting a great opportunity for
|
||||
much more fuzzing speed.
|
||||
|
||||
This variant requires the usage of afl-clang-lto, afl-clang-fast or
|
||||
afl-gcc-fast.
|
||||
|
||||
It is the so-called `persistent mode`, which is much, much faster but requires
|
||||
that you code a source file that is specifically calling the target functions
|
||||
that you want to fuzz, plus a few specific AFL++ functions around it. See
|
||||
[instrumentation/README.persistent_mode.md](../instrumentation/README.persistent_mode.md)
|
||||
for details.
|
||||
|
||||
Basically, if you do not fuzz a target in persistent mode, then you are just
|
||||
doing it for a hobby and not professionally :-).
|
||||
|
||||
### g) libfuzzer fuzzer harnesses with LLVMFuzzerTestOneInput()
|
||||
|
||||
libfuzzer `LLVMFuzzerTestOneInput()` harnesses are the defacto standard for
|
||||
fuzzing, and they can be used with AFL++ (and honggfuzz) as well!
|
||||
|
||||
Compiling them is as simple as:
|
||||
|
||||
```
|
||||
afl-clang-fast++ -fsanitize=fuzzer -o harness harness.cpp targetlib.a
|
||||
```
|
||||
|
||||
You can even use advanced libfuzzer features like `FuzzedDataProvider`,
|
||||
`LLVMFuzzerInitialize()` etc. and they will work!
|
||||
|
||||
The generated binary is fuzzed with afl-fuzz like any other fuzz target.
|
||||
|
||||
Bonus: the target is already optimized for fuzzing due to persistent mode and
|
||||
shared-memory test cases and hence gives you the fastest speed possible.
|
||||
|
||||
For more information, see
|
||||
[utils/aflpp_driver/README.md](../utils/aflpp_driver/README.md).
|
||||
|
||||
## 2. Preparing the fuzzing campaign
|
||||
|
||||
As you fuzz the target with mutated input, having inputs for the
|
||||
target that are as diverse as possible improves the efficiency a lot.
|
||||
|
||||
### a) Collecting inputs
|
||||
|
||||
To operate correctly, the fuzzer requires one or more starting files that
|
||||
contain a good example of the input data normally expected by the targeted
|
||||
application.
|
||||
|
||||
Try to gather valid inputs for the target from wherever you can. E.g., if it is
|
||||
the PNG picture format, try to find as many PNG files as possible, e.g., from
|
||||
reported bugs, test suites, random downloads from the internet, unit test case
|
||||
data - from all kind of PNG software.
|
||||
|
||||
If the input format is not known, you can also modify a target program to write
|
||||
the normal data it receives and processes to a file and use those files.
|
||||
|
||||
You can find many good examples of starting files in the
|
||||
[testcases/](../testcases) subdirectory that comes with this tool.
|
||||
|
||||
### b) Making the input corpus unique
|
||||
|
||||
Use the AFL++ tool `afl-cmin` to remove inputs from the corpus that do not
|
||||
produce a new path/coverage in the target:
|
||||
|
||||
1. Put all files from [step a](#a-collecting-inputs) into one directory, e.g.,
|
||||
`INPUTS`.
|
||||
2. Run afl-cmin:
|
||||
* If the target program is to be called by fuzzing as `bin/target INPUTFILE`,
|
||||
replace the INPUTFILE argument that the target program would read from with
|
||||
`@@`:
|
||||
|
||||
```
|
||||
afl-cmin -i INPUTS -o INPUTS_UNIQUE -- bin/target -someopt @@
|
||||
```
|
||||
|
||||
* If the target reads from stdin (standard input) instead, just omit the `@@`
|
||||
as this is the default:
|
||||
|
||||
```
|
||||
afl-cmin -i INPUTS -o INPUTS_UNIQUE -- bin/target -someopt
|
||||
```
|
||||
|
||||
This step is highly recommended, because afterwards the testcase corpus is not
|
||||
bloated with duplicates anymore, which would slow down the fuzzing progress!
|
||||
|
||||
### c) Minimizing all corpus files
|
||||
|
||||
The shorter the input files that still traverse the same path within the target,
|
||||
the better the fuzzing will be. This minimization is done with `afl-tmin`,
|
||||
however, it is a long process as this has to be done for every file:
|
||||
|
||||
```
|
||||
mkdir input
|
||||
cd INPUTS_UNIQUE
|
||||
for i in *; do
|
||||
afl-tmin -i "$i" -o "../input/$i" -- bin/target -someopt @@
|
||||
done
|
||||
```
|
||||
|
||||
This step can also be parallelized, e.g., with `parallel`.
|
||||
|
||||
Note that this step is optional, though.

### Done!

The INPUTS_UNIQUE/ directory from [step b](#b-making-the-input-corpus-unique) -
or, even better, the input/ directory if you minimized the corpus in
[step c](#c-minimizing-all-corpus-files) - is the resulting input corpus
directory to be used in fuzzing! :-)

## 3. Fuzzing the target

In this final step, fuzz the target. There are not that many important options
to run the target - unless you want to use many CPU cores/threads for the
fuzzing, which will make the fuzzing much more useful.

If you just use one instance for fuzzing, then you are fuzzing just for fun and
not seriously :-)

### a) Running afl-fuzz

Before you do even a test run of afl-fuzz, execute `sudo afl-system-config` (on
the host if you execute afl-fuzz in a Docker container). This reconfigures the
system for optimal speed - which afl-fuzz checks and bails otherwise. Set
`export AFL_SKIP_CPUFREQ=1` for afl-fuzz to skip this check if you cannot run
afl-system-config with root privileges on the host for whatever reason.

Note:

* There is also `sudo afl-persistent-config` which sets additional permanent
  boot options for a much better fuzzing performance.
* Both scripts improve your fuzzing performance but also decrease your system
  protection against attacks! So set strong firewall rules and only expose SSH
  as a network service if you use these (which is highly recommended).

If you have an input corpus from [step 2](#2-preparing-the-fuzzing-campaign),
then specify this directory with the `-i` option. Otherwise, create a new
directory and create a file with any content as test data in there.

If you do not want anything special, the defaults are usually already best,
hence all you need is to specify the seed input directory with the result of
step [2a) Collecting inputs](#a-collecting-inputs):

```
afl-fuzz -i input -o output -- bin/target -someopt @@
```

Note that the directory specified with `-o` will be created if it does not
exist.

It can be valuable to run afl-fuzz in a `screen` or `tmux` session so that you
can log off and so that afl-fuzz is not aborted if you are running it in a
remote SSH session where the connection fails in between. Only do that, though,
once you have verified that your fuzzing setup works! Run it like `screen -dmS
afl-main -- afl-fuzz -M main-$HOSTNAME -i ...` and it will start in a detached
screen session. To enter this session, type `screen -r afl-main`. You see - it
makes sense to name the screen session the same as the afl-fuzz `-M`/`-S`
instance :-) For more information on screen or tmux, check their documentation.

If you need to stop and re-start the fuzzing, use the same command line options
(or even change them by selecting a different power schedule or another mutation
mode!) and switch the input directory with a dash (`-`):

```
afl-fuzz -i - -o output -- bin/target -someopt @@
```

Adding a dictionary is helpful. You have the following options (a combined
example follows after this list):

* See the directory
  [dictionaries/](../dictionaries/), if something is already included for your
  data format, and tell afl-fuzz to load that dictionary by adding `-x
  dictionaries/FORMAT.dict`.
* With `afl-clang-lto`, you have an autodictionary generation for which you need
  to do nothing except to use afl-clang-lto as the compiler.
* With `afl-clang-fast`, you can set
  `AFL_LLVM_DICT2FILE=/full/path/to/new/file.dic` to automatically generate a
  dictionary during target compilation.
* You also have the option to generate a dictionary yourself during an
  independent run of the target, see
  [utils/libtokencap/README.md](../utils/libtokencap/README.md).
* Finally, you can also write a dictionary file manually, of course.
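For example, a minimal sketch combining the compiler-generated dictionary with
`-x` (the paths and build command are placeholders, not fixed AFL++ locations):

```
# generate a dictionary while compiling the target with afl-clang-fast
export AFL_LLVM_DICT2FILE=/tmp/target.dict
make CC=afl-clang-fast CXX=afl-clang-fast++
# use the generated dictionary (or a shipped one) when fuzzing
afl-fuzz -i input -o output -x /tmp/target.dict -- bin/target -someopt @@
```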
afl-fuzz has a variety of options that help to work around target quirks like
very specific locations for the input file (`-f`), performing deterministic
fuzzing (`-D`) and many more. Check out `afl-fuzz -h`.

We highly recommend that you set a memory limit for running the target with `-m`
which defines the maximum memory in MB. This prevents a potential out-of-memory
problem for your system plus helps you detect missing `malloc()` failure
handling in the target. Play around with various `-m` values until you find one
that safely works for all your input seeds (if you have good ones), and then
double or quadruple that.
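One way to check a candidate `-m` value against your seeds is to run each seed
once through `afl-showmap` with the same limit; a rough sketch (the 200 MB limit
is just an example value, not a recommendation):

```
# verify that all seeds still run fine under a 200 MB memory limit
for i in input/*; do
  afl-showmap -m 200 -o /dev/null -- bin/target -someopt "$i" || echo "fails: $i"
done
# if everything passes, fuzz with e.g. double that limit:
# afl-fuzz -m 400 -i input -o output -- bin/target -someopt @@
```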
By default, afl-fuzz never stops fuzzing. To terminate AFL++, press Control-C or
send it a SIGINT signal. There are also options to limit the number of
executions or the approximate runtime in seconds.

When you start afl-fuzz, you will see a user interface that shows the current
status:

![resources/screenshot.png](resources/screenshot.png)

All labels are explained in
[afl-fuzz_approach.md#understanding-the-status-screen](afl-fuzz_approach.md#understanding-the-status-screen).

### b) Keeping memory use and timeouts in check

Memory limits are not enforced by afl-fuzz by default and the system may run out
of memory. You can set a memory limit with the `-m` option; the value is in MB.
If this is too small for the target, you can usually see this by afl-fuzz
bailing with the message that it could not connect to the forkserver.

Consider setting low values for `-m` and `-t`.

For programs that are nominally very fast, but get sluggish for some inputs, you
can also try setting `-t` values that are more punishing than what `afl-fuzz`
dares to use on its own. On fast and idle machines, going down to `-t 5` may be
a viable plan.

The `-m` parameter is worth looking at, too. Some programs can end up spending a
fair amount of time allocating and initializing megabytes of memory when
presented with pathological inputs. Low `-m` values can make them give up sooner
and not waste CPU time.

### c) Using multiple cores

If you want to seriously fuzz, then use as many cores/threads as possible to
fuzz your target.

On the same machine - due to the design of how AFL++ works - there is a maximum
number of CPU cores/threads that are useful; using more than that degrades the
overall performance instead. This value depends on the target, and the limit is
between 32 and 64 cores per machine.

If you have the RAM, it is highly recommended to run the instances with caching
of the test cases. Depending on the average test case size (and those found
during fuzzing) and their number, a value between 50-500MB is recommended. You
can set the cache size (in MB) by setting the environment variable
`AFL_TESTCACHE_SIZE`.

There should be one main fuzzer (`-M main-$HOSTNAME` option) and as many
secondary fuzzers (e.g., `-S variant1`) as you have cores that you use. Every
`-M`/`-S` entry needs a unique name (which can be anything), however, the same
`-o` output directory location has to be used for all instances.

For every secondary fuzzer there should be a variation, e.g.:
* one should fuzz the target that was compiled differently: with sanitizers
  activated (`export AFL_USE_ASAN=1 ; export AFL_USE_UBSAN=1 ; export
  AFL_USE_CFISAN=1`)
* one or two should fuzz the target with CMPLOG/redqueen (see above), at least
  one cmplog instance should follow transformations (`-l AT`)
* one to three fuzzers should fuzz a target compiled with laf-intel/COMPCOV (see
  above). Important note: If you run more than one laf-intel/COMPCOV fuzzer and
  you want them to share their intermediate results, the main fuzzer (`-M`) must
  be one of them! (Although this is not really recommended.)

All other secondaries should be used like this (a combined launch sketch follows
after this list):
* a quarter to a third with the MOpt mutator enabled: `-L 0`
* run with a different power schedule, recommended are: `fast` (default),
  `explore`, `coe`, `lin`, `quad`, `exploit`, and `rare` which you can set with
  the `-p` option, e.g., `-p explore`. See the
  [FAQ](FAQ.md#what-are-power-schedules) for details.
* a few instances should use the old queue cycling with `-Z`
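Putting the above together, a minimal sketch of such a campaign could look like
this - the instance names, the separate sanitizer/cmplog builds of the target
and the number of secondaries are examples, not requirements; run each instance
in its own terminal, screen or tmux window:

```
export AFL_TESTCACHE_SIZE=250
# main instance
afl-fuzz -M main-$HOSTNAME -i input -o out -- bin/target -someopt @@
# cmplog secondary that also follows transformations
afl-fuzz -S cmplog1 -c bin/target.cmplog -l AT -i input -o out -- bin/target -someopt @@
# secondary running a sanitizer-enabled build of the target
afl-fuzz -S asan1 -i input -o out -- bin/target.asan -someopt @@
# further secondaries with varied schedules / mutators
afl-fuzz -S s1 -p explore -i input -o out -- bin/target -someopt @@
afl-fuzz -S s2 -L 0 -i input -o out -- bin/target -someopt @@
afl-fuzz -S s3 -Z -i input -o out -- bin/target -someopt @@
```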
Also, it is recommended to set `export AFL_IMPORT_FIRST=1` to load test cases
from the other fuzzers in the campaign first.

If you have a large corpus, a corpus from a previous run, or are fuzzing in a
CI, then also set `export AFL_CMPLOG_ONLY_NEW=1` and `export AFL_FAST_CAL=1`.

You can also use different fuzzers. If you are using AFL spinoffs or
AFL-conforming fuzzers, then just use the same `-o` directory and give each a
unique `-S` name. Examples are:
* [Fuzzolic](https://github.com/season-lab/fuzzolic)
* [symcc](https://github.com/eurecom-s3/symcc/)
* [Eclipser](https://github.com/SoftSec-KAIST/Eclipser/)
* [AFLsmart](https://github.com/aflsmart/aflsmart)
* [FairFuzz](https://github.com/carolemieux/afl-rb)
* [Neuzz](https://github.com/Dongdongshe/neuzz)
* [Angora](https://github.com/AngoraFuzzer/Angora)

A long list can be found at
[https://github.com/Microsvuln/Awesome-AFL](https://github.com/Microsvuln/Awesome-AFL).

However, you can also sync AFL++ with honggfuzz, libfuzzer with `-entropic=1`,
etc. Just point the main fuzzer (`-M`) with the `-F` option to where the
queue/work directory of the other fuzzer is, e.g., `-F /src/target/honggfuzz`.
Using honggfuzz (with `-n 1` or `-n 2`) and libfuzzer in parallel is highly
recommended!
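For example (the honggfuzz work directory path is just a placeholder):

```
# the -M instance pulls in test cases found by the external fuzzer
afl-fuzz -M main-$HOSTNAME -F /src/target/honggfuzz -i input -o out -- bin/target -someopt @@
```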
### d) Using multiple machines for fuzzing

Maybe you have more than one machine you want to fuzz the same target on. Start
the `afl-fuzz` (and perhaps libfuzzer, honggfuzz, ...) orchestra as you like,
just ensure that you have one and only one `-M` instance per server, and that
its name is unique, hence the recommendation for `-M main-$HOSTNAME`.

Now there are three strategies on how you can sync between the servers:
* never: sounds weird, but this makes every server an island and increases the
  chance that each one follows different paths into the target. You can make
  this even more interesting by giving different seeds to each server.
* regularly (~4h): this ensures that all fuzzing campaigns on the servers "see"
  the same thing. It is like fuzzing on one huge server.
* in intervals of 1/10th of the overall expected runtime: this tries to combine
  both. Each campaign on a server keeps some individuality in the paths it
  explores, but if one gets stuck where another has found progress, that
  progress is handed over and unsticks it.

The syncing process itself is very simple. As the `-M main-$HOSTNAME` instance
syncs to all `-S` secondaries as well as to other fuzzers, you have to copy only
this directory to the other machines.

Let's say all servers have the `-o out` directory in /target/foo/out, you
created a file `servers.txt` which contains the hostnames of all participating
servers, and you have an ssh key deployed to all of them, then run:

```bash
for FROM in `cat servers.txt`; do
  for TO in `cat servers.txt`; do
    rsync -rlpogtz --rsh=ssh $FROM:/target/foo/out/main-$FROM $TO:/target/foo/out/
  done
done
```

You can run this manually or via a cron job - as you need it. There is a more
complex and configurable script in
[utils/distributed_fuzzing](../utils/distributed_fuzzing).
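For example, a crontab entry that performs this sync every 4 hours could look
like this (assuming the loop above was saved as a script at the hypothetical
path `/usr/local/bin/afl-sync.sh`):

```
# m  h    dom mon dow  command
0    */4  *   *   *    /usr/local/bin/afl-sync.sh >> /var/log/afl-sync.log 2>&1
```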
### e) The status of the fuzz campaign

AFL++ comes with the `afl-whatsup` script to show the status of the fuzzing
campaign.

Just supply the directory that afl-fuzz is given with the `-o` option and you
will see a detailed status of every fuzzer in that campaign plus a summary.

To have only the summary, use the `-s` switch, e.g., `afl-whatsup -s out/`.

If you have multiple servers, then run the command after a sync, otherwise you
have to execute this script on each server.

Another tool to inspect the current state and history of a specific instance is
afl-plot, which generates an index.html file and graphs that show how the
fuzzing instance is performing. The syntax is `afl-plot instance_dir web_dir`,
e.g., `afl-plot out/default /srv/www/htdocs/plot`.

### f) Stopping fuzzing, restarting fuzzing, adding new seeds

To stop an afl-fuzz run, press Control-C.

To restart an afl-fuzz run, just reuse the same command line but replace the `-i
directory` with `-i -` or set `AFL_AUTORESUME=1`.

If you want to add new seeds to a fuzzing campaign, you can run a temporary
fuzzing instance, e.g., when your main fuzzer is using `-o out` and the new
seeds are in the `newseeds/` directory:

```
AFL_BENCH_JUST_ONE=1 AFL_FAST_CAL=1 afl-fuzz -i newseeds -o out -S newseeds -- ./target
```

### g) Checking the coverage of the fuzzing

The `corpus count` value is a bad indicator for checking how good the coverage
is.

A better indicator - if you use default llvm instrumentation with at least
version 9 - is to use `afl-showmap` with the collect coverage option `-C` on the
output directory:

```
$ afl-showmap -C -i out -o /dev/null -- ./target -params @@
...
[*] Using SHARED MEMORY FUZZING feature.
[*] Target map size: 9960
[+] Processed 7849 input files.
[+] Captured 4331 tuples (highest value 255, total values 67130596) in '/dev/null'.
[+] A coverage of 4331 edges were achieved out of 9960 existing (43.48%) with 7849 input files.
```

It is even better to check out the exact lines of code that have been reached -
and which have not been found so far.

An "easy" helper script for this is
[https://github.com/vanhauser-thc/afl-cov](https://github.com/vanhauser-thc/afl-cov),
just follow the README of that separate project.
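As a rough sketch of how that typically looks (the exact options are documented
in the afl-cov README; the gcov-instrumented build and all paths here are
assumptions):

```
# build a separate copy of the target with gcov instrumentation
CFLAGS="--coverage" CXXFLAGS="--coverage" LDFLAGS="--coverage" ./configure && make
# replay the afl-fuzz queue against it and generate a line coverage report;
# AFL_FILE is the placeholder token afl-cov substitutes with each queue file
afl-cov -d out --overwrite --code-dir . \
        --coverage-cmd "./target-gcov -someopt AFL_FILE"
```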
If you see that an important area or a feature has not been covered so far, then
try to find an input that is able to reach that area, start a new secondary in
that fuzzing campaign with that seed as input, let it run for a few minutes,
then terminate it. The main node will pick it up and make it available to the
other secondary nodes over time. Set `export AFL_NO_AFFINITY=1` or `export
AFL_TRY_AFFINITY=1` if you have no free core.

Note that in nearly all cases you can never reach full coverage. A lot of
functionality is usually dependent on mutually exclusive options that would need
individual fuzzing campaigns, each with one of these options set. E.g., if you
fuzz a library to convert image formats and your target is the png to tiff API,
then you will not touch any of the other library APIs and features.

### h) How long to fuzz a target?

This is a difficult question. Basically, if no new path is found for a long time
(e.g., for a day or a week), then you can expect that your fuzzing won't be
fruitful anymore. However, often this just means that you should switch out
secondaries for others, e.g., custom mutator modules, sync to very different
fuzzers, etc.

Keep the queue/ directories (for future fuzzings of the same or similar targets)
and use them to seed other good fuzzers like libfuzzer with the -entropic switch
or honggfuzz.

### i) Improve the speed!

* Use [persistent mode](../instrumentation/README.persistent_mode.md) (x2-x20
  speed increase).
* If you do not use shmem persistent mode, use `AFL_TMPDIR` to place the input
  file on a tmpfs location, see [env_variables.md](env_variables.md); a sketch
  follows after this list.
* Linux: Improve kernel performance: modify `/etc/default/grub`, set
  `GRUB_CMDLINE_LINUX_DEFAULT="ibpb=off ibrs=off kpti=off l1tf=off mds=off
  mitigations=off no_stf_barrier noibpb noibrs nopcid nopti
  nospec_store_bypass_disable nospectre_v1 nospectre_v2 pcid=off pti=off
  spec_store_bypass_disable=off spectre_v2=off stf_barrier=off"`; then run
  `update-grub` and `reboot` (warning: this makes the system more insecure) -
  you can also just run `sudo afl-persistent-config`.
* Linux: Running on an `ext2` filesystem with the `noatime` mount option will be
  a bit faster than on any other journaling filesystem.
* Use your cores! See [3c) Using multiple cores](#c-using-multiple-cores).
* Run `sudo afl-system-config` before starting the first afl-fuzz instance after
  a reboot.
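A minimal sketch for the `AFL_TMPDIR` item above (mount point and size are just
examples):

```
# create a small tmpfs and let afl-fuzz place the current input file there
sudo mkdir -p /mnt/afl-ramdisk
sudo mount -t tmpfs -o size=512M,mode=777 tmpfs /mnt/afl-ramdisk
export AFL_TMPDIR=/mnt/afl-ramdisk
afl-fuzz -i input -o output -- bin/target -someopt @@
```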
### j) Going beyond crashes

Fuzzing is a wonderful and underutilized technique for discovering non-crashing
design and implementation errors, too. Quite a few interesting bugs have been
found by modifying the target programs to call `abort()` when, say:

- Two bignum libraries produce different outputs when given the same
  fuzzer-generated input.

- An image library produces different outputs when asked to decode the same
  input image several times in a row.

- A serialization/deserialization library fails to produce stable outputs when
  iteratively serializing and deserializing fuzzer-supplied data.

- A compression library produces an output inconsistent with the input file when
  asked to compress and then decompress a particular blob.

Implementing these or similar sanity checks usually takes very little time; if
you are the maintainer of a particular package, you can make this code
conditional with `#ifdef FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION` (a flag also
shared with libfuzzer and honggfuzz) or `#ifdef __AFL_COMPILER` (this one is
just for AFL++).
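If you cannot modify the target itself, a crude after-the-fact variant of the
same idea is to run such a consistency check over the corpus the fuzzer has
already produced; a sketch, assuming a hypothetical `decode` tool that writes
its result to stdout:

```
# flag queue entries for which two consecutive decodes disagree
for i in out/default/queue/id:*; do
  ./decode "$i" > /tmp/a 2>/dev/null
  ./decode "$i" > /tmp/b 2>/dev/null
  cmp -s /tmp/a /tmp/b || echo "unstable output: $i"
done
```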
### k) Known limitations & areas for improvement

Here are some of the most important caveats for AFL++:

- AFL++ detects faults by checking for the first spawned process dying due to a
  signal (SIGSEGV, SIGABRT, etc.). Programs that install custom handlers for
  these signals may need to have the relevant code commented out. In the same
  vein, faults in child processes spawned by the fuzzed target may evade
  detection unless you manually add some code to catch that.

- As with any other brute-force tool, the fuzzer offers limited coverage if
  encryption, checksums, cryptographic signatures, or compression are used to
  wholly wrap the actual data format to be tested.

  To work around this, you can comment out the relevant checks (see
  utils/libpng_no_checksum/ for inspiration); if this is not possible, you can
  also write a postprocessor, one of the hooks of custom mutators. See
  [custom_mutators.md](custom_mutators.md) on how to use
  `AFL_CUSTOM_MUTATOR_LIBRARY`.

- There are some unfortunate trade-offs with ASAN and 64-bit binaries. This
  isn't due to any specific fault of afl-fuzz.

- There is no direct support for fuzzing network services, background daemons,
  or interactive apps that require UI interaction to work. You may need to make
  simple code changes to make them behave in a more traditional way. Preeny or
  libdesock may offer a relatively simple option, too - see:
  [https://github.com/zardus/preeny](https://github.com/zardus/preeny) or
  [https://github.com/fkie-cad/libdesock](https://github.com/fkie-cad/libdesock)

  Some useful tips for modifying network-based services can also be found at:
  [https://www.fastly.com/blog/how-to-fuzz-server-american-fuzzy-lop](https://www.fastly.com/blog/how-to-fuzz-server-american-fuzzy-lop)

- Occasionally, sentient machines rise against their creators. If this happens
  to you, please consult
  [https://lcamtuf.coredump.cx/prep/](https://lcamtuf.coredump.cx/prep/).

Beyond this, see [INSTALL.md](INSTALL.md) for platform-specific tips.

## 4. Triaging crashes

The coverage-based grouping of crashes usually produces a small data set that
can be quickly triaged manually or with a very simple GDB or Valgrind script.
Every crash is also traceable to its parent non-crashing test case in the queue,
making it easier to diagnose faults.

Having said that, it's important to acknowledge that some fuzzing crashes can be
difficult to quickly evaluate for exploitability without a lot of debugging and
code analysis work. To assist with this task, afl-fuzz supports a very unique
"crash exploration" mode enabled with the `-C` flag.

In this mode, the fuzzer takes one or more crashing test cases as the input and
uses its feedback-driven fuzzing strategies to very quickly enumerate all code
paths that can be reached in the program while keeping it in the crashing state.

Mutations that do not result in a crash are rejected; so are any changes that do
not affect the execution path.

The output is a small corpus of files that can be very rapidly examined to see
what degree of control the attacker has over the faulting address, or whether it
is possible to get past an initial out-of-bounds read - and see what lies
beneath.

Oh, one more thing: for test case minimization, give afl-tmin a try. The tool
can be operated in a very simple way:

```shell
./afl-tmin -i test_case -o minimized_result -- /path/to/program [...]
```

The tool works with crashing and non-crashing test cases alike. In the crash
mode, it will happily accept instrumented and non-instrumented binaries. In the
non-crashing mode, the minimizer relies on standard AFL++ instrumentation to
make the file simpler without altering the execution path.

The minimizer accepts the `-m`, `-t`, `-f`, and `@@` syntax in a manner
compatible with afl-fuzz.

Another tool in AFL++ is the afl-analyze tool. It takes an input file, attempts
to sequentially flip bytes and observes the behavior of the tested program. It
then color-codes the input based on which sections appear to be critical and
which are not; while not bulletproof, it can often offer quick insights into
complex file formats.

## 5. CI fuzzing

Some notes on continuous integration (CI) fuzzing - this kind of fuzzing differs
from normal fuzzing campaigns as the runs are much shorter.

1. Always:
   * LTO has a much longer compile time, which is at odds with short fuzzing
     runs - hence use afl-clang-fast instead.
   * If you compile with CMPLOG, then you can save compilation time and reuse
     that compiled target both for the `-c` option and as the main fuzz target.
     This will impact the speed by ~15% though.
   * `AFL_FAST_CAL` - enables fast calibration; this halves the time the
     saturated corpus needs to be loaded.
   * `AFL_CMPLOG_ONLY_NEW` - only perform cmplog on new finds, not the initial
     corpus, as this very likely has been done for them already.
   * Keep the generated corpus, use afl-cmin and reuse it every time!

2. Additionally randomize the AFL++ compilation options, e.g.:
   * 30% for `AFL_LLVM_CMPLOG`
   * 5% for `AFL_LLVM_LAF_ALL`

3. Also randomize the afl-fuzz runtime options, e.g. (a sketch follows after
   this list):
   * 65% for `AFL_DISABLE_TRIM`
   * 50% for `AFL_KEEP_TIMEOUTS`
   * 50% use a dictionary generated by `AFL_LLVM_DICT2FILE`
   * 40% use MOpt (`-L 0`)
   * 40% for `AFL_EXPAND_HAVOC_NOW`
   * 20% for old queue processing (`-Z`)
   * for CMPLOG targets, 70% for `-l 2`, 10% for `-l 3`, 20% for `-l 2AT`

4. Do *not* run any `-M` modes; just running `-S` modes is better for CI
   fuzzing. `-M` enables old queue handling etc. which is good for a fuzzing
   campaign but not good for short CI runs.
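A sketch of how such runtime randomization could be scripted - the percentages
mirror the list above; this is not an official AFL++ script, and the corpus,
dictionary and target paths are placeholders:

```bash
#!/bin/bash
# randomly enable afl-fuzz options for a short CI run
OPTS=""
[ $((RANDOM % 100)) -lt 65 ] && export AFL_DISABLE_TRIM=1
[ $((RANDOM % 100)) -lt 50 ] && export AFL_KEEP_TIMEOUTS=1
[ $((RANDOM % 100)) -lt 40 ] && export AFL_EXPAND_HAVOC_NOW=1
[ $((RANDOM % 100)) -lt 40 ] && OPTS="$OPTS -L 0"
[ $((RANDOM % 100)) -lt 20 ] && OPTS="$OPTS -Z"
[ $((RANDOM % 100)) -lt 50 ] && [ -f target.dict ] && OPTS="$OPTS -x target.dict"
# -V limits the run to 10 minutes, -S because -M is not wanted in CI
afl-fuzz -S "ci-$$" -V 600 $OPTS -i corpus -o out -- ./target @@
```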
What this can look like can be seen, e.g., at AFL++'s setup in Google's
[oss-fuzz](https://github.com/google/oss-fuzz/blob/master/infra/base-images/base-builder/compile_afl)
and
[clusterfuzz](https://github.com/google/clusterfuzz/blob/master/src/clusterfuzz/_internal/bot/fuzzers/afl/launcher.py).

## The End

Check out the [FAQ](FAQ.md). Maybe it answers a question that you might not
even have known you had ;-)

This is basically all you need to know to professionally run fuzzing campaigns.
If you want to know more, the tons of texts in [docs/](./) will have you
covered.

Note that there are also a lot of tools out there that help fuzzing with AFL++
(some might be deprecated or unsupported), see
[third_party_tools.md](third_party_tools.md).
# Ideas for AFL++

In the following, we describe a variety of ideas that could be implemented for
future AFL++ versions.

## Analysis software

Currently, analysis is done by using afl-plot, which is rather outdated. A GTK
or browser tool to create run-time analysis based on fuzzer_stats, queue/id*
information and plot_data that allows for zooming in and out, changing min/max
display values etc., and doing that for a single run, different runs, and
campaigns vs. campaigns. Interesting values are execs, execs/s, edges
discovered (total, when each edge was discovered and which other fuzzers share
finding that edge), and test cases executed. It should be selectable which value
is on the X and Y axis, plus zoom factor, log scaling on-off, etc.

Mentor: vanhauser-thc

## WASM Instrumentation

Currently, AFL++ can be used for source code fuzzing and traditional binaries.
With the rise of WASM as a compile target, however, a novel way of
instrumentation needs to be implemented for binaries compiled to WebAssembly.
This can either be done by inserting instrumentation directly into the WASM AST,
or by patching feedback into a WASM VM of choice, similar to the current Unicorn
instrumentation.

Mentor: any

## Support other programming languages

Other programming languages also use LLVM, hence they could be (easily?)
supported for fuzzing, e.g., mono, swift, go, kotlin native, fortran, ...

GCC also supports: Objective-C, Fortran, Ada, Go, and D (according to the
[GCC homepage](https://gcc.gnu.org/)).

LLVM is also used by: Rust, LLGo (Go), kaleidoscope (Haskell), flang (Fortran),
emscripten (JavaScript, WASM), ilwasm (CIL (C#)) (according to
[LLVM frontends](https://gist.github.com/axic/62d66fb9d8bccca6cc48fa9841db9241)).

Mentor: vanhauser-thc

## Machine Learning

Something with machine learning, better than
[NEUZZ](https://github.com/dongdongshe/neuzz) :-) Either improve a single
mutator through learning of many different bugs (a bug class) or gather deep
insights about a single target beforehand (CFG, DFG, VFG, ...?) and improve
performance for a single target.

Mentor: domenukk

## Your idea!

Finally, we are open to proposals! Create an issue at
https://github.com/AFLplusplus/AFLplusplus/issues and let's discuss :-)
# Important changes in AFL++

This document lists important changes in AFL++, for example, major behavior
changes.

## From version 3.00 onwards

With AFL++ 4.00, we introduced the following changes from previous behaviors:
* the complete documentation was overhauled and restructured thanks to @llzmb!
* a new CMPLOG target format requires recompiling CMPLOG targets for use with
  AFL++ 4.0 onwards
* better naming for several fields in the UI

With AFL++ 3.15, we introduced the following changes from previous behaviors:
* afl-cmin and afl-showmap `-Ci` now descend into subdirectories like afl-fuzz
  `-i` does (but note that afl-cmin.bash does not)

With AFL++ 3.14, we introduced the following changes from previous behaviors:
* afl-fuzz: deterministic fuzzing is not a default for `-M main` anymore
* afl-cmin/afl-showmap -i now descends into subdirectories (afl-cmin.bash,
  however, does not)

With AFL++ 3.10, we introduced the following changes from previous behaviors:
* The '+' feature of the `-t` option now means to auto-calculate the timeout
  with the value given being the maximum timeout. The original meaning of
  "skipping timeouts instead of abort" is now inherent to the `-t` option.

With AFL++ 3.00, we introduced changes that break some previous AFL and AFL++
behaviors and defaults:
* There are no llvm_mode and gcc_plugin subdirectories anymore and there is
  only one compiler: afl-cc. All previous compilers now symlink to this one.
  All instrumentation source code is now in the `instrumentation/` folder.
* The gcc_plugin was replaced with a new version submitted by AdaCore that
  supports more features. Thank you!
* QEMU mode got upgraded to QEMU 5.1, but to be able to build this, a current
  ninja build tool version and python3 setuptools are required. QEMU mode also
  got new options like snapshotting, instrumenting specific shared libraries,
  etc. Additionally, QEMU 5.1 supports more CPU targets, so this is really
  worth it.
* When instrumenting targets, afl-cc will no longer supersede optimizations if
  any were given. This allows fuzzing targets built regularly, like those for
  debug or release versions.
* afl-fuzz:
  * if neither `-M` nor `-S` is specified, `-S default` is assumed, so more
    fuzzers can easily be added later
  * the `-i` input directory option now descends into subdirectories. It also
    does not fail on crashes and too large files; instead, it skips them and
    uses them for splicing mutations
  * `-m none` is now the default, set memory limits (in MB) with, e.g., `-m
    250`
  * deterministic fuzzing is now disabled by default (unless using `-M`) and
    can be enabled with `-D`
  * a caching of test cases can now be performed and can be modified by
    editing config.h for `TESTCASE_CACHE` or by specifying the environment
    variable `AFL_TESTCACHE_SIZE` (in MB). Good values are between 50-500
    (default: 50).
  * `-M` mains do not perform trimming
* `examples/` got renamed to `utils/`
* `libtokencap/`, `libdislocator/`, and `qdbi_mode/` were moved to `utils/`
* afl-cmin/afl-cmin.bash now search first in `PATH` and last in `AFL_PATH`
# AFL "Life Pro Tips"

Bite-sized advice for those who understand the basics, but can't be bothered
to read or memorize every other piece of documentation for AFL.

## Get more bang for your buck by using fuzzing dictionaries.

See [dictionaries/README.md](../dictionaries/README.md) to learn how.

## You can get the most out of your hardware by parallelizing AFL jobs.

See [parallel_fuzzing.md](parallel_fuzzing.md) for step-by-step tips.

## Improve the odds of spotting memory corruption bugs with libdislocator.so!

It's easy. Consult [utils/libdislocator/README.md](../utils/libdislocator/README.md) for usage tips.

## Want to understand how your target parses a particular input file?

Try the bundled `afl-analyze` tool; it's got colors and all!

## You can visually monitor the progress of your fuzzing jobs.

Run the bundled `afl-plot` utility to generate browser-friendly graphs.

## Need to monitor AFL jobs programmatically?

Check out the `fuzzer_stats` file in the AFL output dir or try `afl-whatsup`.

## Puzzled by something showing up in red or purple in the AFL UI?

It could be important - consult docs/status_screen.md right away!

## Know your target? Convert it to persistent mode for a huge performance gain!

Consult section #5 in README.llvm.md for tips.

## Using clang?

Check out instrumentation/ for a faster alternative to afl-gcc!

## Did you know that AFL can fuzz closed-source or cross-platform binaries?

Check out qemu_mode/README.md and unicorn_mode/README.md for more.

## Did you know that afl-fuzz can minimize any test case for you?

Try the bundled `afl-tmin` tool - and get small repro files fast!

## Not sure if a crash is exploitable? AFL can help you figure it out.

Specify `-C` to enable the peruvian were-rabbit mode.

## Trouble dealing with a machine uprising? Relax, we've all been there.

Find essential survival tips at http://lcamtuf.coredump.cx/prep/.

## Want to automatically spot non-crashing memory handling bugs?

Try running an AFL-generated corpus through ASAN, MSAN, or Valgrind.

## Good selection of input files is critical to a successful fuzzing job.

See docs/perf_tips.md for pro tips.

## You can improve the odds of automatically spotting stack corruption issues.

Specify `AFL_HARDEN=1` in the environment to enable hardening flags.

## Bumping into problems with non-reproducible crashes?

It happens, but usually isn't hard to diagnose. See section #7 in README.md for
tips.

## Fuzzing is not just about memory corruption issues in the codebase.

Add some sanity-checking `assert()` / `abort()` statements to effortlessly catch
logic bugs.

## Hey kid... pssst... want to figure out how AFL really works?

Check out docs/technical_details.md for all the gory details in one place!

## There's a ton of third-party helper tools designed to work with AFL!

Be sure to check out docs/sister_projects.md before writing your own.

## Need to fuzz the command-line arguments of a particular program?

You can find a simple solution in utils/argv_fuzzing.

## Attacking a format that uses checksums?

Remove the checksum-checking code or use a postprocessor!
See `afl_custom_post_process` in custom_mutators/examples/example.c for more.
# Tips for parallel fuzzing

This document talks about synchronizing afl-fuzz jobs on a single machine
or across a fleet of systems. See README.md for the general instruction manual.

Note that this document is rather outdated. Please refer to the main document
section on multiple core usage
[../README.md#Using multiple cores](../README.md#b-using-multiple-coresthreads)
for up-to-date strategies!

## 1) Introduction

Every copy of afl-fuzz will take up one CPU core. This means that on an
n-core system, you can almost always run around n concurrent fuzzing jobs with
virtually no performance hit (you can use the afl-gotcpu tool to make sure).

In fact, if you rely on just a single job on a multi-core system, you will
be underutilizing the hardware. So, parallelization is always the right way to
go.

When targeting multiple unrelated binaries or using the tool in
"non-instrumented" (-n) mode, it is perfectly fine to just start up several
fully separate instances of afl-fuzz. The picture gets more complicated when
you want to have multiple fuzzers hammering a common target: if a hard-to-hit
but interesting test case is synthesized by one fuzzer, the remaining instances
will not be able to use that input to guide their work.

To help with this problem, afl-fuzz offers a simple way to synchronize test
cases on the fly.

Note that afl++ has AFLfast's power schedules implemented.
It is therefore a good idea to use different power schedules if you run
several instances in parallel. See [power_schedules.md](power_schedules.md)

Alternatively, running other AFL spinoffs in parallel can be of value,
e.g. Angora (https://github.com/AngoraFuzzer/Angora/)

## 2) Single-system parallelization

If you wish to parallelize a single job across multiple cores on a local
system, simply create a new, empty output directory ("sync dir") that will be
shared by all the instances of afl-fuzz; and then come up with a naming scheme
for every instance - say, "fuzzer01", "fuzzer02", etc.

Run the first one ("main node", -M) like this:

```
./afl-fuzz -i testcase_dir -o sync_dir -M fuzzer01 [...other stuff...]
```

...and then, start up secondary (-S) instances like this:

```
./afl-fuzz -i testcase_dir -o sync_dir -S fuzzer02 [...other stuff...]
./afl-fuzz -i testcase_dir -o sync_dir -S fuzzer03 [...other stuff...]
```

Each fuzzer will keep its state in a separate subdirectory, like so:

  /path/to/sync_dir/fuzzer01/

Each instance will also periodically rescan the top-level sync directory
for any test cases found by other fuzzers - and will incorporate them into
its own fuzzing when they are deemed interesting enough.
For performance reasons, only the -M main node syncs the queue with everyone;
the -S secondary nodes will only sync from the main node.

The difference between the -M and -S modes is that the main instance will
still perform deterministic checks, while the secondary instances will
proceed straight to random tweaks.

Note that you must always have one -M main instance!
Running multiple -M instances is wasteful!

You can also monitor the progress of your jobs from the command line with the
provided afl-whatsup tool. When the instances are no longer finding new paths,
it's probably time to stop.

WARNING: Exercise caution when explicitly specifying the -f option. Each fuzzer
must use a separate temporary file; otherwise, things will go south. One safe
example may be:

```
./afl-fuzz [...] -S fuzzer10 -f file10.txt ./fuzzed/binary @@
./afl-fuzz [...] -S fuzzer11 -f file11.txt ./fuzzed/binary @@
./afl-fuzz [...] -S fuzzer12 -f file12.txt ./fuzzed/binary @@
```

This is not a concern if you use @@ without -f and let afl-fuzz come up with the
file name.

## 3) Multiple -M mains

There is support for parallelizing the deterministic checks.
This is only needed where

1. many new paths are found fast over a long time and it looks unlikely that
   the main node will ever catch up, and
2. deterministic fuzzing is actively helping path discovery (you can see this
   in the main node for the first four lines in the "fuzzing strategy yields"
   section. If the ratio `found/attempts` is high, then it is effective. It
   most commonly isn't.)

Only if both are true is it beneficial to have more than one main.
You can leverage this by creating -M instances like so:

```
./afl-fuzz -i testcase_dir -o sync_dir -M mainA:1/3 [...]
./afl-fuzz -i testcase_dir -o sync_dir -M mainB:2/3 [...]
./afl-fuzz -i testcase_dir -o sync_dir -M mainC:3/3 [...]
```

... where the first value after ':' is the sequential ID of a particular main
instance (starting at 1), and the second value is the total number of fuzzers to
distribute the deterministic fuzzing across. Note that if you boot up fewer
fuzzers than indicated by the second number passed to -M, you may end up with
poor coverage.

## 4) Syncing with non-afl fuzzers or independent instances

A -M main node can be told with the `-F other_fuzzer_queue_directory` option
to sync results from other fuzzers, e.g. libfuzzer or honggfuzz.

Only the specified directory will be synced into afl, not subdirectories.
The specified directory does not need to exist yet at the start of afl.

The `-F` option can be passed to the main node several times.
## 5) Multi-system parallelization

The basic operating principle for multi-system parallelization is similar to
the mechanism explained in section 2. The key difference is that you need to
write a simple script that performs two actions:

- Uses SSH with authorized_keys to connect to every machine and retrieve
  a tar archive of the /path/to/sync_dir/<main_node(s)> directory local to
  the machine.
  It is best to use a naming scheme that includes the host name and the fact
  that it is a main node (e.g. main1, main2) in the fuzzer ID, so that you can
  do something like:

  ```sh
  for host in `cat HOSTLIST`; do
    ssh user@$host "tar -czf - sync/${host}_main*/" > $host.tgz
  done
  ```

- Distributes and unpacks these files on all the remaining machines, e.g.:

  ```sh
  for srchost in `cat HOSTLIST`; do
    for dsthost in `cat HOSTLIST`; do
      test "$srchost" = "$dsthost" && continue
      ssh user@$srchost 'tar -kxzf -' < $dsthost.tgz
    done
  done
  ```

There is an example of such a script in utils/distributed_fuzzing/.

There are other (older) more featured, experimental tools:
* https://github.com/richo/roving
* https://github.com/MartijnB/disfuzz-afl

However, these do not support syncing just main nodes (yet).

When developing custom test case sync code, there are several optimizations
to keep in mind:

- The synchronization does not have to happen very often; running the
  task every 60 minutes or even less often at later fuzzing stages is
  fine.

- There is no need to synchronize crashes/ or hangs/; you only need to
  copy over queue/* (and ideally, also fuzzer_stats).

- It is not necessary (and not advisable!) to overwrite existing files;
  the -k option in tar is a good way to avoid that.

- There is no need to fetch directories for fuzzers that are not running
  locally on a particular machine, and were simply copied over onto that
  system during earlier runs.

- For large fleets, you will want to consolidate tarballs for each host,
  as this will let you use n SSH connections for sync, rather than n*(n-1).

  You may also want to implement staged synchronization. For example, you
  could have 10 groups of systems, with group 1 pushing test cases only
  to group 2, group 2 pushing them only to group 3, and so on, with group
  10 eventually feeding back to group 1.

  This arrangement would allow interesting test cases to propagate across
  the fleet without having to copy every fuzzer queue to every single host.

- You do not want a "main" instance of afl-fuzz on every system; you should
  run them all with -S, and just designate a single process somewhere within
  the fleet to run with -M.

- Syncing is only necessary for the main nodes on a system. It is possible
  to run main-less with only secondaries. However, then you need to find out
  which secondary took over the temporary role to be the main node. Look for
  the `is_main_node` file in the fuzzer directories, e.g.
  `sync-dir/hostname-*/is_main_node`.

It is *not* advisable to skip the synchronization script and run the fuzzers
directly on a network filesystem; unexpected latency and unkillable processes
in I/O wait state can mess things up.

## 6) Remote monitoring and data collection

You can use screen, nohup, tmux, or something equivalent to run remote
instances of afl-fuzz. If you redirect the program's output to a file, it will
automatically switch from a fancy UI to more limited status reports. There is
also basic machine-readable information which is always written to the
fuzzer_stats file in the output directory. Locally, that information can be
interpreted with afl-whatsup.

In principle, you can use the status screen of the main (-M) instance to
monitor the overall fuzzing progress and decide when to stop. In this
mode, the most important signal is just that no new paths are being found
for a longer while. If you do not have a main instance, just pick any
single secondary instance to watch and go by that.

You can also rely on that instance's output directory to collect the
synthesized corpus that covers all the noteworthy paths discovered anywhere
within the fleet. Secondary (-S) instances do not require any special
monitoring, other than just making sure that they are up.

Keep in mind that crashing inputs are *not* automatically propagated to the
main instance, so you may still want to monitor for crashes fleet-wide
from within your synchronization or health checking scripts (see afl-whatsup).

## 7) Asymmetric setups

It is perhaps worth noting that all of the following is permitted:

- Running afl-fuzz in conjunction with other guided tools that can extend
  coverage (e.g., via concolic execution). Third-party tools simply need to
  follow the protocol described above for pulling new test cases from
  out_dir/<fuzzer_id>/queue/* and writing their own finds to sequentially
  numbered id:nnnnnn files in out_dir/<ext_tool_id>/queue/*.

- Running some of the synchronized fuzzers with different (but related)
  target binaries. For example, simultaneously stress-testing several
  different JPEG parsers (say, IJG jpeg and libjpeg-turbo) while sharing
  the discovered test cases can have synergistic effects and improve the
  overall coverage.

  (In this case, running one -M instance per target is necessary.)

- Having some of the fuzzers invoke the binary in different ways.
  For example, 'djpeg' supports several DCT modes, configurable with
  a command-line flag, while 'dwebp' supports incremental and one-shot
  decoding. In some scenarios, going after multiple distinct modes and then
  pooling test cases will improve coverage.

- Much less convincingly, running the synchronized fuzzers with different
  starting test cases (e.g., progressive and standard JPEG) or dictionaries.
  The synchronization mechanism ensures that the test sets will get fairly
  homogeneous over time, but it introduces some initial variability.
## Tips for performance optimization

This file provides tips for troubleshooting slow or wasteful fuzzing jobs.
See README.md for the general instruction manual.

## 1. Keep your test cases small

This is probably the single most important step to take! Large test cases do
not merely take more time and memory to be parsed by the tested binary, but
also make the fuzzing process dramatically less efficient in several other
ways.

To illustrate, let's say that you're randomly flipping bits in a file, one bit
at a time. Let's assume that if you flip bit #47, you will hit a security bug;
flipping any other bit just results in an invalid document.

Now, if your starting test case is 100 bytes long, you will have a 71% chance of
triggering the bug within the first 1,000 execs - not bad! But if the test case
is 1 kB long, the probability that we will randomly hit the right pattern in
the same timeframe goes down to 11%. And if it has 10 kB of non-essential
cruft, the odds plunge to 1%.

On top of that, with larger inputs, the binary may be now running 5-10x times
slower than before - so the overall drop in fuzzing efficiency may be easily
as high as 500x or so.

In practice, this means that you shouldn't fuzz image parsers with your
vacation photos. Generate a tiny 16x16 picture instead, and run it through
`jpegtran` or `pngcrunch` for good measure. The same goes for most other types
of documents.

There's plenty of small starting test cases in ../testcases/ - try them out
or submit new ones!

If you want to start with a larger, third-party corpus, run `afl-cmin` with an
aggressive timeout on that data set first.

## 2. Use a simpler target

Consider using a simpler target binary in your fuzzing work. For example, for
image formats, bundled utilities such as `djpeg`, `readpng`, or `gifhisto` are
considerably (10-20x) faster than the convert tool from ImageMagick - all while
exercising roughly the same library-level image parsing code.

Even if you don't have a lightweight harness for a particular target, remember
that you can always use another, related library to generate a corpus that will
be then manually fed to a more resource-hungry program later on.

Also note that reading the fuzzing input via stdin is faster than reading from
a file.

## 3. Use LLVM persistent instrumentation

The LLVM mode offers a "persistent", in-process fuzzing mode that can
work well for certain types of self-contained libraries, and for fast targets,
can offer performance gains up to 5-10x; and a "deferred fork server" mode
that can offer huge benefits for programs with high startup overhead. Both
modes require you to edit the source code of the fuzzed program, but the
changes often amount to just strategically placing a single line or two.

If there are important data comparisons performed (e.g.
`strcmp(ptr, MAGIC_HDR)`), then using laf-intel (see
instrumentation/README.laf-intel.md) will help `afl-fuzz` a lot to get to the
important parts in the code.

If you are only interested in specific parts of the code being fuzzed, you can
use an instrument list to limit instrumentation to the files that are actually
relevant. This improves the speed and accuracy of afl. See
instrumentation/README.instrument_list.md
|
||||
|
||||
Check for any parameters or settings that obviously improve performance. For
|
||||
example, the djpeg utility that comes with IJG jpeg and libjpeg-turbo can be
|
||||
called with:
|
||||
|
||||
```bash
|
||||
-dct fast -nosmooth -onepass -dither none -scale 1/4
|
||||
```
|
||||
|
||||
...and that will speed things up. There is a corresponding drop in the quality
|
||||
of decoded images, but it's probably not something you care about.
|
||||
|
||||
In some programs, it is possible to disable output altogether, or at least use
|
||||
an output format that is computationally inexpensive. For example, with image
|
||||
transcoding tools, converting to a BMP file will be a lot faster than to PNG.
|
||||
|
||||
With some laid-back parsers, enabling "strict" mode (i.e., bailing out after
|
||||
first error) may result in smaller files and improved run time without
|
||||
sacrificing coverage; for example, for sqlite, you may want to specify -bail.
|
||||
|
||||
If the program is still too slow, you can use `strace -tt` or an equivalent
|
||||
profiling tool to see if the targeted binary is doing anything silly.
|
||||
Sometimes, you can speed things up simply by specifying `/dev/null` as the
|
||||
config file, or disabling some compile-time features that aren't really needed
|
||||
for the job (try `./configure --help`). One of the notoriously resource-consuming
|
||||
things would be calling other utilities via `exec*()`, `popen()`, `system()`, or
|
||||
equivalent calls; for example, tar can invoke external decompression tools
|
||||
when it decides that the input file is a compressed archive.
|
||||
|
||||
Some programs may also intentionally call `sleep()`, `usleep()`, or `nanosleep()`;
|
||||
vim is a good example of that. Other programs may attempt `fsync()` and so on.
|
||||
There are third-party libraries that make it easy to get rid of such code,
|
||||
e.g.:
|
||||
|
||||
https://launchpad.net/libeatmydata
|
||||
|
||||
In programs that are slow due to unavoidable initialization overhead, you may
|
||||
want to try the LLVM deferred forkserver mode (see README.llvm.md),
|
||||
which can give you speed gains up to 10x, as mentioned above.
|
||||
|
||||
Last but not least, if you are using ASAN and the performance is unacceptable,
|
||||
consider turning it off for now, and manually examining the generated corpus
|
||||
with an ASAN-enabled binary later on.
|
||||
|
||||
## 5. Instrument just what you need
|
||||
|
||||
Instrument just the libraries you actually want to stress-test right now, one
|
||||
at a time. Let the program use system-wide, non-instrumented libraries for
|
||||
any functionality you don't actually want to fuzz. For example, in most
|
||||
cases, it doesn't make sense to instrument `libgmp` just because you're testing a
|
||||
crypto app that relies on it for bignum math.
|
||||
|
||||
Beware of programs that come with oddball third-party libraries bundled with
|
||||
their source code (Spidermonkey is a good example of this). Check `./configure`
|
||||
options to use non-instrumented system-wide copies instead.
|
||||
|
||||
## 6. Parallelize your fuzzers
|
||||
|
||||
The fuzzer is designed to need ~1 core per job. This means that on a, say,
|
||||
4-core system, you can easily run four parallel fuzzing jobs with relatively
|
||||
little performance hit. For tips on how to do that, see parallel_fuzzing.md.
|
||||
|
||||
The `afl-gotcpu` utility can help you understand if you still have idle CPU
|
||||
capacity on your system. (It won't tell you about memory bandwidth, cache
|
||||
misses, or similar factors, but they are less likely to be a concern.)
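For example, a sketch of a small parallel setup (instance names and paths are illustrative):

```bash
# Run each instance in its own terminal or screen/tmux window; they all
# sync through the same -o directory.
afl-fuzz -M main -i in -o sync -- ./target @@
afl-fuzz -S sec1 -i in -o sync -- ./target @@
afl-fuzz -S sec2 -i in -o sync -- ./target @@
afl-gotcpu   # afterwards, check whether idle CPU capacity remains
```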
|
||||
|
||||
## 7. Keep memory use and timeouts in check
|
||||
|
||||
Consider setting low values for `-m` and `-t`.
|
||||
|
||||
For programs that are nominally very fast, but get sluggish for some inputs,
|
||||
you can also try setting `-t` values that are more punishing than what `afl-fuzz`
|
||||
dares to use on its own. On fast and idle machines, going down to `-t 5` may be
|
||||
a viable plan.
|
||||
|
||||
The `-m` parameter is worth looking at, too. Some programs can end up spending
|
||||
a fair amount of time allocating and initializing megabytes of memory when
|
||||
presented with pathological inputs. Low `-m` values can make them give up sooner
|
||||
and not waste CPU time.
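For example (the values are purely illustrative and should be tuned per target):

```bash
# 5 ms execution timeout and a 100 MB memory limit.
afl-fuzz -t 5 -m 100 -i in -o out -- ./target @@
```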
|
||||
|
||||
## 8. Check OS configuration
|
||||
|
||||
There are several OS-level factors that may affect fuzzing speed:
|
||||
|
||||
- If there is no risk of power loss, then run your fuzzing on a tmpfs
|
||||
partition. This increases performance noticeably.
|
||||
Alternatively you can use `AFL_TMPDIR` to point to a tmpfs location to
|
||||
just write the input file to a tmpfs (see the example after this list).
|
||||
- High system load. Use idle machines where possible. Kill any non-essential
|
||||
CPU hogs (idle browser windows, media players, complex screensavers, etc).
|
||||
- Network filesystems, either used for fuzzer input / output, or accessed by
|
||||
the fuzzed binary to read configuration files (pay special attention to the
|
||||
home directory - many programs search it for dot-files).
|
||||
- Disable all the spectre, meltdown etc. security countermeasures in the
|
||||
kernel if your machine is properly separated:
|
||||
|
||||
```
|
||||
ibpb=off ibrs=off kpti=off l1tf=off mds=off mitigations=off
|
||||
no_stf_barrier noibpb noibrs nopcid nopti nospec_store_bypass_disable
|
||||
nospectre_v1 nospectre_v2 pcid=off pti=off spec_store_bypass_disable=off
|
||||
spectre_v2=off stf_barrier=off
|
||||
```
|
||||
In most Linux distributions, you can put these parameters into the `GRUB_CMDLINE_LINUX_DEFAULT`
|
||||
variable in `/etc/default/grub`.
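As referenced in the tmpfs item above, a minimal sketch (requires root; mount point and size are illustrative):

```bash
# Back the fuzzer's working input file with RAM instead of disk.
sudo mkdir -p /mnt/fuzz-ram
sudo mount -t tmpfs -o size=1G tmpfs /mnt/fuzz-ram
export AFL_TMPDIR=/mnt/fuzz-ram
```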
|
||||
|
||||
The following changes are made when executing `afl-system-config`:
|
||||
|
||||
- On-demand CPU scaling. The Linux `ondemand` governor performs its analysis
|
||||
on a particular schedule and is known to underestimate the needs of
|
||||
short-lived processes spawned by `afl-fuzz` (or any other fuzzer). On Linux,
|
||||
this can be fixed with:
|
||||
|
||||
``` bash
|
||||
cd /sys/devices/system/cpu
|
||||
echo performance | tee cpu*/cpufreq/scaling_governor
|
||||
```
|
||||
|
||||
On other systems, the impact of CPU scaling will be different; when fuzzing,
|
||||
use OS-specific tools to find out if all cores are running at full speed.
|
||||
- Transparent huge pages. Some allocators, such as `jemalloc`, can incur a
|
||||
heavy fuzzing penalty when transparent huge pages (THP) are enabled in the
|
||||
kernel. You can disable this via:
|
||||
|
||||
```bash
|
||||
echo never > /sys/kernel/mm/transparent_hugepage/enabled
|
||||
```
|
||||
|
||||
- Suboptimal scheduling strategies. The significance of this will vary from
|
||||
one target to another, but on Linux, you may want to make sure that the
|
||||
following options are set:
|
||||
|
||||
```bash
|
||||
echo 1 >/proc/sys/kernel/sched_child_runs_first
|
||||
echo 1 >/proc/sys/kernel/sched_autogroup_enabled
|
||||
```
|
||||
|
||||
Setting a different scheduling policy for the fuzzer process - say
|
||||
`SCHED_RR` - can usually speed things up, too, but needs to be done with
|
||||
care.
|
||||
|
4
docs/resources/0_fuzzing_process_overview.drawio.svg
Normal file
After Width: | Height: | Size: 32 KiB |
4
docs/resources/1_instrument_target.drawio.svg
Normal file
After Width: | Height: | Size: 14 KiB |
4
docs/resources/2_prepare_campaign.drawio.svg
Normal file
After Width: | Height: | Size: 10 KiB |
4
docs/resources/3_fuzz_target.drawio.svg
Normal file
After Width: | Height: | Size: 11 KiB |
4
docs/resources/4_manage_campaign.drawio.svg
Normal file
After Width: | Height: | Size: 13 KiB |
Before Width: | Height: | Size: 581 KiB After Width: | Height: | Size: 581 KiB |
@ -424,7 +424,7 @@
|
||||
"steppedLine": false,
|
||||
"targets": [
|
||||
{
|
||||
"expr": "fuzzing{type=\"unique_crashes\"}",
|
||||
"expr": "fuzzing{type=\"saved_crashes\"}",
|
||||
"interval": "",
|
||||
"legendFormat": "",
|
||||
"refId": "A"
|
||||
@ -519,7 +519,7 @@
|
||||
"steppedLine": false,
|
||||
"targets": [
|
||||
{
|
||||
"expr": "fuzzing{type=\"unique_hangs\"}",
|
||||
"expr": "fuzzing{type=\"saved_hangs\"}",
|
||||
"interval": "",
|
||||
"legendFormat": "",
|
||||
"refId": "A"
|
||||
@ -926,7 +926,7 @@
|
||||
"steppedLine": false,
|
||||
"targets": [
|
||||
{
|
||||
"expr": "fuzzing{type=\"cur_path\"}",
|
||||
"expr": "fuzzing{type=\"cur_item\"}",
|
||||
"interval": "",
|
||||
"legendFormat": "",
|
||||
"refId": "A"
|
||||
@ -936,7 +936,7 @@
|
||||
"timeFrom": null,
|
||||
"timeRegions": [],
|
||||
"timeShift": null,
|
||||
"title": "Curent path",
|
||||
"title": "Current fuzz item",
|
||||
"tooltip": {
|
||||
"shared": true,
|
||||
"sort": 0,
|
||||
@ -1116,7 +1116,7 @@
|
||||
"steppedLine": false,
|
||||
"targets": [
|
||||
{
|
||||
"expr": "fuzzing{type=\"paths_favored\"}",
|
||||
"expr": "fuzzing{type=\"corpus_favored\"}",
|
||||
"interval": "",
|
||||
"legendFormat": "",
|
||||
"refId": "A"
|
||||
@ -1135,7 +1135,7 @@
|
||||
}
|
||||
],
|
||||
"timeShift": null,
|
||||
"title": "Path Favored",
|
||||
"title": "Corpus Favored",
|
||||
"tooltip": {
|
||||
"shared": true,
|
||||
"sort": 0,
|
||||
@ -1428,7 +1428,7 @@
|
||||
"steppedLine": false,
|
||||
"targets": [
|
||||
{
|
||||
"expr": "fuzzing{type=\"paths_imported\"}",
|
||||
"expr": "fuzzing{type=\"corpus_imported\"}",
|
||||
"interval": "",
|
||||
"legendFormat": "",
|
||||
"refId": "A"
|
||||
@ -1447,7 +1447,7 @@
|
||||
}
|
||||
],
|
||||
"timeShift": null,
|
||||
"title": "Path Imported",
|
||||
"title": "Corpus Imported",
|
||||
"tooltip": {
|
||||
"shared": true,
|
||||
"sort": 0,
|
BIN
docs/resources/screenshot.png
Normal file
After Width: | Height: | Size: 141 KiB |
Before Width: | Height: | Size: 160 KiB After Width: | Height: | Size: 160 KiB |
@ -1,143 +1,190 @@
|
||||
# Remote monitoring with StatsD
|
||||
# Remote monitoring and metrics visualization
|
||||
|
||||
StatsD allows you to receive and aggregate metrics from a wide range of applications and retransmit them to the backend of your choice.
|
||||
This enables you to create nice and readable dashboards containing all the information you need on your fuzzer instances.
|
||||
No need to write your own statistics parsing system, deploy and maintain it to all your instances, sync with your graph rendering system...
|
||||
AFL++ can send out metrics as StatsD messages. For remote monitoring and
|
||||
visualization of the metrics, you can set up a tool chain. For example, with
|
||||
Prometheus and Grafana. All tools are free and open source.
|
||||
|
||||
The available metrics are :
|
||||
This enables you to create nice and readable dashboards containing all the
|
||||
information you need on your fuzzer instances. There is no need to write your
|
||||
own statistics parsing system, deploy and maintain it to all your instances, and
|
||||
sync with your graph rendering system.
|
||||
|
||||
Compared to the default integrated UI of AFL++, this can help you to visualize
|
||||
trends and the fuzzing state over time. You might be able to see when the
|
||||
fuzzing process has reached a state of no progress and visualize what are the
|
||||
"best strategies" for your targets (according to your own criteria). You can do
|
||||
so without logging into each instance individually.
|
||||
|
||||

|
||||
|
||||
This is an example visualization with Grafana. The dashboard can be imported
|
||||
with [this JSON template](resources/grafana-afl++.json).
|
||||
|
||||
## AFL++ metrics and StatsD
|
||||
|
||||
StatsD allows you to receive and aggregate metrics from a wide range of
|
||||
applications and retransmit them to a backend of your choice.
|
||||
|
||||
From AFL++, StatsD can receive the following metrics:
|
||||
- cur_item
|
||||
- cycle_done
|
||||
- cycles_wo_finds
|
||||
- edges_found
|
||||
- execs_done
|
||||
- execs_per_sec
|
||||
- paths_total
|
||||
- paths_favored
|
||||
- paths_found
|
||||
- paths_imported
|
||||
- havoc_expansion
|
||||
- max_depth
|
||||
- cur_path
|
||||
- corpus_favored
|
||||
- corpus_found
|
||||
- corpus_imported
|
||||
- corpus_count
|
||||
- pending_favs
|
||||
- pending_total
|
||||
- variable_paths
|
||||
- unique_crashes
|
||||
- unique_hangs
|
||||
- total_crashes
|
||||
- slowest_exec_ms
|
||||
- edges_found
|
||||
- total_crashes
|
||||
- saved_crashes
|
||||
- saved_hangs
|
||||
- var_byte_count
|
||||
- havoc_expansion
|
||||
- corpus_variable
|
||||
|
||||
Compared to the default integrated UI, these metrics give you the opportunity to visualize trends and fuzzing state over time.
|
||||
By doing so, you might be able to see when the fuzzing process has reached a state of no progress, visualize what are the "best strategies"
|
||||
(according to your own criteria) for your targets, etc. And doing so without requiring to log into each instance manually.
|
||||
Depending on your StatsD server, you will be able to monitor, trigger alerts, or
|
||||
perform actions based on these metrics (for example: alert on slow exec/s for a
|
||||
new build, threshold of crashes, time since last crash > X, and so on).
|
||||
|
||||
An example visualisation may look like the following:
|
||||

|
||||
## Setting environment variables in AFL++
|
||||
|
||||
*Notes: The exact same dashboard can be imported with [this JSON template](statsd/grafana-afl++.json).*
|
||||
1. To enable the StatsD metrics collection on your fuzzer instances, set the
|
||||
environment variable `AFL_STATSD=1`. By default, AFL++ will send the metrics
|
||||
over UDP to 127.0.0.1:8125.
|
||||
|
||||
## How to use
|
||||
2. To enable tags for each metric based on their format (banner and
|
||||
afl_version), set the environment variable `AFL_STATSD_TAGS_FLAVOR`. By
|
||||
default, no tags will be added to the metrics.
|
||||
|
||||
To enable the StatsD reporting on your fuzzer instances, you need to set the environment variable `AFL_STATSD=1`.
|
||||
The available values are the following:
|
||||
- `dogstatsd`
|
||||
- `influxdb`
|
||||
- `librato`
|
||||
- `signalfx`
|
||||
|
||||
Setting `AFL_STATSD_TAGS_FLAVOR` to the provider of your choice will assign tags / labels to each metric based on their format.
|
||||
The possible values are `dogstatsd`, `librato`, `signalfx` or `influxdb`.
|
||||
For more information on these env vars, check out `docs/env_variables.md`.
|
||||
For more information on environment variables, see
|
||||
[env_variables.md](env_variables.md).
|
||||
|
||||
The simplest way of using this feature is to use any metric provider and change the host/port of your StatsD daemon,
|
||||
with `AFL_STATSD_HOST` and `AFL_STATSD_PORT`, if required (defaults are `localhost` and port `8125`).
|
||||
To get started, here are some instructions with free and open source tools.
|
||||
The following setup is based on Prometheus, statsd_exporter and Grafana.
|
||||
Grafana here is not mandatory, but gives you some nice graphs and features.
|
||||
Note: When using multiple fuzzer instances with StatsD it is *strongly*
|
||||
recommended to set up `AFL_STATSD_TAGS_FLAVOR` to match your StatsD server.
|
||||
This will allow you to see individual fuzzer performance, detect bad ones,
|
||||
and see the progress of each strategy.
|
||||
|
||||
Depending on your setup and infrastructure, you may want to run these applications not on your fuzzer instances.
|
||||
Only one instance of these 3 application is required for all your fuzzers.
|
||||
3. Optional: To set the host and port of your StatsD daemon, set
|
||||
`AFL_STATSD_HOST` and `AFL_STATSD_PORT`. The default values are `localhost`
|
||||
and `8125`.
|
||||
|
||||
To simplify everything, we will use Docker and docker-compose.
|
||||
Make sure you have them both installed. On most common Linux distributions, it's as simple as:
|
||||
## Installing and setting up StatsD, Prometheus, and Grafana
|
||||
|
||||
```sh
|
||||
curl -fsSL https://get.docker.com -o get-docker.sh
|
||||
sh get-docker.sh
|
||||
```
|
||||
The easiest way to install and set up the infrastructure is with Docker and
|
||||
Docker Compose.
|
||||
|
||||
Once that's done, we can create the infrastructure.
|
||||
Create and move into the directory of your choice. This will store all the configurations files required.
|
||||
Depending on your fuzzing setup and infrastructure, you may not want to run
|
||||
these applications on your fuzzer instances. This setup may be modified before
|
||||
use in a production environment; for example, adding passwords, creating volumes
|
||||
for storage, tweaking the metrics gathering to get host metrics (CPU, RAM, and
|
||||
so on).
|
||||
|
||||
First, create a `docker-compose.yml` containing the following:
|
||||
```yml
|
||||
version: '3'
|
||||
For all your fuzzing instances, only one instance of Prometheus and Grafana is
|
||||
required. The
|
||||
[statsd exporter](https://registry.hub.docker.com/r/prom/statsd-exporter)
|
||||
converts the StatsD metrics to Prometheus. If you are using a provider that
|
||||
supports StatsD directly, you can skip this part of the setup.
|
||||
|
||||
networks:
|
||||
statsd-net:
|
||||
driver: bridge
|
||||
You can create and move the infrastructure files into a directory of your
|
||||
choice. The directory will store all the required configuration files.
|
||||
|
||||
To install and set up Prometheus and Grafana:
|
||||
|
||||
1. Install Docker and Docker Compose:
|
||||
|
||||
```sh
|
||||
curl -fsSL https://get.docker.com -o get-docker.sh
|
||||
sh get-docker.sh
|
||||
```
|
||||
|
||||
2. Create a `docker-compose.yml` containing the following:
|
||||
|
||||
```yml
|
||||
version: '3'
|
||||
|
||||
services:
|
||||
prometheus:
|
||||
image: prom/prometheus
|
||||
container_name: prometheus
|
||||
volumes:
|
||||
- ./prometheus.yml:/prometheus.yml
|
||||
command:
|
||||
- '--config.file=/prometheus.yml'
|
||||
restart: unless-stopped
|
||||
ports:
|
||||
- "9090:9090"
|
||||
networks:
|
||||
- statsd-net
|
||||
statsd-net:
|
||||
driver: bridge
|
||||
|
||||
statsd_exporter:
|
||||
image: prom/statsd-exporter
|
||||
container_name: statsd_exporter
|
||||
volumes:
|
||||
- ./statsd_mapping.yml:/statsd_mapping.yml
|
||||
command:
|
||||
- "--statsd.mapping-config=/statsd_mapping.yml"
|
||||
ports:
|
||||
- "9102:9102/tcp"
|
||||
- "8125:9125/udp"
|
||||
networks:
|
||||
- statsd-net
|
||||
|
||||
grafana:
|
||||
image: grafana/grafana
|
||||
container_name: grafana
|
||||
restart: unless-stopped
|
||||
ports:
|
||||
- "3000:3000"
|
||||
networks:
|
||||
- statsd-net
|
||||
```
|
||||
services:
|
||||
prometheus:
|
||||
image: prom/prometheus
|
||||
container_name: prometheus
|
||||
volumes:
|
||||
- ./prometheus.yml:/prometheus.yml
|
||||
command:
|
||||
- '--config.file=/prometheus.yml'
|
||||
restart: unless-stopped
|
||||
ports:
|
||||
- "9090:9090"
|
||||
networks:
|
||||
- statsd-net
|
||||
|
||||
Then `prometheus.yml`
|
||||
```yml
|
||||
global:
|
||||
scrape_interval: 15s
|
||||
evaluation_interval: 15s
|
||||
statsd_exporter:
|
||||
image: prom/statsd-exporter
|
||||
container_name: statsd_exporter
|
||||
volumes:
|
||||
- ./statsd_mapping.yml:/statsd_mapping.yml
|
||||
command:
|
||||
- "--statsd.mapping-config=/statsd_mapping.yml"
|
||||
ports:
|
||||
- "9102:9102/tcp"
|
||||
- "8125:9125/udp"
|
||||
networks:
|
||||
- statsd-net
|
||||
|
||||
scrape_configs:
|
||||
- job_name: 'fuzzing_metrics'
|
||||
static_configs:
|
||||
- targets: ['statsd_exporter:9102']
|
||||
```
|
||||
grafana:
|
||||
image: grafana/grafana
|
||||
container_name: grafana
|
||||
restart: unless-stopped
|
||||
ports:
|
||||
- "3000:3000"
|
||||
networks:
|
||||
- statsd-net
|
||||
```
|
||||
|
||||
And finally `statsd_mapping.yml`
|
||||
```yml
|
||||
mappings:
|
||||
- match: "fuzzing.*"
|
||||
name: "fuzzing"
|
||||
labels:
|
||||
type: "$1"
|
||||
```
|
||||
3. Create a `prometheus.yml` containing the following:
|
||||
|
||||
Run `docker-compose up -d`.
|
||||
```yml
|
||||
global:
|
||||
scrape_interval: 15s
|
||||
evaluation_interval: 15s
|
||||
|
||||
Everything should now be setup, you are now able to run your fuzzers with
|
||||
scrape_configs:
|
||||
- job_name: 'fuzzing_metrics'
|
||||
static_configs:
|
||||
- targets: ['statsd_exporter:9102']
|
||||
```
|
||||
|
||||
4. Create a `statsd_mapping.yml` containing the following:
|
||||
|
||||
```yml
|
||||
mappings:
|
||||
- match: "fuzzing.*"
|
||||
name: "fuzzing"
|
||||
labels:
|
||||
type: "$1"
|
||||
```
|
||||
|
||||
5. Run `docker-compose up -d`.
|
||||
|
||||
## Running AFL++ with StatsD
|
||||
|
||||
To run your fuzzing instances:
|
||||
|
||||
```
|
||||
AFL_STATSD_TAGS_FLAVOR=dogstatsd AFL_STATSD=1 afl-fuzz -M test-fuzzer-1 -i i -o o ./bin/my-application @@
|
||||
AFL_STATSD_TAGS_FLAVOR=dogstatsd AFL_STATSD=1 afl-fuzz -S test-fuzzer-2 -i i -o o ./bin/my-application @@
|
||||
AFL_STATSD_TAGS_FLAVOR=dogstatsd AFL_STATSD=1 afl-fuzz -M test-fuzzer-1 -i i -o o [./bin/my-application] @@
|
||||
AFL_STATSD_TAGS_FLAVOR=dogstatsd AFL_STATSD=1 afl-fuzz -S test-fuzzer-2 -i i -o o [./bin/my-application] @@
|
||||
...
|
||||
```
|
||||
|
||||
This setup may be modified before use in a production environment. Depending on your needs: adding passwords, creating volumes for storage,
|
||||
tweaking the metrics gathering to get host metrics (CPU, RAM ...).
|
||||
```
|
Before Width: | Height: | Size: 114 KiB |
@ -1,319 +0,0 @@
|
||||
# Sister projects
|
||||
|
||||
This doc lists some of the projects that are inspired by, derived from,
|
||||
designed for, or meant to integrate with AFL. See README.md for the general
|
||||
instruction manual.
|
||||
|
||||
!!!
|
||||
!!! This list is outdated and needs an update, missing: e.g. Angora, FairFuzz
|
||||
!!!
|
||||
|
||||
## Support for other languages / environments:
|
||||
|
||||
### Python AFL (Jakub Wilk)
|
||||
|
||||
Allows fuzz-testing of Python programs. Uses custom instrumentation and its
|
||||
own forkserver.
|
||||
|
||||
http://jwilk.net/software/python-afl
|
||||
|
||||
### Go-fuzz (Dmitry Vyukov)
|
||||
|
||||
AFL-inspired guided fuzzing approach for Go targets:
|
||||
|
||||
https://github.com/dvyukov/go-fuzz
|
||||
|
||||
### afl.rs (Keegan McAllister)
|
||||
|
||||
Allows Rust features to be easily fuzzed with AFL (using the LLVM mode).
|
||||
|
||||
https://github.com/kmcallister/afl.rs
|
||||
|
||||
### OCaml support (KC Sivaramakrishnan)
|
||||
|
||||
Adds AFL-compatible instrumentation to OCaml programs.
|
||||
|
||||
https://github.com/ocamllabs/opam-repo-dev/pull/23
|
||||
http://canopy.mirage.io/Posts/Fuzzing
|
||||
|
||||
### AFL for GCJ Java and other GCC frontends (-)
|
||||
|
||||
GCC Java programs are actually supported out of the box - simply rename
|
||||
afl-gcc to afl-gcj. Unfortunately, by default, unhandled exceptions in GCJ do
|
||||
not result in abort() being called, so you will need to manually add a
|
||||
top-level exception handler that exits with SIGABRT or something equivalent.
|
||||
|
||||
Other GCC-supported languages should be fairly easy to get working, but may
|
||||
face similar problems. See https://gcc.gnu.org/frontends.html for a list of
|
||||
options.
|
||||
|
||||
## AFL-style in-process fuzzer for LLVM (Kostya Serebryany)
|
||||
|
||||
Provides an evolutionary instrumentation-guided fuzzing harness that allows
|
||||
some programs to be fuzzed without the fork / execve overhead. (Similar
|
||||
functionality is now available as the "persistent" feature described in
|
||||
[the llvm_mode readme](../instrumentation/README.llvm.md))
|
||||
|
||||
http://llvm.org/docs/LibFuzzer.html
|
||||
|
||||
## TriforceAFL (Tim Newsham and Jesse Hertz)
|
||||
|
||||
Leverages QEMU full system emulation mode to allow AFL to target operating
|
||||
systems and other alien worlds:
|
||||
|
||||
https://www.nccgroup.trust/us/about-us/newsroom-and-events/blog/2016/june/project-triforce-run-afl-on-everything/
|
||||
|
||||
## WinAFL (Ivan Fratric)
|
||||
|
||||
As the name implies, allows you to fuzz Windows binaries (using DynamoRio).
|
||||
|
||||
https://github.com/ivanfratric/winafl
|
||||
|
||||
Another Windows alternative may be:
|
||||
|
||||
https://github.com/carlosgprado/BrundleFuzz/
|
||||
|
||||
## Network fuzzing
|
||||
|
||||
### Preeny (Yan Shoshitaishvili)
|
||||
|
||||
Provides a fairly simple way to convince dynamically linked network-centric
|
||||
programs to read from a file or not fork. Not AFL-specific, but described as
|
||||
useful by many users. Some assembly required.
|
||||
|
||||
https://github.com/zardus/preeny
|
||||
|
||||
## Distributed fuzzing and related automation
|
||||
|
||||
### roving (Richo Healey)
|
||||
|
||||
A client-server architecture for effortlessly orchestrating AFL runs across
|
||||
a fleet of machines. You don't want to use this on systems that face the
|
||||
Internet or live in other untrusted environments.
|
||||
|
||||
https://github.com/richo/roving
|
||||
|
||||
### Distfuzz-AFL (Martijn Bogaard)
|
||||
|
||||
Simplifies the management of afl-fuzz instances on remote machines. The
|
||||
author notes that the current implementation isn't secure and should not
|
||||
be exposed on the Internet.
|
||||
|
||||
https://github.com/MartijnB/disfuzz-afl
|
||||
|
||||
### AFLDFF (quantumvm)
|
||||
|
||||
A nice GUI for managing AFL jobs.
|
||||
|
||||
https://github.com/quantumvm/AFLDFF
|
||||
|
||||
### afl-launch (Ben Nagy)
|
||||
|
||||
Batch AFL launcher utility with a simple CLI.
|
||||
|
||||
https://github.com/bnagy/afl-launch
|
||||
|
||||
### AFL Utils (rc0r)
|
||||
|
||||
Simplifies the triage of discovered crashes, start parallel instances, etc.
|
||||
|
||||
https://github.com/rc0r/afl-utils
|
||||
|
||||
### AFL crash analyzer (floyd)
|
||||
|
||||
Another crash triage tool:
|
||||
|
||||
https://github.com/floyd-fuh/afl-crash-analyzer
|
||||
|
||||
### afl-extras (fekir)
|
||||
|
||||
Collect data, parallel afl-tmin, startup scripts.
|
||||
|
||||
https://github.com/fekir/afl-extras
|
||||
|
||||
### afl-fuzzing-scripts (Tobias Ospelt)
|
||||
|
||||
Simplifies starting up multiple parallel AFL jobs.
|
||||
|
||||
https://github.com/floyd-fuh/afl-fuzzing-scripts/
|
||||
|
||||
### afl-sid (Jacek Wielemborek)
|
||||
|
||||
Allows users to more conveniently build and deploy AFL via Docker.
|
||||
|
||||
https://github.com/d33tah/afl-sid
|
||||
|
||||
Another Docker-related project:
|
||||
|
||||
https://github.com/ozzyjohnson/docker-afl
|
||||
|
||||
### afl-monitor (Paul S. Ziegler)
|
||||
|
||||
Provides more detailed and versatile statistics about your running AFL jobs.
|
||||
|
||||
https://github.com/reflare/afl-monitor
|
||||
|
||||
### FEXM (Security in Telecommunications)
|
||||
|
||||
Fully automated fuzzing framework, based on AFL
|
||||
|
||||
https://github.com/fgsect/fexm
|
||||
|
||||
## Crash triage, coverage analysis, and other companion tools:
|
||||
|
||||
### afl-crash-analyzer (Tobias Ospelt)
|
||||
|
||||
Makes it easier to navigate and annotate crashing test cases.
|
||||
|
||||
https://github.com/floyd-fuh/afl-crash-analyzer/
|
||||
|
||||
### Crashwalk (Ben Nagy)
|
||||
|
||||
AFL-aware tool to annotate and sort through crashing test cases.
|
||||
|
||||
https://github.com/bnagy/crashwalk
|
||||
|
||||
### afl-cov (Michael Rash)
|
||||
|
||||
Produces human-readable coverage data based on the output queue of afl-fuzz.
|
||||
|
||||
https://github.com/mrash/afl-cov
|
||||
|
||||
### afl-sancov (Bhargava Shastry)
|
||||
|
||||
Similar to afl-cov, but uses clang sanitizer instrumentation.
|
||||
|
||||
https://github.com/bshastry/afl-sancov
|
||||
|
||||
### RecidiVM (Jakub Wilk)
|
||||
|
||||
Makes it easy to estimate memory usage limits when fuzzing with ASAN or MSAN.
|
||||
|
||||
http://jwilk.net/software/recidivm
|
||||
|
||||
### aflize (Jacek Wielemborek)
|
||||
|
||||
Automatically build AFL-enabled versions of Debian packages.
|
||||
|
||||
https://github.com/d33tah/aflize
|
||||
|
||||
### afl-ddmin-mod (Markus Teufelberger)
|
||||
|
||||
A variant of afl-tmin that uses a more sophisticated (but slower)
|
||||
minimization algorithm.
|
||||
|
||||
https://github.com/MarkusTeufelberger/afl-ddmin-mod
|
||||
|
||||
### afl-kit (Kuang-che Wu)
|
||||
|
||||
Replacements for afl-cmin and afl-tmin with additional features, such
|
||||
as the ability to filter crashes based on stderr patterns.
|
||||
|
||||
https://github.com/kcwu/afl-kit
|
||||
|
||||
## Narrow-purpose or experimental:
|
||||
|
||||
### Cygwin support (Ali Rizvi-Santiago)
|
||||
|
||||
Pretty self-explanatory. As per the author, this "mostly" ports AFL to
|
||||
Windows. Field reports welcome!
|
||||
|
||||
https://github.com/arizvisa/afl-cygwin
|
||||
|
||||
### Pause and resume scripts (Ben Nagy)
|
||||
|
||||
Simple automation to suspend and resume groups of fuzzing jobs.
|
||||
|
||||
https://github.com/bnagy/afl-trivia
|
||||
|
||||
### Static binary-only instrumentation (Aleksandar Nikolich)
|
||||
|
||||
Allows black-box binaries to be instrumented statically (i.e., by modifying
|
||||
the binary ahead of time, rather than translating it on the fly). The author
|
||||
reports better performance compared to QEMU, but occasional translation
|
||||
errors with stripped binaries.
|
||||
|
||||
https://github.com/vanhauser-thc/afl-dyninst
|
||||
|
||||
### AFL PIN (Parker Thompson)
|
||||
|
||||
Early-stage Intel PIN instrumentation support (from before we settled on
|
||||
faster-running QEMU).
|
||||
|
||||
https://github.com/mothran/aflpin
|
||||
|
||||
### AFL-style instrumentation in llvm (Kostya Serebryany)
|
||||
|
||||
Allows AFL-equivalent instrumentation to be injected at compiler level.
|
||||
This is currently not supported by AFL as-is, but may be useful in other
|
||||
projects.
|
||||
|
||||
https://code.google.com/p/address-sanitizer/wiki/AsanCoverage#Coverage_counters
|
||||
|
||||
### AFL JS (Han Choongwoo)
|
||||
|
||||
One-off optimizations to speed up the fuzzing of JavaScriptCore (now likely
|
||||
superseded by LLVM deferred forkserver init - see README.llvm.md).
|
||||
|
||||
https://github.com/tunz/afl-fuzz-js
|
||||
|
||||
### AFL harness for fwknop (Michael Rash)
|
||||
|
||||
An example of a fairly involved integration with AFL.
|
||||
|
||||
https://github.com/mrash/fwknop/tree/master/test/afl
|
||||
|
||||
### Building harnesses for DNS servers (Jonathan Foote, Ron Bowes)
|
||||
|
||||
Two articles outlining the general principles and showing some example code.
|
||||
|
||||
https://www.fastly.com/blog/how-to-fuzz-server-american-fuzzy-lop
|
||||
https://goo.gl/j9EgFf
|
||||
|
||||
### Fuzzer shell for SQLite (Richard Hipp)
|
||||
|
||||
A simple SQL shell designed specifically for fuzzing the underlying library.
|
||||
|
||||
http://www.sqlite.org/src/artifact/9e7e273da2030371
|
||||
|
||||
### Support for Python mutation modules (Christian Holler)
|
||||
|
||||
now integrated in AFL++, originally from here
|
||||
https://github.com/choller/afl/blob/master/docs/mozilla/python_modules.txt
|
||||
|
||||
### Support for selective instrumentation (Christian Holler)
|
||||
|
||||
now integrated in AFL++, originally from here
|
||||
https://github.com/choller/afl/blob/master/docs/mozilla/partial_instrumentation.txt
|
||||
|
||||
### Syzkaller (Dmitry Vyukov)
|
||||
|
||||
A similar guided approach as applied to fuzzing syscalls:
|
||||
|
||||
https://github.com/google/syzkaller/wiki/Found-Bugs
|
||||
https://github.com/dvyukov/linux/commit/33787098ffaaa83b8a7ccf519913ac5fd6125931
|
||||
http://events.linuxfoundation.org/sites/events/files/slides/AFL%20filesystem%20fuzzing%2C%20Vault%202016_0.pdf
|
||||
|
||||
|
||||
### Kernel Snapshot Fuzzing using Unicornafl (Security in Telecommunications)
|
||||
|
||||
https://github.com/fgsect/unicorefuzz
|
||||
|
||||
### Android support (ele7enxxh)
|
||||
|
||||
Based on a somewhat dated version of AFL:
|
||||
|
||||
https://github.com/ele7enxxh/android-afl
|
||||
|
||||
### CGI wrapper (floyd)
|
||||
|
||||
Facilitates the testing of CGI scripts.
|
||||
|
||||
https://github.com/floyd-fuh/afl-cgi-wrapper
|
||||
|
||||
### Fuzzing difficulty estimation (Marcel Boehme)
|
||||
|
||||
A fork of AFL that tries to quantify the likelihood of finding additional
|
||||
paths or crashes at any point in a fuzzing job.
|
||||
|
||||
https://github.com/mboehme/pythia
|
@ -1,444 +0,0 @@
|
||||
# Understanding the status screen
|
||||
|
||||
This document provides an overview of the status screen - plus tips for
|
||||
troubleshooting any warnings and red text shown in the UI. See README.md for
|
||||
the general instruction manual.
|
||||
|
||||
## A note about colors
|
||||
|
||||
The status screen and error messages use colors to keep things readable and
|
||||
attract your attention to the most important details. For example, red almost
|
||||
always means "consult this doc" :-)
|
||||
|
||||
Unfortunately, the UI will render correctly only if your terminal is using
|
||||
traditional un*x palette (white text on black background) or something close
|
||||
to that.
|
||||
|
||||
If you are using inverse video, you may want to change your settings, say:
|
||||
|
||||
- For GNOME Terminal, go to `Edit > Profile` preferences, select the "colors" tab, and from the list of built-in schemes, choose "white on black".
|
||||
- For the MacOS X Terminal app, open a new window using the "Pro" scheme via the `Shell > New Window` menu (or make "Pro" your default).
|
||||
|
||||
Alternatively, if you really like your current colors, you can edit config.h
|
||||
to comment out USE_COLORS, then do `make clean all`.
|
||||
|
||||
I'm not aware of any other simple way to make this work without causing
|
||||
other side effects - sorry about that.
|
||||
|
||||
With that out of the way, let's talk about what's actually on the screen...
|
||||
|
||||
### The status bar
|
||||
|
||||
```
|
||||
american fuzzy lop ++3.01a (default) [fast] {0}
|
||||
```
|
||||
|
||||
The top line shows you which mode afl-fuzz is running in
|
||||
(normal: "american fuzy lop", crash exploration mode: "peruvian rabbit mode")
|
||||
and the version of afl++.
|
||||
Next to the version is the banner, which, if not set with -T by hand, will
|
||||
either show the binary name being fuzzed, or the -M/-S main/secondary name for
|
||||
parallel fuzzing.
|
||||
Second to last is the power schedule mode being run (default: fast).
|
||||
Finally, the last item is the CPU id.
|
||||
|
||||
### Process timing
|
||||
|
||||
```
|
||||
+----------------------------------------------------+
|
||||
| run time : 0 days, 8 hrs, 32 min, 43 sec |
|
||||
| last new path : 0 days, 0 hrs, 6 min, 40 sec |
|
||||
| last uniq crash : none seen yet |
|
||||
| last uniq hang : 0 days, 1 hrs, 24 min, 32 sec |
|
||||
+----------------------------------------------------+
|
||||
```
|
||||
|
||||
This section is fairly self-explanatory: it tells you how long the fuzzer has
|
||||
been running and how much time has elapsed since its most recent finds. This is
|
||||
broken down into "paths" (a shorthand for test cases that trigger new execution
|
||||
patterns), crashes, and hangs.
|
||||
|
||||
When it comes to timing: there is no hard rule, but most fuzzing jobs should be
|
||||
expected to run for days or weeks; in fact, for a moderately complex project, the
|
||||
first pass will probably take a day or so. Every now and then, some jobs
|
||||
will be allowed to run for months.
|
||||
|
||||
There's one important thing to watch out for: if the tool is not finding new
|
||||
paths within several minutes of starting, you're probably not invoking the
|
||||
target binary correctly and it never gets to parse the input files we're
|
||||
throwing at it; other possible explanations are that the default memory limit
|
||||
(`-m`) is too restrictive, and the program exits after failing to allocate a
|
||||
buffer very early on; or that the input files are patently invalid and always
|
||||
fail a basic header check.
|
||||
|
||||
If there are no new paths showing up for a while, you will eventually see a big
|
||||
red warning in this section, too :-)
|
||||
|
||||
### Overall results
|
||||
|
||||
```
|
||||
+-----------------------+
|
||||
| cycles done : 0 |
|
||||
| total paths : 2095 |
|
||||
| uniq crashes : 0 |
|
||||
| uniq hangs : 19 |
|
||||
+-----------------------+
|
||||
```
|
||||
|
||||
The first field in this section gives you the count of queue passes done so far - that is, the number of times the fuzzer went over all the interesting test
|
||||
cases discovered so far, fuzzed them, and looped back to the very beginning.
|
||||
Every fuzzing session should be allowed to complete at least one cycle; and
|
||||
ideally, should run much longer than that.
|
||||
|
||||
As noted earlier, the first pass can take a day or longer, so sit back and
|
||||
relax.
|
||||
|
||||
To help make the call on when to hit `Ctrl-C`, the cycle counter is color-coded.
|
||||
It is shown in magenta during the first pass, progresses to yellow if new finds
|
||||
are still being made in subsequent rounds, then blue when that ends - and
|
||||
finally, turns green after the fuzzer hasn't been seeing any action for a
|
||||
longer while.
|
||||
|
||||
The remaining fields in this part of the screen should be pretty obvious:
|
||||
there's the number of test cases ("paths") discovered so far, and the number of
|
||||
unique faults. The test cases, crashes, and hangs can be explored in real-time
|
||||
by browsing the output directory, as discussed in README.md.
|
||||
|
||||
### Cycle progress
|
||||
|
||||
```
|
||||
+-------------------------------------+
|
||||
| now processing : 1296 (61.86%) |
|
||||
| paths timed out : 0 (0.00%) |
|
||||
+-------------------------------------+
|
||||
```
|
||||
|
||||
This box tells you how far along the fuzzer is with the current queue cycle: it
|
||||
shows the ID of the test case it is currently working on, plus the number of
|
||||
inputs it decided to ditch because they were persistently timing out.
|
||||
|
||||
The "*" suffix sometimes shown in the first line means that the currently
|
||||
processed path is not "favored" (a property discussed later on).
|
||||
|
||||
### Map coverage
|
||||
|
||||
```
|
||||
+--------------------------------------+
|
||||
| map density : 10.15% / 29.07% |
|
||||
| count coverage : 4.03 bits/tuple |
|
||||
+--------------------------------------+
|
||||
```
|
||||
|
||||
The section provides some trivia about the coverage observed by the
|
||||
instrumentation embedded in the target binary.
|
||||
|
||||
The first line in the box tells you how many branch tuples we have already
|
||||
hit, in proportion to how much the bitmap can hold. The number on the left
|
||||
describes the current input; the one on the right is the value for the entire
|
||||
input corpus.
|
||||
|
||||
Be wary of extremes:
|
||||
|
||||
- Absolute numbers below 200 or so suggest one of three things: that the
|
||||
program is extremely simple; that it is not instrumented properly (e.g.,
|
||||
due to being linked against a non-instrumented copy of the target
|
||||
library); or that it is bailing out prematurely on your input test cases.
|
||||
The fuzzer will try to mark this in pink, just to make you aware.
|
||||
- Percentages over 70% may very rarely happen with very complex programs
|
||||
that make heavy use of template-generated code.
|
||||
Because high bitmap density makes it harder for the fuzzer to reliably
|
||||
discern new program states, I recommend recompiling the binary with
|
||||
`AFL_INST_RATIO=10` or so and trying again (see env_variables.md and the sketch below).
|
||||
The fuzzer will flag high percentages in red. Chances are, you will never
|
||||
see that unless you're fuzzing extremely hairy software (say, v8, perl,
|
||||
ffmpeg).
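A sketch of the recompilation step mentioned in the last item (the value and build commands are illustrative):

```bash
# Lower the share of branches that receive instrumentation (support for
# AFL_INST_RATIO depends on the instrumentation mode, see env_variables.md).
AFL_INST_RATIO=10 CC=afl-clang-fast ./configure
make clean all
```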
|
||||
|
||||
The other line deals with the variability in tuple hit counts seen in the
|
||||
binary. In essence, if every taken branch is always taken a fixed number of
|
||||
times for all the inputs we have tried, this will read `1.00`. As we manage
|
||||
to trigger other hit counts for every branch, the needle will start to move
|
||||
toward `8.00` (every bit in the 8-bit map hit), but will probably never
|
||||
reach that extreme.
|
||||
|
||||
Together, the values can be useful for comparing the coverage of several
|
||||
different fuzzing jobs that rely on the same instrumented binary.
|
||||
|
||||
### Stage progress
|
||||
|
||||
```
|
||||
+-------------------------------------+
|
||||
| now trying : interest 32/8 |
|
||||
| stage execs : 3996/34.4k (11.62%) |
|
||||
| total execs : 27.4M |
|
||||
| exec speed : 891.7/sec |
|
||||
+-------------------------------------+
|
||||
```
|
||||
|
||||
This part gives you an in-depth peek at what the fuzzer is actually doing right
|
||||
now. It tells you about the current stage, which can be any of:
|
||||
|
||||
- calibration - a pre-fuzzing stage where the execution path is examined
|
||||
to detect anomalies, establish baseline execution speed, and so on. Executed
|
||||
very briefly whenever a new find is being made.
|
||||
- trim L/S - another pre-fuzzing stage where the test case is trimmed to the
|
||||
shortest form that still produces the same execution path. The length (L)
|
||||
and stepover (S) are chosen in general relationship to file size.
|
||||
- bitflip L/S - deterministic bit flips. There are L bits toggled at any given
|
||||
time, walking the input file with S-bit increments. The current L/S variants
|
||||
are: `1/1`, `2/1`, `4/1`, `8/8`, `16/8`, `32/8`.
|
||||
- arith L/8 - deterministic arithmetics. The fuzzer tries to subtract or add
|
||||
small integers to 8-, 16-, and 32-bit values. The stepover is always 8 bits.
|
||||
- interest L/8 - deterministic value overwrite. The fuzzer has a list of known
|
||||
"interesting" 8-, 16-, and 32-bit values to try. The stepover is 8 bits.
|
||||
- extras - deterministic injection of dictionary terms. This can be shown as
|
||||
"user" or "auto", depending on whether the fuzzer is using a user-supplied
|
||||
dictionary (`-x`) or an auto-created one. You will also see "over" or "insert",
|
||||
depending on whether the dictionary words overwrite existing data or are
|
||||
inserted by offsetting the remaining data to accommodate their length.
|
||||
- havoc - a sort-of-fixed-length cycle with stacked random tweaks. The
|
||||
operations attempted during this stage include bit flips, overwrites with
|
||||
random and "interesting" integers, block deletion, block duplication, plus
|
||||
assorted dictionary-related operations (if a dictionary is supplied in the
|
||||
first place).
|
||||
- splice - a last-resort strategy that kicks in after the first full queue
|
||||
cycle with no new paths. It is equivalent to 'havoc', except that it first
|
||||
splices together two random inputs from the queue at some arbitrarily
|
||||
selected midpoint.
|
||||
- sync - a stage used only when `-M` or `-S` is set (see parallel_fuzzing.md).
|
||||
No real fuzzing is involved, but the tool scans the output from other
|
||||
fuzzers and imports test cases as necessary. The first time this is done,
|
||||
it may take several minutes or so.
|
||||
|
||||
The remaining fields should be fairly self-evident: there's the exec count
|
||||
progress indicator for the current stage, a global exec counter, and a
|
||||
benchmark for the current program execution speed. This may fluctuate from
|
||||
one test case to another, but the benchmark should be ideally over 500 execs/sec
|
||||
most of the time - and if it stays below 100, the job will probably take very
|
||||
long.
|
||||
|
||||
The fuzzer will explicitly warn you about slow targets, too. If this happens,
|
||||
see the [perf_tips.md](perf_tips.md) file included with the fuzzer for ideas on how to speed
|
||||
things up.
|
||||
|
||||
### Findings in depth
|
||||
|
||||
```
|
||||
+--------------------------------------+
|
||||
| favored paths : 879 (41.96%) |
|
||||
| new edges on : 423 (20.19%) |
|
||||
| total crashes : 0 (0 unique) |
|
||||
| total tmouts : 24 (19 unique) |
|
||||
+--------------------------------------+
|
||||
```
|
||||
|
||||
This gives you several metrics that are of interest mostly to complete nerds.
|
||||
The section includes the number of paths that the fuzzer likes the most based
|
||||
on a minimization algorithm baked into the code (these will get considerably
|
||||
more air time), and the number of test cases that actually resulted in better
|
||||
edge coverage (versus just pushing the branch hit counters up). There are also
|
||||
additional, more detailed counters for crashes and timeouts.
|
||||
|
||||
Note that the timeout counter is somewhat different from the hang counter; this
|
||||
one includes all test cases that exceeded the timeout, even if they did not
|
||||
exceed it by a margin sufficient to be classified as hangs.
|
||||
|
||||
### Fuzzing strategy yields
|
||||
|
||||
```
|
||||
+-----------------------------------------------------+
|
||||
| bit flips : 57/289k, 18/289k, 18/288k |
|
||||
| byte flips : 0/36.2k, 4/35.7k, 7/34.6k |
|
||||
| arithmetics : 53/2.54M, 0/537k, 0/55.2k |
|
||||
| known ints : 8/322k, 12/1.32M, 10/1.70M |
|
||||
| dictionary : 9/52k, 1/53k, 1/24k |
|
||||
|havoc/splice : 1903/20.0M, 0/0 |
|
||||
|py/custom/rq : unused, 53/2.54M, unused |
|
||||
| trim/eff : 20.31%/9201, 17.05% |
|
||||
+-----------------------------------------------------+
|
||||
```
|
||||
|
||||
This is just another nerd-targeted section keeping track of how many paths we
|
||||
have netted, in proportion to the number of execs attempted, for each of the
|
||||
fuzzing strategies discussed earlier on. This serves to convincingly validate
|
||||
assumptions about the usefulness of the various approaches taken by afl-fuzz.
|
||||
|
||||
The trim strategy stats in this section are a bit different than the rest.
|
||||
The first number in this line shows the ratio of bytes removed from the input
|
||||
files; the second one corresponds to the number of execs needed to achieve this
|
||||
goal. Finally, the third number shows the proportion of bytes that, although
|
||||
not possible to remove, were deemed to have no effect and were excluded from
|
||||
some of the more expensive deterministic fuzzing steps.
|
||||
|
||||
Note that when deterministic mutation mode is off (which is the default
|
||||
because it is not very efficient) the first five lines display
|
||||
"disabled (default, enable with -D)".
|
||||
|
||||
Only the strategies that are activated will have their counters shown.
|
||||
|
||||
### Path geometry
|
||||
|
||||
```
|
||||
+---------------------+
|
||||
| levels : 5 |
|
||||
| pending : 1570 |
|
||||
| pend fav : 583 |
|
||||
| own finds : 0 |
|
||||
| imported : 0 |
|
||||
| stability : 100.00% |
|
||||
+---------------------+
|
||||
```
|
||||
|
||||
The first field in this section tracks the path depth reached through the
|
||||
guided fuzzing process. In essence: the initial test cases supplied by the
|
||||
user are considered "level 1". The test cases that can be derived from that
|
||||
through traditional fuzzing are considered "level 2"; the ones derived by
|
||||
using these as inputs to subsequent fuzzing rounds are "level 3"; and so forth.
|
||||
The maximum depth is therefore a rough proxy for how much value you're getting
|
||||
out of the instrumentation-guided approach taken by afl-fuzz.
|
||||
|
||||
The next field shows you the number of inputs that have not gone through any
|
||||
fuzzing yet. The same stat is also given for "favored" entries that the fuzzer
|
||||
really wants to get to in this queue cycle (the non-favored entries may have to
|
||||
wait a couple of cycles to get their chance).
|
||||
|
||||
Next, we have the number of new paths found during this fuzzing session and
|
||||
imported from other fuzzer instances when doing parallelized fuzzing; and the
|
||||
extent to which identical inputs appear to sometimes produce variable behavior
|
||||
in the tested binary.
|
||||
|
||||
That last bit is actually fairly interesting: it measures the consistency of
|
||||
observed traces. If a program always behaves the same for the same input data,
|
||||
it will earn a score of 100%. When the value is lower but still shown in purple,
|
||||
the fuzzing process is unlikely to be negatively affected. If it goes into red,
|
||||
you may be in trouble, since AFL will have difficulty discerning between
|
||||
meaningful and "phantom" effects of tweaking the input file.
|
||||
|
||||
Now, most targets will just get a 100% score, but when you see lower figures,
|
||||
there are several things to look at:
|
||||
|
||||
- The use of uninitialized memory in conjunction with some intrinsic sources
|
||||
of entropy in the tested binary. Harmless to AFL, but could be indicative
|
||||
of a security bug.
|
||||
- Attempts to manipulate persistent resources, such as left over temporary
|
||||
files or shared memory objects. This is usually harmless, but you may want
|
||||
to double-check to make sure the program isn't bailing out prematurely.
|
||||
Running out of disk space, SHM handles, or other global resources can
|
||||
trigger this, too.
|
||||
- Hitting some functionality that is actually designed to behave randomly.
|
||||
Generally harmless. For example, when fuzzing sqlite, an input like
|
||||
`select random();` will trigger a variable execution path.
|
||||
- Multiple threads executing at once in semi-random order. This is harmless
|
||||
when the 'stability' metric stays over 90% or so, but can become an issue
|
||||
if not. Here's what to try:
|
||||
* Use afl-clang-fast from [instrumentation](../instrumentation/) - it uses a thread-local tracking
|
||||
model that is less prone to concurrency issues,
|
||||
* See if the target can be compiled or run without threads. Common
|
||||
`./configure` options include `--without-threads`, `--disable-pthreads`, or
|
||||
`--disable-openmp`.
|
||||
* Replace pthreads with GNU Pth (https://www.gnu.org/software/pth/), which
|
||||
allows you to use a deterministic scheduler.
|
||||
- In persistent mode, minor drops in the "stability" metric can be normal,
|
||||
because not all the code behaves identically when re-entered; but major
|
||||
dips may signify that the code within `__AFL_LOOP()` is not behaving
|
||||
correctly on subsequent iterations (e.g., due to incomplete clean-up or
|
||||
reinitialization of the state) and that most of the fuzzing effort goes
|
||||
to waste.
|
||||
|
||||
The paths where variable behavior is detected are marked with a matching entry
|
||||
in the `<out_dir>/queue/.state/variable_behavior/` directory, so you can look
|
||||
them up easily.
|
||||
|
||||
### CPU load
|
||||
|
||||
```
|
||||
[cpu: 25%]
|
||||
```
|
||||
|
||||
This tiny widget shows the apparent CPU utilization on the local system. It is
|
||||
calculated by taking the number of processes in the "runnable" state, and then
|
||||
comparing it to the number of logical cores on the system.
|
||||
|
||||
If the value is shown in green, you are using fewer CPU cores than available on
|
||||
your system and can probably parallelize to improve performance; for tips on
|
||||
how to do that, see parallel_fuzzing.md.
|
||||
|
||||
If the value is shown in red, your CPU is *possibly* oversubscribed, and
|
||||
running additional fuzzers may not give you any benefits.
|
||||
|
||||
Of course, this benchmark is very simplistic; it tells you how many processes
|
||||
are ready to run, but not how resource-hungry they may be. It also doesn't
|
||||
distinguish between physical cores, logical cores, and virtualized CPUs; the
|
||||
performance characteristics of each of these will differ quite a bit.
|
||||
|
||||
If you want a more accurate measurement, you can run the `afl-gotcpu` utility from the command line.
|
||||
|
||||
### Addendum: status and plot files
|
||||
|
||||
For unattended operation, some of the key status screen information can be also
|
||||
found in a machine-readable format in the fuzzer_stats file in the output
|
||||
directory. This includes:
|
||||
|
||||
- `start_time` - unix time indicating the start time of afl-fuzz
|
||||
- `last_update` - unix time corresponding to the last update of this file
|
||||
- `run_time` - run time in seconds to the last update of this file
|
||||
- `fuzzer_pid` - PID of the fuzzer process
|
||||
- `cycles_done` - queue cycles completed so far
|
||||
- `cycles_wo_finds` - number of cycles without any new paths found
|
||||
- `execs_done` - number of execve() calls attempted
|
||||
- `execs_per_sec` - overall number of execs per second
|
||||
- `paths_total` - total number of entries in the queue
|
||||
- `paths_favored` - number of queue entries that are favored
|
||||
- `paths_found` - number of entries discovered through local fuzzing
|
||||
- `paths_imported` - number of entries imported from other instances
|
||||
- `max_depth` - number of levels in the generated data set
|
||||
- `cur_path` - currently processed entry number
|
||||
- `pending_favs` - number of favored entries still waiting to be fuzzed
|
||||
- `pending_total` - number of all entries waiting to be fuzzed
|
||||
- `variable_paths` - number of test cases showing variable behavior
|
||||
- `stability` - percentage of bitmap bytes that behave consistently
|
||||
- `bitmap_cvg` - percentage of edge coverage found in the map so far
|
||||
- `unique_crashes` - number of unique crashes recorded
|
||||
- `unique_hangs` - number of unique hangs encountered
|
||||
- `last_path` - seconds since the last path was found
|
||||
- `last_crash` - seconds since the last crash was found
|
||||
- `last_hang` - seconds since the last hang was found
|
||||
- `execs_since_crash` - execs since the last crash was found
|
||||
- `exec_timeout` - the -t command line value
|
||||
- `slowest_exec_ms` - real time of the slowest execution in ms
|
||||
- `peak_rss_mb` - max rss usage reached during fuzzing in MB
|
||||
- `edges_found` - how many edges have been found
|
||||
- `var_byte_count` - how many edges are non-deterministic
|
||||
- `afl_banner` - banner text (e.g. the target name)
|
||||
- `afl_version` - the version of afl used
|
||||
- `target_mode` - default, persistent, qemu, unicorn, non-instrumented
|
||||
- `command_line` - full command line used for the fuzzing session
|
||||
|
||||
Most of these map directly to the UI elements discussed earlier on.
|
||||
|
||||
On top of that, you can also find an entry called `plot_data`, containing a
|
||||
plottable history for most of these fields. If you have gnuplot installed, you
|
||||
can turn this into a nice progress report with the included `afl-plot` tool.
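For instance, assuming the default output layout (the directory names are illustrative):

```bash
# Inspect the machine-readable stats and render progress graphs with gnuplot.
cat out/default/fuzzer_stats
afl-plot out/default /tmp/afl_graphs
```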
|
||||
|
||||
|
||||
### Addendum: Automatically send metrics with StatsD
|
||||
|
||||
In a CI environment or when running multiple fuzzers, it can be tedious to
|
||||
log into each of them or deploy scripts to read the fuzzer statistics.
|
||||
Using `AFL_STATSD` (and the other related environment variables `AFL_STATSD_HOST`,
|
||||
`AFL_STATSD_PORT`, `AFL_STATSD_TAGS_FLAVOR`) you can automatically send metrics
|
||||
to your favorite StatsD server. Depending on your StatsD server you will be able
|
||||
to monitor, trigger alerts or perform actions based on these metrics (e.g: alert on
|
||||
slow exec/s for a new build, threshold of crashes, time since last crash > X, etc).
|
||||
|
||||
The selected metrics are a subset of all the metrics found in the status and in
|
||||
the plot file. The list is the following: `cycle_done`, `cycles_wo_finds`,
|
||||
`execs_done`,`execs_per_sec`, `paths_total`, `paths_favored`, `paths_found`,
|
||||
`paths_imported`, `max_depth`, `cur_path`, `pending_favs`, `pending_total`,
|
||||
`variable_paths`, `unique_crashes`, `unique_hangs`, `total_crashes`,
|
||||
`slowest_exec_ms`, `edges_found`, `var_byte_count`, `havoc_expansion`.
|
||||
Their definitions can be found in the addendum above.
|
||||
|
||||
When using multiple fuzzer instances with StatsD it is *strongly* recommended to set up
|
||||
the flavor (AFL_STATSD_TAGS_FLAVOR) to match your StatsD server. This will allow you
|
||||
to see individual fuzzer performance, detect bad ones, see the progress of each
|
||||
strategy...
|
@ -1,550 +0,0 @@
|
||||
# Technical "whitepaper" for afl-fuzz
|
||||
|
||||
|
||||
NOTE: this document is rather outdated!
|
||||
|
||||
|
||||
This document provides a quick overview of the guts of American Fuzzy Lop.
|
||||
See README.md for the general instruction manual; and for a discussion of
|
||||
motivations and design goals behind AFL, see historical_notes.md.
|
||||
|
||||
## 0. Design statement
|
||||
|
||||
American Fuzzy Lop does its best not to focus on any singular principle of
|
||||
operation and not be a proof-of-concept for any specific theory. The tool can
|
||||
be thought of as a collection of hacks that have been tested in practice,
|
||||
found to be surprisingly effective, and have been implemented in the simplest,
|
||||
most robust way I could think of at the time.
|
||||
|
||||
Many of the resulting features are made possible thanks to the availability of
|
||||
lightweight instrumentation that served as a foundation for the tool, but this
|
||||
mechanism should be thought of merely as a means to an end. The only true
|
||||
governing principles are speed, reliability, and ease of use.
|
||||
|
||||
## 1. Coverage measurements
|
||||
|
||||
The instrumentation injected into compiled programs captures branch (edge)
|
||||
coverage, along with coarse branch-taken hit counts. The code injected at
|
||||
branch points is essentially equivalent to:
|
||||
|
||||
```c
|
||||
cur_location = <COMPILE_TIME_RANDOM>;
|
||||
shared_mem[cur_location ^ prev_location]++;
|
||||
prev_location = cur_location >> 1;
|
||||
```
|
||||
|
||||
The `cur_location` value is generated randomly to simplify the process of
|
||||
linking complex projects and keep the XOR output distributed uniformly.
|
||||
|
||||
The `shared_mem[]` array is a 64 kB SHM region passed to the instrumented binary
|
||||
by the caller. Every byte set in the output map can be thought of as a hit for
|
||||
a particular (`branch_src`, `branch_dst`) tuple in the instrumented code.
|
||||
|
||||
The size of the map is chosen so that collisions are sporadic with almost all
|
||||
of the intended targets, which usually sport between 2k and 10k discoverable
|
||||
branch points:
|
||||
|
||||
```
|
||||
Branch cnt | Colliding tuples | Example targets
|
||||
------------+------------------+-----------------
|
||||
1,000 | 0.75% | giflib, lzo
|
||||
2,000 | 1.5% | zlib, tar, xz
|
||||
5,000 | 3.5% | libpng, libwebp
|
||||
10,000 | 7% | libxml
|
||||
20,000 | 14% | sqlite
|
||||
50,000 | 30% | -
|
||||
```
|
||||
|
||||
At the same time, its size is small enough to allow the map to be analyzed
|
||||
in a matter of microseconds on the receiving end, and to effortlessly fit
|
||||
within L2 cache.
|
||||
|
||||
This form of coverage provides considerably more insight into the execution
|
||||
path of the program than simple block coverage. In particular, it trivially
|
||||
distinguishes between the following execution traces:
|
||||
|
||||
```
|
||||
A -> B -> C -> D -> E (tuples: AB, BC, CD, DE)
|
||||
A -> B -> D -> C -> E (tuples: AB, BD, DC, CE)
|
||||
```
|
||||
|
||||
This aids the discovery of subtle fault conditions in the underlying code,
|
||||
because security vulnerabilities are more often associated with unexpected
|
||||
or incorrect state transitions than with merely reaching a new basic block.
|
||||
|
||||
The reason for the shift operation in the last line of the pseudocode shown
|
||||
earlier in this section is to preserve the directionality of tuples (without
|
||||
this, A ^ B would be indistinguishable from B ^ A) and to retain the identity
|
||||
of tight loops (otherwise, A ^ A would be obviously equal to B ^ B).
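
For illustration, a tiny stand-alone program (with arbitrary example block IDs) showing that the shift is what keeps the two directions of an edge, and different self-loops, from colliding:

```c
#include <stdio.h>

int main(void) {
  unsigned int A = 0x1234, B = 0x5678;   /* arbitrary example block IDs */

  /* Without the shift, the two directions of an edge collide,
     and every tight loop maps to the same index (zero). */
  printf("A^B = %04x, B^A = %04x\n", A ^ B, B ^ A);
  printf("A^A = %04x, B^B = %04x\n", A ^ A, B ^ B);

  /* With prev_location shifted right by one bit, they stay distinct. */
  printf("A->B = %04x, B->A = %04x\n", (A >> 1) ^ B, (B >> 1) ^ A);
  printf("A->A = %04x, B->B = %04x\n", (A >> 1) ^ A, (B >> 1) ^ B);
  return 0;
}
```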
|
||||
|
||||
The absence of simple saturating arithmetic opcodes on Intel CPUs means that
|
||||
the hit counters can sometimes wrap around to zero. Since this is a fairly
|
||||
unlikely and localized event, it's seen as an acceptable performance trade-off.
|
||||
|
||||
## 2. Detecting new behaviors
|
||||
|
||||
The fuzzer maintains a global map of tuples seen in previous executions; this
|
||||
data can be rapidly compared with individual traces and updated in just a couple
|
||||
of dword- or qword-wide instructions and a simple loop.
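
A rough sketch of such a comparison, operating on qword-wide chunks of the 64 kB map (a simplified stand-in, not the exact afl-fuzz code; `virgin_bits` is assumed to start out as all ones):

```c
#include <stdint.h>
#include <stddef.h>

#define MAP_SIZE (1 << 16)

/* Scan the trace map word by word and clear any newly seen bits from the
   "virgin" map; returns nonzero if this execution produced tuples (or hit
   counts) not observed in any previous run. */
static int has_new_bits(uint8_t *trace_bits, uint8_t *virgin_bits) {
  uint64_t *cur = (uint64_t *)trace_bits;
  uint64_t *vir = (uint64_t *)virgin_bits;
  int found = 0;

  for (size_t i = 0; i < MAP_SIZE / 8; i++) {
    if (cur[i] & vir[i]) {   /* this word contains not-yet-seen bytes */
      vir[i] &= ~cur[i];     /* mark them as seen for future runs     */
      found = 1;
    }
  }
  return found;
}
```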
|
||||
|
||||
When a mutated input produces an execution trace containing new tuples, the
|
||||
corresponding input file is preserved and routed for additional processing
|
||||
later on (see section #3). Inputs that do not trigger new local-scale state
|
||||
transitions in the execution trace (i.e., produce no new tuples) are discarded,
|
||||
even if their overall control flow sequence is unique.
|
||||
|
||||
This approach allows for a very fine-grained and long-term exploration of
|
||||
program state while not having to perform any computationally intensive and
|
||||
fragile global comparisons of complex execution traces, and while avoiding the
|
||||
scourge of path explosion.
|
||||
|
||||
To illustrate the properties of the algorithm, consider that the second trace
|
||||
shown below would be considered substantially new because of the presence of
|
||||
new tuples (CA, AE):
|
||||
|
||||
```
|
||||
#1: A -> B -> C -> D -> E
|
||||
#2: A -> B -> C -> A -> E
|
||||
```
|
||||
|
||||
At the same time, with #2 processed, the following pattern will not be seen
|
||||
as unique, despite having a markedly different overall execution path:
|
||||
|
||||
```
|
||||
#3: A -> B -> C -> A -> B -> C -> A -> B -> C -> D -> E
|
||||
```
|
||||
|
||||
In addition to detecting new tuples, the fuzzer also considers coarse tuple
|
||||
hit counts. These are divided into several buckets:
|
||||
|
||||
```
|
||||
1, 2, 3, 4-7, 8-15, 16-31, 32-127, 128+
|
||||
```
|
||||
|
||||
To some extent, the number of buckets is an implementation artifact: it allows
|
||||
an in-place mapping of an 8-bit counter generated by the instrumentation to
|
||||
an 8-position bitmap relied on by the fuzzer executable to keep track of the
|
||||
already-seen execution counts for each tuple.
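
A minimal sketch of the bucketing step; afl-fuzz itself uses a precomputed 256-entry lookup table rather than a chain of comparisons:

```c
#include <stdint.h>

/* Map an 8-bit hit counter to one of the eight buckets, each represented
   as a single bit, so that two runs can be compared with plain bitwise
   operations. */
static uint8_t bucket(uint8_t hits) {
  if (hits == 0)   return 0;
  if (hits == 1)   return 1;     /* 1       */
  if (hits == 2)   return 2;     /* 2       */
  if (hits == 3)   return 4;     /* 3       */
  if (hits <= 7)   return 8;     /* 4-7     */
  if (hits <= 15)  return 16;    /* 8-15    */
  if (hits <= 31)  return 32;    /* 16-31   */
  if (hits <= 127) return 64;    /* 32-127  */
  return 128;                    /* 128+    */
}
```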
|
||||
|
||||
Changes within the range of a single bucket are ignored; transition from one
|
||||
bucket to another is flagged as an interesting change in program control flow,
|
||||
and is routed to the evolutionary process outlined in the section below.
|
||||
|
||||
The hit count behavior provides a way to distinguish between potentially
|
||||
interesting control flow changes, such as a block of code being executed
|
||||
twice when it was normally hit only once. At the same time, it is fairly
|
||||
insensitive to empirically less notable changes, such as a loop going from
|
||||
47 cycles to 48. The counters also provide some degree of "accidental"
|
||||
immunity against tuple collisions in dense trace maps.
|
||||
|
||||
The execution is policed fairly heavily through memory and execution time
|
||||
limits; by default, the timeout is set at 5x the initially-calibrated
|
||||
execution speed, rounded up to 20 ms. The aggressive timeouts are meant to
|
||||
prevent dramatic fuzzer performance degradation by descending into tarpits
|
||||
that, say, improve coverage by 1% while being 100x slower; we pragmatically
|
||||
reject them and hope that the fuzzer will find a less expensive way to reach
|
||||
the same code. Empirical testing strongly suggests that more generous time
|
||||
limits are not worth the cost.
|
||||
|
||||
## 3. Evolving the input queue
|
||||
|
||||
Mutated test cases that produced new state transitions within the program are
|
||||
added to the input queue and used as a starting point for future rounds of
|
||||
fuzzing. They supplement, but do not automatically replace, existing finds.
|
||||
|
||||
In contrast to more greedy genetic algorithms, this approach allows the tool
|
||||
to progressively explore various disjoint and possibly mutually incompatible
|
||||
features of the underlying data format, as shown in this image:
|
||||
|
||||

|
||||
|
||||
Several practical examples of the results of this algorithm are discussed
|
||||
here:
|
||||
|
||||
http://lcamtuf.blogspot.com/2014/11/pulling-jpegs-out-of-thin-air.html
|
||||
http://lcamtuf.blogspot.com/2014/11/afl-fuzz-nobody-expects-cdata-sections.html
|
||||
|
||||
The synthetic corpus produced by this process is essentially a compact
|
||||
collection of "hmm, this does something new!" input files, and can be used to
|
||||
seed any other testing processes down the line (for example, to manually
|
||||
stress-test resource-intensive desktop apps).
|
||||
|
||||
With this approach, the queue for most targets grows to somewhere between 1k
|
||||
and 10k entries; approximately 10-30% of this is attributable to the discovery
|
||||
of new tuples, and the remainder is associated with changes in hit counts.
|
||||
|
||||
The following table compares the relative ability to discover file syntax and
|
||||
explore program states when using several different approaches to guided
|
||||
fuzzing. The instrumented target was GNU patch 2.7.3 compiled with `-O3` and
|
||||
seeded with a dummy text file; the session consisted of a single pass over the
|
||||
input queue with afl-fuzz:
|
||||
|
||||
```
|
||||
Fuzzer guidance | Blocks | Edges | Edge hit | Highest-coverage
|
||||
strategy used | reached | reached | cnt var | test case generated
|
||||
------------------+---------+---------+----------+---------------------------
|
||||
(Initial file) | 156 | 163 | 1.00 | (none)
|
||||
| | | |
|
||||
Blind fuzzing S | 182 | 205 | 2.23 | First 2 B of RCS diff
|
||||
Blind fuzzing L | 228 | 265 | 2.23 | First 4 B of -c mode diff
|
||||
Block coverage | 855 | 1,130 | 1.57 | Almost-valid RCS diff
|
||||
Edge coverage | 1,452 | 2,070 | 2.18 | One-chunk -c mode diff
|
||||
AFL model | 1,765 | 2,597 | 4.99 | Four-chunk -c mode diff
|
||||
```
|
||||
|
||||
The first entry for blind fuzzing ("S") corresponds to executing just a single
|
||||
round of testing; the second set of figures ("L") shows the fuzzer running in a
|
||||
loop for a number of execution cycles comparable with that of the instrumented
|
||||
runs, which required more time to fully process the growing queue.
|
||||
|
||||
Roughly similar results have been obtained in a separate experiment where the
|
||||
fuzzer was modified to compile out all the random fuzzing stages and leave just
|
||||
a series of rudimentary, sequential operations such as walking bit flips.
|
||||
Because this mode would be incapable of altering the size of the input file,
|
||||
the sessions were seeded with a valid unified diff:
|
||||
|
||||
```
|
||||
Queue extension | Blocks | Edges | Edge hit | Number of unique
|
||||
strategy used | reached | reached | cnt var | crashes found
|
||||
------------------+---------+---------+----------+------------------
|
||||
(Initial file) | 624 | 717 | 1.00 | -
|
||||
| | | |
|
||||
Blind fuzzing | 1,101 | 1,409 | 1.60 | 0
|
||||
Block coverage | 1,255 | 1,649 | 1.48 | 0
|
||||
Edge coverage | 1,259 | 1,734 | 1.72 | 0
|
||||
AFL model | 1,452 | 2,040 | 3.16 | 1
|
||||
```
|
||||
|
||||
As noted earlier on, some of the prior work on genetic fuzzing relied on
|
||||
maintaining a single test case and evolving it to maximize coverage. At least
|
||||
in the tests described above, this "greedy" approach appears to confer no
|
||||
substantial benefits over blind fuzzing strategies.
|
||||
|
||||
## 4. Culling the corpus
|
||||
|
||||
The progressive state exploration approach outlined above means that some of
|
||||
the test cases synthesized later on in the game may have edge coverage that
|
||||
is a strict superset of the coverage provided by their ancestors.
|
||||
|
||||
To optimize the fuzzing effort, AFL periodically re-evaluates the queue using a
|
||||
fast algorithm that selects a smaller subset of test cases that still cover
|
||||
every tuple seen so far, and whose characteristics make them particularly
|
||||
favorable to the tool.
|
||||
|
||||
The algorithm works by assigning every queue entry a score proportional to its
|
||||
execution latency and file size; and then selecting lowest-scoring candidates
|
||||
for each tuple.
|
||||
|
||||
The tuples are then processed sequentially using a simple workflow:
|
||||
|
||||
1) Find next tuple not yet in the temporary working set,
|
||||
2) Locate the winning queue entry for this tuple,
|
||||
3) Register *all* tuples present in that entry's trace in the working set,
|
||||
4) Go to #1 if there are any missing tuples in the set.
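
The workflow above boils down to a greedy set cover. A condensed sketch, using hypothetical bookkeeping (`top_rated[]` holding the lowest-scoring, i.e. fastest and smallest, entry for each tuple):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define MAP_SIZE (1 << 16)

struct queue_entry {
  uint8_t *trace;     /* bitmap of tuples hit by this entry's execution */
  int      favored;   /* set by the culling pass                        */
};

/* top_rated[i] points at the best entry covering tuple i, or is NULL if
   the tuple was never seen. */
void cull_queue(struct queue_entry **top_rated) {
  static uint8_t covered[MAP_SIZE];
  memset(covered, 0, sizeof(covered));   /* working set starts empty */

  for (size_t i = 0; i < MAP_SIZE; i++) {
    if (top_rated[i] && !covered[i]) {
      struct queue_entry *q = top_rated[i];
      /* Register *all* tuples hit by the winning entry, then favor it. */
      for (size_t j = 0; j < MAP_SIZE; j++)
        if (q->trace[j]) covered[j] = 1;
      q->favored = 1;
    }
  }
}
```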
|
||||
|
||||
The generated corpus of "favored" entries is usually 5-10x smaller than the
|
||||
starting data set. Non-favored entries are not discarded, but they are skipped
|
||||
with varying probabilities when encountered in the queue:
|
||||
|
||||
- If there are new, yet-to-be-fuzzed favorites present in the queue, 99%
|
||||
of non-favored entries will be skipped to get to the favored ones.
|
||||
- If there are no new favorites:
|
||||
* If the current non-favored entry was fuzzed before, it will be skipped
|
||||
95% of the time.
|
||||
* If it hasn't gone through any fuzzing rounds yet, the odds of skipping
|
||||
drop down to 75%.
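
A sketch of the resulting skip decision, using the probabilities quoted above (`favored`, `was_fuzzed`, and `pending_favored` are hypothetical flags):

```c
#include <stdlib.h>

static int should_skip(int favored, int was_fuzzed, int pending_favored) {
  if (favored) return 0;                            /* favored entries always run */
  if (pending_favored) return rand() % 100 < 99;    /* 99% skip                   */
  if (was_fuzzed)      return rand() % 100 < 95;    /* 95% skip                   */
  return rand() % 100 < 75;                         /* 75% skip                   */
}
```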
|
||||
|
||||
Based on empirical testing, this provides a reasonable balance between queue
|
||||
cycling speed and test case diversity.
|
||||
|
||||
Slightly more sophisticated but much slower culling can be performed on input
|
||||
or output corpora with `afl-cmin`. This tool permanently discards the redundant
|
||||
entries and produces a smaller corpus suitable for use with `afl-fuzz` or
|
||||
external tools.
|
||||
|
||||
## 5. Trimming input files
|
||||
|
||||
File size has a dramatic impact on fuzzing performance, both because large
|
||||
files make the target binary slower, and because they reduce the likelihood
|
||||
that a mutation would touch important format control structures, rather than
|
||||
redundant data blocks. This is discussed in more detail in perf_tips.md.
|
||||
|
||||
The possibility that the user will provide a low-quality starting corpus aside,
|
||||
some types of mutations can have the effect of iteratively increasing the size
|
||||
of the generated files, so it is important to counter this trend.
|
||||
|
||||
Luckily, the instrumentation feedback provides a simple way to automatically
|
||||
trim down input files while ensuring that the changes made to the files have no
|
||||
impact on the execution path.
|
||||
|
||||
The built-in trimmer in afl-fuzz attempts to sequentially remove blocks of data
|
||||
with variable length and stepover; any deletion that doesn't affect the checksum
|
||||
of the trace map is committed to disk. The trimmer is not designed to be
|
||||
particularly thorough; instead, it tries to strike a balance between precision
|
||||
and the number of `execve()` calls spent on the process, selecting the block size
|
||||
and stepover to match. The average per-file gains are around 5-20%.
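
A simplified sketch of this trimming pass, assuming a hypothetical `exec_checksum()` helper that runs the target and hashes the resulting trace map, and inputs no larger than 64 kB:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical helper: run the target on buf[0..len) and return a checksum
   of the resulting trace map. */
extern uint32_t exec_checksum(const uint8_t *buf, size_t len);

/* Try deleting blocks of decreasing size; keep any deletion that leaves
   the trace-map checksum unchanged. */
size_t trim_case(uint8_t *buf, size_t len) {
  uint32_t want = exec_checksum(buf, len);
  uint8_t  tmp[1 << 16];                  /* assumes inputs up to 64 kB */

  for (size_t block = len / 16; block >= 4; block /= 2) {
    size_t pos = 0;
    while (pos + block <= len) {
      size_t new_len = len - block;

      /* Candidate input with buf[pos .. pos+block) removed. */
      memcpy(tmp, buf, pos);
      memcpy(tmp + pos, buf + pos + block, len - pos - block);

      if (exec_checksum(tmp, new_len) == want) {
        memcpy(buf, tmp, new_len);        /* same path: commit the deletion */
        len = new_len;
      } else {
        pos += block;                     /* path changed: keep this block  */
      }
    }
  }
  return len;
}
```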
|
||||
|
||||
The standalone `afl-tmin` tool uses a more exhaustive, iterative algorithm, and
|
||||
also attempts to perform alphabet normalization on the trimmed files. The
|
||||
operation of `afl-tmin` is as follows.
|
||||
|
||||
First, the tool automatically selects the operating mode. If the initial input
|
||||
crashes the target binary, afl-tmin will run in non-instrumented mode, simply
|
||||
keeping any tweaks that produce a simpler file but still crash the target.
|
||||
The same mode is used for hangs, if `-H` (hang mode) is specified.
|
||||
If the target is non-crashing, the tool uses an instrumented mode and keeps only
|
||||
the tweaks that produce exactly the same execution path.
|
||||
|
||||
The actual minimization algorithm is:
|
||||
|
||||
1) Attempt to zero large blocks of data with large stepovers. Empirically,
|
||||
this is shown to reduce the number of execs by preempting finer-grained
|
||||
efforts later on.
|
||||
2) Perform a block deletion pass with decreasing block sizes and stepovers,
|
||||
binary-search-style.
|
||||
3) Perform alphabet normalization by counting unique characters and trying
|
||||
to bulk-replace each with a zero value.
|
||||
4) As a last resort, perform byte-by-byte normalization on non-zero bytes.
|
||||
|
||||
Instead of zeroing with a 0x00 byte, `afl-tmin` uses the ASCII digit '0'. This
|
||||
is done because such a modification is much less likely to interfere with
|
||||
text parsing, so it is more likely to result in successful minimization of
|
||||
text files.
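
A sketch of the alphabet-normalization step under the same assumptions as before (hypothetical `exec_checksum()` helper, inputs up to 64 kB):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

extern uint32_t exec_checksum(const uint8_t *buf, size_t len);  /* hypothetical */

/* For every byte value present in the file, try replacing all of its
   occurrences with '0' at once; keep the change only if the execution
   path is unaffected. */
void normalize_alphabet(uint8_t *buf, size_t len) {
  uint32_t want = exec_checksum(buf, len);
  uint8_t  saved[1 << 16];                /* assumes inputs up to 64 kB */

  for (int c = 1; c < 256; c++) {
    if (c == '0') continue;

    int present = 0;
    memcpy(saved, buf, len);
    for (size_t i = 0; i < len; i++)
      if (buf[i] == (uint8_t)c) { buf[i] = '0'; present = 1; }

    if (present && exec_checksum(buf, len) != want)
      memcpy(buf, saved, len);            /* path changed: undo */
  }
}
```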
|
||||
|
||||
The algorithm used here is less involved than some other test case
|
||||
minimization approaches proposed in academic work, but requires far fewer
|
||||
executions and tends to produce comparable results in most real-world
|
||||
applications.
|
||||
|
||||
## 6. Fuzzing strategies
|
||||
|
||||
The feedback provided by the instrumentation makes it easy to understand the
|
||||
value of various fuzzing strategies and optimize their parameters so that they
|
||||
work equally well across a wide range of file types. The strategies used by
|
||||
afl-fuzz are generally format-agnostic and are discussed in more detail here:
|
||||
|
||||
http://lcamtuf.blogspot.com/2014/08/binary-fuzzing-strategies-what-works.html
|
||||
|
||||
It is somewhat notable that especially early on, most of the work done by
|
||||
`afl-fuzz` is actually highly deterministic, and progresses to random stacked
|
||||
modifications and test case splicing only at a later stage. The deterministic
|
||||
strategies include:
|
||||
|
||||
- Sequential bit flips with varying lengths and stepovers,
|
||||
- Sequential addition and subtraction of small integers,
|
||||
- Sequential insertion of known interesting integers (`0`, `1`, `INT_MAX`, etc).
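
As a concrete example, the simplest of these stages, a walking single-bit flip with a one-bit stepover, can be sketched as follows (`run_one()` is a hypothetical callback that executes the target on the mutated buffer):

```c
#include <stdint.h>
#include <stddef.h>

extern void run_one(const uint8_t *buf, size_t len);   /* hypothetical */

/* Walking single-bit flip; afl-fuzz also performs 2- and 4-bit wide
   variants of the same walk. */
void bitflip_1_1(uint8_t *buf, size_t len) {
  for (size_t bit = 0; bit < len * 8; bit++) {
    buf[bit >> 3] ^= 128 >> (bit & 7);    /* flip one bit        */
    run_one(buf, len);                    /* try the mutated file */
    buf[bit >> 3] ^= 128 >> (bit & 7);    /* restore the original */
  }
}
```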
|
||||
|
||||
The purpose of opening with deterministic steps is related to their tendency to
|
||||
produce compact test cases and small diffs between the non-crashing and crashing
|
||||
inputs.
|
||||
|
||||
With deterministic fuzzing out of the way, the non-deterministic steps include
|
||||
stacked bit flips, insertions, deletions, arithmetics, and splicing of different
|
||||
test cases.
|
||||
|
||||
The relative yields and `execve()` costs of all these strategies have been
|
||||
investigated and are discussed in the aforementioned blog post.
|
||||
|
||||
For the reasons discussed in historical_notes.md (chiefly, performance,
|
||||
simplicity, and reliability), AFL generally does not try to reason about the
|
||||
relationship between specific mutations and program states; the fuzzing steps
|
||||
are nominally blind, and are guided only by the evolutionary design of the
|
||||
input queue.
|
||||
|
||||
That said, there is one (trivial) exception to this rule: when a new queue
|
||||
entry goes through the initial set of deterministic fuzzing steps, and tweaks to
|
||||
some regions in the file are observed to have no effect on the checksum of the
|
||||
execution path, they may be excluded from the remaining phases of
|
||||
deterministic fuzzing - and the fuzzer may proceed straight to random tweaks.
|
||||
Especially for verbose, human-readable data formats, this can reduce the number
|
||||
of execs by 10-40% or so without an appreciable drop in coverage. In extreme
|
||||
cases, such as normally block-aligned tar archives, the gains can be as high as
|
||||
90%.
|
||||
|
||||
Because the underlying "effector maps" are local to every queue entry and remain
|
||||
in force only during deterministic stages that do not alter the size or the
|
||||
general layout of the underlying file, this mechanism appears to work very
|
||||
reliably and proved to be simple to implement.
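
A sketch of how such an effector map could be derived during the walking byte flips, again assuming a hypothetical `exec_checksum()` helper:

```c
#include <stdint.h>
#include <stddef.h>

extern uint32_t exec_checksum(const uint8_t *buf, size_t len);  /* hypothetical */

/* eff[i] = 1 iff clobbering byte i changes the execution path; later
   deterministic stages can then skip positions marked 0. */
void build_effector_map(uint8_t *buf, size_t len, uint8_t *eff) {
  uint32_t base = exec_checksum(buf, len);

  for (size_t i = 0; i < len; i++) {
    uint8_t orig = buf[i];
    buf[i] ^= 0xFF;                               /* clobber the byte     */
    eff[i] = (exec_checksum(buf, len) != base);   /* did the path change? */
    buf[i] = orig;                                /* restore              */
  }
}
```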
|
||||
|
||||
## 7. Dictionaries
|
||||
|
||||
The feedback provided by the instrumentation makes it easy to automatically
|
||||
identify syntax tokens in some types of input files, and to detect that certain
|
||||
combinations of predefined or auto-detected dictionary terms constitute a
|
||||
valid grammar for the tested parser.
|
||||
|
||||
A discussion of how these features are implemented within afl-fuzz can be found
|
||||
here:
|
||||
|
||||
http://lcamtuf.blogspot.com/2015/01/afl-fuzz-making-up-grammar-with.html
|
||||
|
||||
In essence, when basic, typically easily-obtained syntax tokens are combined
|
||||
together in a purely random manner, the instrumentation and the evolutionary
|
||||
design of the queue together provide a feedback mechanism to differentiate
|
||||
between meaningless mutations and ones that trigger new behaviors in the
|
||||
instrumented code - and to incrementally build more complex syntax on top of
|
||||
this discovery.
|
||||
|
||||
The dictionaries have been shown to enable the fuzzer to rapidly reconstruct
|
||||
the grammar of highly verbose and complex languages such as JavaScript, SQL,
|
||||
or XML; several examples of generated SQL statements are given in the blog
|
||||
post mentioned above.
|
||||
|
||||
Interestingly, the AFL instrumentation also allows the fuzzer to automatically
|
||||
isolate syntax tokens already present in an input file. It can do so by looking
|
||||
for runs of bytes that, when flipped, produce a consistent change to the
|
||||
program's execution path; this is suggestive of an underlying atomic comparison
|
||||
to a predefined value baked into the code. The fuzzer relies on this signal
|
||||
to build compact "auto dictionaries" that are then used in conjunction with
|
||||
other fuzzing strategies.
|
||||
|
||||
## 8. De-duping crashes
|
||||
|
||||
De-duplication of crashes is one of the more important problems for any
|
||||
competent fuzzing tool. Many of the naive approaches run into problems; in
|
||||
particular, looking just at the faulting address may lead to completely
|
||||
unrelated issues being clustered together if the fault happens in a common
|
||||
library function (say, `strcmp`, `strcpy`); while checksumming call stack
|
||||
backtraces can lead to extreme crash count inflation if the fault can be
|
||||
reached through a number of different, possibly recursive code paths.
|
||||
|
||||
The solution implemented in `afl-fuzz` considers a crash unique if either of two
|
||||
conditions is met:
|
||||
|
||||
- The crash trace includes a tuple not seen in any of the previous crashes,
|
||||
- The crash trace is missing a tuple that was always present in earlier
|
||||
faults.
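
A simplified sketch of this test (not the actual afl-fuzz bookkeeping): `crash_seen[]` starts out as all zeroes and `crash_always[]` as all ones before the first crash is recorded:

```c
#include <stdint.h>
#include <stddef.h>

#define MAP_SIZE (1 << 16)

/* crash_seen[i]   - tuple i appeared in at least one earlier crash
   crash_always[i] - tuple i appeared in *every* earlier crash so far */
int crash_is_unique(const uint8_t *trace, uint8_t *crash_seen,
                    uint8_t *crash_always) {
  int unique = 0;

  for (size_t i = 0; i < MAP_SIZE; i++) {
    if (trace[i] && !crash_seen[i])   unique = 1;  /* brand-new tuple          */
    if (!trace[i] && crash_always[i]) unique = 1;  /* missing an "always" one  */
  }

  /* Update the bookkeeping for subsequent crashes. */
  for (size_t i = 0; i < MAP_SIZE; i++) {
    if (trace[i]) crash_seen[i]   = 1;
    else          crash_always[i] = 0;
  }
  return unique;
}
```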
|
||||
|
||||
The approach is vulnerable to some path count inflation early on, but exhibits
|
||||
a very strong self-limiting effect, similar to the execution path analysis
|
||||
logic that is the cornerstone of `afl-fuzz`.
|
||||
|
||||
## 9. Investigating crashes
|
||||
|
||||
The exploitability of many types of crashes can be ambiguous; afl-fuzz tries
|
||||
to address this by providing a crash exploration mode where a known-faulting
|
||||
test case is fuzzed in a manner very similar to the normal operation of the
|
||||
fuzzer, but with a constraint that causes any non-crashing mutations to be
|
||||
thrown away.
|
||||
|
||||
A detailed discussion of the value of this approach can be found here:
|
||||
|
||||
http://lcamtuf.blogspot.com/2014/11/afl-fuzz-crash-exploration-mode.html
|
||||
|
||||
The method uses instrumentation feedback to explore the state of the crashing
|
||||
program to get past the ambiguous faulting condition and then isolate the
|
||||
newly-found inputs for human review.
|
||||
|
||||
On the subject of crashes, it is worth noting that in contrast to normal
|
||||
queue entries, crashing inputs are *not* trimmed; they are kept exactly as
|
||||
discovered to make it easier to compare them to the parent, non-crashing entry
|
||||
in the queue. That said, `afl-tmin` can be used to shrink them at will.
|
||||
|
||||
## 10. The fork server
|
||||
|
||||
To improve performance, `afl-fuzz` uses a "fork server", where the fuzzed process
|
||||
goes through `execve()`, linking, and libc initialization only once, and is then
|
||||
cloned from a stopped process image by leveraging copy-on-write. The
|
||||
implementation is described in more detail here:
|
||||
|
||||
http://lcamtuf.blogspot.com/2014/10/fuzzing-binaries-without-execve.html
|
||||
|
||||
The fork server is an integral aspect of the injected instrumentation and
|
||||
simply stops at the first instrumented function to await commands from
|
||||
`afl-fuzz`.
|
||||
|
||||
With fast targets, the fork server can offer considerable performance gains,
|
||||
usually between 1.5x and 2x. It is also possible to:
|
||||
|
||||
- Use the fork server in manual ("deferred") mode, skipping over larger,
|
||||
user-selected chunks of initialization code. It requires very modest
|
||||
code changes to the targeted program, and with some targets, can
|
||||
produce 10x+ performance gains.
|
||||
- Enable "persistent" mode, where a single process is used to try out
|
||||
multiple inputs, greatly limiting the overhead of repetitive `fork()`
|
||||
calls. This generally requires some code changes to the targeted program,
|
||||
but can improve the performance of fast targets by a factor of 5 or more - approximating the benefits of in-process fuzzing jobs while still
|
||||
maintaining very robust isolation between the fuzzer process and the
|
||||
targeted binary.
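
For reference, a minimal persistent-mode harness in the style documented for AFL++'s instrumenting compilers (afl-clang-fast and friends); `process_input()` is a hypothetical stand-in for the code under test:

```c
__AFL_FUZZ_INIT();

extern void process_input(const unsigned char *data, int len);  /* hypothetical */

int main(void) {

#ifdef __AFL_HAVE_MANUAL_CONTROL
  __AFL_INIT();                      /* deferred fork server starts here */
#endif

  unsigned char *buf = __AFL_FUZZ_TESTCASE_BUF;   /* shared-memory test case */

  while (__AFL_LOOP(10000)) {        /* run up to 10000 inputs per fork  */
    int len = __AFL_FUZZ_TESTCASE_LEN;
    process_input(buf, len);
    /* reset any global state here before the next iteration */
  }
  return 0;
}
```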
|
||||
|
||||
## 11. Parallelization
|
||||
|
||||
The parallelization mechanism relies on periodically examining the queues
|
||||
produced by independently-running instances on other CPU cores or on remote
|
||||
machines, and then selectively pulling in the test cases that, when tried
|
||||
out locally, produce behaviors not yet seen by the fuzzer at hand.
|
||||
|
||||
This allows for extreme flexibility in fuzzer setup, including running synced
|
||||
instances against different parsers of a common data format, often with
|
||||
synergistic effects.
|
||||
|
||||
For more information about this design, see parallel_fuzzing.md.
|
||||
|
||||
## 12. Binary-only instrumentation
|
||||
|
||||
Instrumentation of black-box, binary-only targets is accomplished with the
|
||||
help of a separately-built version of QEMU in "user emulation" mode. This also
|
||||
allows the execution of cross-architecture code - say, ARM binaries on x86.
|
||||
|
||||
QEMU uses basic blocks as translation units; the instrumentation is implemented
|
||||
on top of this and uses a model roughly analogous to the compile-time hooks:
|
||||
|
||||
```c
|
||||
if (block_address > elf_text_start && block_address < elf_text_end) {
|
||||
|
||||
cur_location = (block_address >> 4) ^ (block_address << 8);
|
||||
shared_mem[cur_location ^ prev_location]++;
|
||||
prev_location = cur_location >> 1;
|
||||
|
||||
}
|
||||
```
|
||||
|
||||
The shift-and-XOR-based scrambling in the second line is used to mask the
|
||||
effects of instruction alignment.
|
||||
|
||||
The start-up of binary translators such as QEMU, DynamoRIO, and PIN is fairly
|
||||
slow; to counter this, the QEMU mode leverages a fork server similar to that
|
||||
used for compiler-instrumented code, effectively spawning copies of an
|
||||
already-initialized process paused at `_start`.
|
||||
|
||||
First-time translation of a new basic block also incurs substantial latency. To
|
||||
eliminate this problem, the AFL fork server is extended by providing a channel
|
||||
between the running emulator and the parent process. The channel is used
|
||||
to notify the parent about the addresses of any newly-encountered blocks and to
|
||||
add them to the translation cache that will be replicated for future child
|
||||
processes.
|
||||
|
||||
As a result of these two optimizations, the overhead of the QEMU mode is
|
||||
roughly 2-5x, compared to 100x+ for PIN.
|
||||
|
||||
## 13. The `afl-analyze` tool
|
||||
|
||||
The file format analyzer is a simple extension of the minimization algorithm
|
||||
discussed earlier on; instead of attempting to remove no-op blocks, the tool
|
||||
performs a series of walking byte flips and then annotates runs of bytes
|
||||
in the input file.
|
||||
|
||||
It uses the following classification scheme:
|
||||
|
||||
- "No-op blocks" - segments where bit flips cause no apparent changes to
|
||||
control flow. Common examples may be comment sections, pixel data within
|
||||
a bitmap file, etc.
|
||||
- "Superficial content" - segments where some, but not all, bitflips
|
||||
produce some control flow changes. Examples may include strings in rich
|
||||
documents (e.g., XML, RTF).
|
||||
- "Critical stream" - a sequence of bytes where all bit flips alter control
|
||||
flow in different but correlated ways. This may be compressed data,
|
||||
non-atomically compared keywords or magic values, etc.
|
||||
- "Suspected length field" - small, atomic integer that, when touched in
|
||||
any way, causes a consistent change to program control flow, suggestive
|
||||
of a failed length check.
|
||||
- "Suspected cksum or magic int" - an integer that behaves similarly to a
|
||||
length field, but has a numerical value that makes the length explanation
|
||||
unlikely. This is suggestive of a checksum or other "magic" integer.
|
||||
- "Suspected checksummed block" - a long block of data where any change
|
||||
always triggers the same new execution path. Likely caused by failing
|
||||
a checksum or a similar integrity check before any subsequent parsing
|
||||
takes place.
|
||||
- "Magic value section" - a generic token where changes cause the type
|
||||
of binary behavior outlined earlier, but that doesn't meet any of the
|
||||
other criteria. May be an atomically compared keyword or so.
|
64
docs/third_party_tools.md
Normal file
@ -0,0 +1,64 @@
|
||||
# Tools that help fuzzing with AFL++
|
||||
|
||||
## Speeding up fuzzing
|
||||
|
||||
* [libfiowrapper](https://github.com/marekzmyslowski/libfiowrapper) - if the
|
||||
function you want to fuzz requires loading a file, this allows using the
|
||||
shared memory test case feature :-) - recommended.
|
||||
|
||||
## Minimization of test cases
|
||||
|
||||
* [afl-pytmin](https://github.com/ilsani/afl-pytmin) - a wrapper for afl-tmin
|
||||
that tries to speed up the process of minimization of a single test case by
|
||||
using many CPU cores.
|
||||
* [afl-ddmin-mod](https://github.com/MarkusTeufelberger/afl-ddmin-mod) - a
|
||||
variation of afl-tmin based on the ddmin algorithm.
|
||||
* [halfempty](https://github.com/googleprojectzero/halfempty) - is a fast
|
||||
utility for minimizing test cases by Tavis Ormandy based on parallelization.
|
||||
|
||||
## Distributed execution
|
||||
|
||||
* [disfuzz-afl](https://github.com/MartijnB/disfuzz-afl) - distributed fuzzing
|
||||
for AFL.
|
||||
* [AFLDFF](https://github.com/quantumvm/AFLDFF) - AFL distributed fuzzing
|
||||
framework.
|
||||
* [afl-launch](https://github.com/bnagy/afl-launch) - a tool for the execution
|
||||
of many AFL instances.
|
||||
* [afl-mothership](https://github.com/afl-mothership/afl-mothership) -
|
||||
management and execution of many synchronized AFL fuzzers on AWS cloud.
|
||||
* [afl-in-the-cloud](https://github.com/abhisek/afl-in-the-cloud) - another
|
||||
script for running AFL in AWS.
|
||||
|
||||
## Deployment, management, monitoring, reporting
|
||||
|
||||
* [afl-utils](https://gitlab.com/rc0r/afl-utils) - a set of utilities for
|
||||
automatic processing/analysis of crashes and reducing the number of test
|
||||
cases.
|
||||
* [afl-other-arch](https://github.com/shellphish/afl-other-arch) - is a set of
|
||||
patches and scripts for easily adding support for various non-x86
|
||||
architectures for AFL.
|
||||
* [afl-trivia](https://github.com/bnagy/afl-trivia) - a few small scripts to
|
||||
simplify the management of AFL.
|
||||
* [afl-monitor](https://github.com/reflare/afl-monitor) - a script for
|
||||
monitoring AFL.
|
||||
* [afl-manager](https://github.com/zx1340/afl-manager) - a web server in Python
|
||||
for managing multi-afl.
|
||||
* [afl-remote](https://github.com/block8437/afl-remote) - a web server for the
|
||||
remote management of AFL instances.
|
||||
* [afl-extras](https://github.com/fekir/afl-extras) - shell scripts to
|
||||
parallelize afl-tmin, startup, and data collection.
|
||||
|
||||
## Crash processing
|
||||
|
||||
* [AFLTriage](https://github.com/quic/AFLTriage) -
|
||||
triage crashing input files using gdb.
|
||||
* [afl-crash-analyzer](https://github.com/floyd-fuh/afl-crash-analyzer) -
|
||||
another crash analyzer for AFL.
|
||||
* [fuzzer-utils](https://github.com/ThePatrickStar/fuzzer-utils) - a set of
|
||||
scripts for the analysis of results.
|
||||
* [atriage](https://github.com/Ayrx/atriage) - a simple triage tool.
|
||||
* [afl-kit](https://github.com/kcwu/afl-kit) - afl-cmin on Python.
|
||||
* [AFLize](https://github.com/d33tah/aflize) - a tool that automatically
|
||||
generates builds of debian packages suitable for AFL.
|
||||
* [afl-fid](https://github.com/FoRTE-Research/afl-fid) - a set of tools for
|
||||
working with input data.
|
45
docs/tutorials.md
Normal file
@ -0,0 +1,45 @@
|
||||
# Tutorials
|
||||
|
||||
If you are a total newbie, try this guide:
|
||||
|
||||
* [https://github.com/alex-maleno/Fuzzing-Module](https://github.com/alex-maleno/Fuzzing-Module)
|
||||
|
||||
Here are some good write-ups to show how to effectively use AFL++:
|
||||
|
||||
* [https://aflplus.plus/docs/tutorials/libxml2_tutorial/](https://aflplus.plus/docs/tutorials/libxml2_tutorial/)
|
||||
* [https://bananamafia.dev/post/gb-fuzz/](https://bananamafia.dev/post/gb-fuzz/)
|
||||
* [https://securitylab.github.com/research/fuzzing-challenges-solutions-1](https://securitylab.github.com/research/fuzzing-challenges-solutions-1)
|
||||
* [https://securitylab.github.com/research/fuzzing-software-2](https://securitylab.github.com/research/fuzzing-software-2)
|
||||
* [https://securitylab.github.com/research/fuzzing-sockets-FTP](https://securitylab.github.com/research/fuzzing-sockets-FTP)
|
||||
* [https://securitylab.github.com/research/fuzzing-sockets-FreeRDP](https://securitylab.github.com/research/fuzzing-sockets-FreeRDP)
|
||||
* [https://securitylab.github.com/research/fuzzing-apache-1](https://securitylab.github.com/research/fuzzing-apache-1)
|
||||
* [https://mmmds.pl/fuzzing-map-parser-part-1-teeworlds/](https://mmmds.pl/fuzzing-map-parser-part-1-teeworlds/)
|
||||
|
||||
If you do not want to follow a tutorial but rather try an exercise type of
|
||||
training, then we can highly recommend the following:
|
||||
|
||||
* [https://github.com/antonio-morales/Fuzzing101](https://github.com/antonio-morales/Fuzzing101)
|
||||
|
||||
If you are interested in fuzzing structured data (where you define what the
|
||||
structure is), these links have you covered (some are outdated though):
|
||||
|
||||
* libprotobuf for AFL++:
|
||||
[https://github.com/P1umer/AFLplusplus-protobuf-mutator](https://github.com/P1umer/AFLplusplus-protobuf-mutator)
|
||||
* libprotobuf raw:
|
||||
[https://github.com/bruce30262/libprotobuf-mutator_fuzzing_learning/tree/master/4_libprotobuf_aflpp_custom_mutator](https://github.com/bruce30262/libprotobuf-mutator_fuzzing_learning/tree/master/4_libprotobuf_aflpp_custom_mutator)
|
||||
* libprotobuf for old AFL++ API:
|
||||
[https://github.com/thebabush/afl-libprotobuf-mutator](https://github.com/thebabush/afl-libprotobuf-mutator)
|
||||
* Superion for AFL++:
|
||||
[https://github.com/adrian-rt/superion-mutator](https://github.com/adrian-rt/superion-mutator)
|
||||
|
||||
## Video Tutorials
|
||||
|
||||
* [Install AFL++ Ubuntu](https://www.youtube.com/watch?v=5dCvhkbi3RA)
|
||||
* [[Fuzzing with AFLplusplus] Installing AFLPlusplus and fuzzing a simple C program](https://www.youtube.com/watch?v=9wRVo0kYSlc)
|
||||
* [[Fuzzing with AFLplusplus] How to fuzz a binary with no source code on Linux in persistent mode](https://www.youtube.com/watch?v=LGPJdEO02p4)
|
||||
* [Blackbox Fuzzing #1: Start Binary-Only Fuzzing using AFL++ QEMU mode](https://www.youtube.com/watch?v=sjLFf9q2NRc)
|
||||
* [HOPE 2020 (2020): Hunting Bugs in Your Sleep - How to Fuzz (Almost) Anything With AFL/AFL++](https://www.youtube.com/watch?v=A8ex1hqaQ7E)
|
||||
* [How Fuzzing with AFL works!](https://www.youtube.com/watch?v=COHUWuLTbdk)
|
||||
* [WOOT '20 - AFL++ : Combining Incremental Steps of Fuzzing Research](https://www.youtube.com/watch?v=cZidm6I7KWU)
|
||||
|
||||
If you find other good ones, please send them to us :-)
|
@ -1 +0,0 @@
|
||||
() { _; } >_[$($())] { id; }
|
@ -1 +0,0 @@
|
||||
() { x() { _; }; x() { _; } <<a; }
|