Fuzzing

From Wikipedia, the free encyclopedia
Automated software testing technique

In programming and software development, fuzzing or fuzz testing is an automated software testing technique that involves providing invalid, unexpected, or random data as inputs to a computer program. The program is then monitored for exceptions such as crashes, failing built-in code assertions, or potential memory leaks. Typically, fuzzers are used to test programs that take structured inputs. This structure is specified, for example in a file format or protocol, and distinguishes valid from invalid input. An effective fuzzer generates semi-valid inputs that are "valid enough" in that they are not directly rejected by the parser, but do create unexpected behaviors deeper in the program, and are "invalid enough" to expose corner cases that have not been properly dealt with.
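To illustrate the basic idea, the following is a minimal sketch of a fuzz loop in Python (not the workflow of any particular tool); the target binary ./target, the input-size limit, and the purely random inputs are assumptions made for this example.

```python
import random
import subprocess

def random_input(max_len=1024):
    """Generate a random byte string to use as a test input."""
    return bytes(random.randrange(256) for _ in range(random.randint(0, max_len)))

def fuzz_once(target="./target"):
    """Run the target once on random data and report crashes or hangs."""
    data = random_input()
    try:
        proc = subprocess.run([target], input=data, capture_output=True, timeout=5)
    except subprocess.TimeoutExpired:
        print("hang on", len(data), "bytes")       # hangs also count as failures
        return data
    # On POSIX, a negative return code means the process was killed by a signal,
    # e.g. SIGSEGV -- the classic crash oracle used by the earliest fuzzers.
    if proc.returncode < 0:
        print("crash (signal", -proc.returncode, ") on", len(data), "bytes")
        return data
    return None

if __name__ == "__main__":
    for _ in range(1000):
        fuzz_once()
```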

For the purpose of security, input that crosses a trust boundary is often the most useful.[1] For example, it is more important to fuzz code that handles a file uploaded by any user than it is to fuzz the code that parses a configuration file that is accessible only to a privileged user.

History


The term "fuzz" originates from a 1988 class project[2] in the graduate Advanced Operating Systems class (CS736), taught by Prof. Barton Miller at theUniversity of Wisconsin, whose results were subsequently published in 1990.[3][4] To fuzz test aUNIX utility meant to automatically generate random input and command-line parameters for the utility. The project was designed to test the reliability of UNIX command line programs by executing a large number of random inputs in quick succession until they crashed. Miller's team was able to crash 25 to 33 percent of the utilities that they tested. They then debugged each of the crashes to determine the cause and categorized each detected failure. To allow other researchers to conduct similar experiments with other software, the source code of the tools, the test procedures, and the raw result data were made publicly available.[5] This early fuzzing would now be called black box, generational, unstructured (dumb or "classic") fuzzing.

According to Prof. Barton Miller, "In the process of writing the project description, I needed to give this kind of testing a name. I wanted a name that would evoke the feeling of random, unstructured data. After trying out several ideas, I settled on the term fuzz."[4]

A key contribution of this early work was its simple (almost simplistic) oracle: a program failed its test if it crashed or hung under the random input and was considered to have passed otherwise. While test oracles can be challenging to construct, this oracle was simple and universally applicable.

In April 2012, Google announced ClusterFuzz, a cloud-based fuzzing infrastructure for security-critical components of the Chromium web browser.[6] Security researchers can upload their own fuzzers and collect bug bounties if ClusterFuzz finds a crash with the uploaded fuzzer.

In September 2014, Shellshock[7] was disclosed as a family of security bugs in the widely used UNIX Bash shell; most vulnerabilities of Shellshock were found using the fuzzer AFL.[8] (Many Internet-facing services, such as some web server deployments, use Bash to process certain requests, allowing an attacker to cause vulnerable versions of Bash to execute arbitrary commands. This can allow an attacker to gain unauthorized access to a computer system.[9])

In April 2015, Hanno Böck showed how the fuzzer AFL could have found the 2014 Heartbleed vulnerability.[10][11] (The Heartbleed vulnerability was disclosed in April 2014. It is a serious vulnerability that allows adversaries to decipher otherwise encrypted communication. The vulnerability was accidentally introduced into OpenSSL, which implements TLS and is used by the majority of servers on the internet. Shodan reported 238,000 machines still vulnerable in April 2016;[12] 200,000 in January 2017.[13])

In August 2016, the Defense Advanced Research Projects Agency (DARPA) held the finals of the first Cyber Grand Challenge, a fully automated capture-the-flag competition that lasted 11 hours.[14] The objective was to develop automatic defense systems that can discover, exploit, and correct software flaws in real time. Fuzzing was used as an effective offense strategy to discover flaws in the software of the opponents. It showed tremendous potential in the automation of vulnerability detection. The winner was a system called "Mayhem"[15] developed by the team ForAllSecure led by David Brumley.

In September 2016, Microsoft announced Project Springfield, a cloud-based fuzz testing service for finding security-critical bugs in software.[16]

In December 2016, Google announced OSS-Fuzz, which allows for continuous fuzzing of several security-critical open-source projects.[17]

At Black Hat 2018, Christopher Domas demonstrated the use of fuzzing to expose the existence of a hidden RISC core in a processor.[18] This core was able to bypass existing security checks to execute Ring 0 commands from Ring 3.

In September 2020, Microsoft released OneFuzz, a self-hosted fuzzing-as-a-service platform that automates the detection of software bugs.[19] It supports Windows and Linux.[20] It was archived three years later, on November 1, 2023.[21]

Early random testing


Testing programs with random inputs dates back to the 1950s when data was still stored on punched cards.[22] Programmers would use punched cards that were pulled from the trash or card decks of random numbers as input to computer programs. If an execution revealed undesired behavior, a bug had been detected.

The execution of random inputs is also called random testing or monkey testing.

In 1981, Duran and Ntafos formally investigated the effectiveness of testing a program with random inputs.[23][24] While random testing had been widely perceived to be the worst means of testing a program, the authors showed that it is a cost-effective alternative to more systematic testing techniques.

In 1983, Steve Capps at Apple developed "The Monkey",[25] a tool that would generate random inputs for classic Mac OS applications, such as MacPaint.[26] The figurative "monkey" refers to the infinite monkey theorem, which states that a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will eventually type out the entire works of Shakespeare. In the case of testing, the monkey would write the particular sequence of inputs that would trigger a crash.

In 1991, the crashme tool was released, which was intended to test the robustness of Unix and Unix-like operating systems by randomly executing system calls with randomly chosen parameters.[27]

Types


A fuzzer can be categorized in several ways:[28][1]

  1. A fuzzer can be generation-based or mutation-based depending on whether inputs are generated from scratch or by modifying existing inputs.
  2. A fuzzer can be dumb (unstructured) or smart (structured) depending on whether it is aware of input structure.
  3. A fuzzer can be white-, grey-, or black-box, depending on whether it is aware of program structure.

Reuse of existing input seeds


A mutation-based fuzzer leverages an existing corpus of seed inputs during fuzzing. It generates inputs by modifying (or rather mutating) the provided seeds.[29] For example, when fuzzing the image library libpng, the user would provide a set of valid PNG image files as seeds, while a mutation-based fuzzer would modify these seeds to produce semi-valid variants of each seed. The corpus of seed files may contain thousands of potentially similar inputs. Automated seed selection (or test suite reduction) allows users to pick the best seeds in order to maximize the total number of bugs found during a fuzz campaign.[30]
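A rough sketch of the seed-loading and mutation steps in Python; this is illustrative only, not the mutation strategy of any specific fuzzer, and the seed directory layout is an assumption.

```python
import os
import random

def load_seeds(seed_dir):
    """Read all seed files (e.g. valid PNG images) from a corpus directory."""
    seeds = []
    for name in os.listdir(seed_dir):
        with open(os.path.join(seed_dir, name), "rb") as f:
            seeds.append(bytes(f.read()))
    return seeds

def mutate(seed):
    """Return a semi-valid variant of a seed by applying one random small change."""
    data = bytearray(seed)
    if not data:
        return bytes(data)
    choice = random.choice(["flip", "replace", "delete"])
    pos = random.randrange(len(data))
    if choice == "flip":
        data[pos] ^= 1 << random.randrange(8)       # flip one bit
    elif choice == "replace":
        data[pos] = random.randrange(256)           # overwrite one byte
    else:
        del data[pos:pos + random.randint(1, 16)]   # drop a small block
    return bytes(data)
```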

A generation-based fuzzer generates inputs from scratch. For instance, a smart generation-based fuzzer[31] takes the input model that was provided by the user to generate new inputs. Unlike mutation-based fuzzers, a generation-based fuzzer does not depend on the existence or quality of a corpus of seed inputs.

Some fuzzers can do both: generate inputs from scratch and generate inputs by mutating existing seeds.[32]

Aware of input structure


Typically, fuzzers are used to generate inputs for programs that take structured inputs, such as a file, a sequence of keyboard or mouse events, or a sequence of messages. This structure distinguishes valid input that is accepted and processed by the program from invalid input that is quickly rejected by the program. What constitutes a valid input may be explicitly specified in an input model. Examples of input models are formal grammars, file formats, GUI models, and network protocols. Even items not normally considered as input can be fuzzed, such as the contents of databases, shared memory, environment variables, or the precise interleaving of threads. An effective fuzzer generates semi-valid inputs that are "valid enough" so that they are not directly rejected by the parser and "invalid enough" so that they might stress corner cases and exercise interesting program behaviours.

A smart (model-based,[32] grammar-based,[31][33] or protocol-based[34]) fuzzer leverages the input model to generate a greater proportion of valid inputs. For instance, if the input can be modelled as an abstract syntax tree, then a smart mutation-based fuzzer[33] would employ random transformations to move complete subtrees from one node to another. If the input can be modelled by a formal grammar, a smart generation-based fuzzer[31] would instantiate the production rules to generate inputs that are valid with respect to the grammar. However, generally the input model must be explicitly provided, which is difficult to do when the model is proprietary, unknown, or very complex. If a large corpus of valid and invalid inputs is available, a grammar induction technique, such as Angluin's L* algorithm, would be able to generate an input model.[35][36]
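For illustration, a generation-based fuzzer for a small, made-up arithmetic-expression grammar might instantiate production rules as in the following Python sketch; the grammar itself is an assumption invented for this example.

```python
import random

# A toy context-free grammar: each nonterminal maps to a list of alternatives,
# and each alternative is a sequence of terminals and nonterminals.
GRAMMAR = {
    "<expr>":  [["<term>"], ["<expr>", "+", "<term>"], ["<expr>", "-", "<term>"]],
    "<term>":  [["<digit>"], ["(", "<expr>", ")"]],
    "<digit>": [[d] for d in "0123456789"],
}

def generate(symbol="<expr>", depth=0, max_depth=8):
    """Expand a nonterminal by picking production rules at random."""
    if symbol not in GRAMMAR:
        return symbol                       # terminal symbol: emit as-is
    # Near the depth limit, fall back to the first (shortest) alternative
    # so that the recursion terminates.
    alternatives = GRAMMAR[symbol]
    rule = alternatives[0] if depth >= max_depth else random.choice(alternatives)
    return "".join(generate(s, depth + 1, max_depth) for s in rule)

print(generate())   # e.g. "(3+5)-7" -- valid with respect to the grammar
```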

A dumb fuzzer[37][38] does not require the input model and can thus be employed to fuzz a wider variety of programs. For instance, AFL is a dumb mutation-based fuzzer that modifies a seed file by flipping random bits, by substituting random bytes with "interesting" values, and by moving or deleting blocks of data. However, a dumb fuzzer might generate a lower proportion of valid inputs and stress the parser code rather than the main components of a program. The disadvantage of dumb fuzzers can be illustrated by means of the construction of a valid checksum for a cyclic redundancy check (CRC). A CRC is an error-detecting code that ensures that the integrity of the data contained in the input file is preserved during transmission. A checksum is computed over the input data and recorded in the file. When the program processes the received file and the recorded checksum does not match the re-computed checksum, then the file is rejected as invalid. Now, a fuzzer that is unaware of the CRC is unlikely to generate the correct checksum. However, there are attempts to identify and re-compute a potential checksum in the mutated input, once a dumb mutation-based fuzzer has modified the protected data.[39]
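The checksum repair mentioned above can be sketched as follows, assuming a hypothetical file format whose last four bytes store a CRC-32 of the preceding payload; the layout is invented for this example and does not describe any particular tool.

```python
import random
import zlib

def mutate_payload(payload):
    """Dumb mutation: flip one random bit of the payload."""
    data = bytearray(payload)
    pos = random.randrange(len(data))
    data[pos] ^= 1 << random.randrange(8)
    return bytes(data)

def mutate_with_checksum(blob):
    """Mutate the payload, then re-compute the trailing CRC-32 so the
    mutated file is not rejected by the target's integrity check."""
    payload, _old_crc = blob[:-4], blob[-4:]
    new_payload = mutate_payload(payload)
    new_crc = zlib.crc32(new_payload).to_bytes(4, "big")
    return new_payload + new_crc
```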

Aware of program structure


Typically, a fuzzer is considered more effective if it achieves a higher degree of code coverage. The rationale is that if a fuzzer does not exercise certain structural elements in the program, then it is also not able to reveal bugs that are hiding in these elements. Some program elements are considered more critical than others. For instance, a division operator might cause a division-by-zero error, or a system call may crash the program.

A black-box fuzzer[37][33] treats the program as a black box and is unaware of internal program structure. For instance, a random testing tool that generates inputs at random is considered a blackbox fuzzer. Hence, a blackbox fuzzer can execute several hundred inputs per second, can be easily parallelized, and can scale to programs of arbitrary size. However, blackbox fuzzers may only scratch the surface and expose "shallow" bugs. Hence, there are attempts to develop blackbox fuzzers that can incrementally learn about the internal structure (and behavior) of a program during fuzzing by observing the program's output given an input. For instance, LearnLib employs active learning to generate an automaton that represents the behavior of a web application.

A white-box fuzzer[38][32] leverages program analysis to systematically increase code coverage or to reach certain critical program locations. For instance, SAGE[40] leverages symbolic execution to systematically explore different paths in the program (a technique known as concolic execution). If the program's specification is available, a whitebox fuzzer might leverage techniques from model-based testing to generate inputs and check the program outputs against the program specification. A whitebox fuzzer can be very effective at exposing bugs that hide deep in the program. However, the time used for analysis (of the program or its specification) can become prohibitive. If the whitebox fuzzer takes too long to generate an input, a blackbox fuzzer will be more efficient.[41] Hence, there are attempts to combine the efficiency of blackbox fuzzers and the effectiveness of whitebox fuzzers.[42]

A gray-box fuzzer leverages instrumentation rather than program analysis to glean information about the program. For instance, AFL and libFuzzer utilize lightweight instrumentation to trace basic block transitions exercised by an input. This leads to a reasonable performance overhead but informs the fuzzer about the increase in code coverage during fuzzing, which makes gray-box fuzzers extremely efficient vulnerability detection tools.[43]
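The coverage-feedback loop behind gray-box fuzzing can be sketched in Python as below. The toy target records branch identifiers itself, standing in for the compile-time instrumentation that tools such as AFL or libFuzzer insert, and the magic-prefix bug is invented for the example; inputs that reach new branches are kept as additional seeds.

```python
import random

def target(data, trace):
    """Toy instrumented target: records one branch id per matched prefix byte."""
    magic = b"FUZZ"
    for i, expected in enumerate(magic):
        if i < len(data) and data[i] == expected:
            trace.add(("prefix", i))           # instrumentation: branch was taken
        else:
            return
    raise RuntimeError("simulated crash")      # reachable only with the full prefix

def mutate(seed):
    """Replace one random byte of the seed."""
    data = bytearray(seed)
    data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

corpus = [b"hello"]                            # initial seed corpus
seen_coverage = set()

for _ in range(100_000):
    child = mutate(random.choice(corpus))
    trace = set()
    try:
        target(child, trace)
    except RuntimeError:
        print("crash found:", child)
        break
    if not trace <= seen_coverage:             # input reached new branches
        seen_coverage |= trace
        corpus.append(child)                   # keep it as a new seed
```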

Uses


Fuzzing is used mostly as an automated technique to expose vulnerabilities in security-critical programs that might be exploited with malicious intent.[6][16][17] More generally, fuzzing is used to demonstrate the presence of bugs rather than their absence. Running a fuzzing campaign for several weeks without finding a bug does not prove the program correct.[44] After all, the program may still fail for an input that has not been executed yet; executing a program for all inputs is prohibitively expensive. If the objective is to prove a program correct for all inputs, a formal specification must exist and techniques from formal methods must be used.

Exposing bugs


In order to expose bugs, a fuzzer must be able to distinguish expected (normal) from unexpected (buggy) program behavior. However, a machine cannot always distinguish a bug from a feature. In automated software testing, this is also called the test oracle problem.[45][46]

Typically, a fuzzer distinguishes between crashing and non-crashing inputs in the absence of specifications, as a simple and objective measure. Crashes can be easily identified and might indicate potential vulnerabilities (e.g., denial of service or arbitrary code execution). However, the absence of a crash does not indicate the absence of a vulnerability. For instance, a program written in C may or may not crash when an input causes a buffer overflow; rather, the program's behavior is undefined.

To make a fuzzer more sensitive to failures other than crashes, sanitizers can be used to inject assertions that crash the program when a failure is detected.[47][48] There are different sanitizers for different kinds of bugs:

  • AddressSanitizer detects memory-safety errors such as buffer overflows and use-after-free,
  • MemorySanitizer detects reads of uninitialized memory,
  • UndefinedBehaviorSanitizer detects undefined behavior such as signed integer overflow,
  • ThreadSanitizer detects data races and deadlocks,
  • LeakSanitizer detects memory leaks.

Fuzzing can also be used to detect "differential" bugs if a reference implementation is available. For automated regression testing,[49] the generated inputs are executed on two versions of the same program. For automated differential testing,[50] the generated inputs are executed on two implementations of the same program (e.g., lighttpd and httpd are both implementations of a web server). If the two variants produce different output for the same input, then one may be buggy and should be examined more closely.
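A sketch of the differential-testing idea in Python; parse_a and parse_b are hypothetical stand-ins for two independent implementations of the same functionality (for example, two parsers for the same format).

```python
import random

def run(fn, data):
    """Run one implementation, capturing its result or its exception."""
    try:
        return fn(data), None
    except Exception as exc:
        return None, exc

def differential_fuzz(parse_a, parse_b, rounds=10_000):
    """Feed the same random input to two implementations and flag divergence."""
    for _ in range(rounds):
        data = bytes(random.randrange(256) for _ in range(random.randint(0, 64)))
        result_a, error_a = run(parse_a, data)
        result_b, error_b = run(parse_b, data)
        # If one implementation accepts the input and the other rejects it,
        # or they produce different outputs, at least one of them is likely buggy.
        if (error_a is None) != (error_b is None) or result_a != result_b:
            print("divergence on input:", data)
```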

Validating static analysis reports


Static program analysis analyzes a program without actually executing it. This might lead to false positives, where the tool reports problems with the program that do not actually exist. Fuzzing in combination with dynamic program analysis can be used to try to generate an input that actually witnesses the reported problem.[51]

Browser security


Modern web browsers undergo extensive fuzzing. The Chromium code of Google Chrome is continuously fuzzed by the Chrome Security Team with 15,000 cores.[52] For Microsoft Edge (Legacy) and Internet Explorer, Microsoft performed fuzz testing with 670 machine-years during product development, generating more than 400 billion DOM manipulations from 1 billion HTML files.[53][52]

Toolchain


A fuzzer produces a large number of inputs in a relatively short time. For instance, in 2016 the Google OSS-Fuzz project produced around 4 trillion inputs a week.[17] Hence, many fuzzers provide a toolchain that automates otherwise manual and tedious tasks which follow the automated generation of failure-inducing inputs.

Automated bug triage

Main article: Bug triage

Automated bug triage is used to group a large number of failure-inducing inputs by root cause and to prioritize each individual bug by severity. A fuzzer produces a large number of inputs, and many of the failure-inducing ones may effectively expose the same software bug. Only some of these bugs are security-critical and should be patched with higher priority. For instance, the CERT Coordination Center provides the Linux triage tools, which group crashing inputs by the produced stack trace and list each group according to their probability of being exploitable.[54] The Microsoft Security Research Centre (MSEC) developed the "!exploitable" tool, which first creates a hash for a crashing input to determine its uniqueness and then assigns an exploitability rating:[55]

  • Exploitable
  • Probably Exploitable
  • Probably Not Exploitable, or
  • Unknown.
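As an illustrative sketch (not the actual CERT or !exploitable tooling), crashing inputs can be grouped by hashing the top frames of their stack traces; the crash data below is invented for the example.

```python
import hashlib
from collections import defaultdict

def crash_bucket(stack_frames, top=5):
    """Group crashes by a hash over the top stack frames, on the assumption
    that crashes with the same call stack share the same root cause."""
    key = "\n".join(stack_frames[:top]).encode()
    return hashlib.sha1(key).hexdigest()[:12]

# Hypothetical campaign results: (failure-inducing input, symbolized stack trace).
crashes = [
    (b"\x00" * 40, ["png_read_row", "png_decode", "main"]),
    (b"\xff" * 12, ["png_read_row", "png_decode", "main"]),
    (b"AAAA",      ["parse_chunk", "png_decode", "main"]),
]

buckets = defaultdict(list)
for failing_input, stack in crashes:
    buckets[crash_bucket(stack)].append(failing_input)

# One entry per distinct stack hash: likely one report per underlying bug.
for bucket_id, inputs in buckets.items():
    print(bucket_id, "-", len(inputs), "crashing inputs")
```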

Previously unreported, triaged bugs might be automatically reported to a bug tracking system. For instance, OSS-Fuzz runs large-scale, long-running fuzzing campaigns for several security-critical software projects where each previously unreported, distinct bug is reported directly to a bug tracker.[17] The OSS-Fuzz bug tracker automatically informs the maintainer of the vulnerable software and checks in regular intervals whether the bug has been fixed in the most recent revision using the uploaded minimized failure-inducing input.

Automated input minimization


Automated input minimization (or test case reduction) is an automated debugging technique to isolate the part of the failure-inducing input that is actually inducing the failure.[56][57] If the failure-inducing input is large and mostly malformed, it might be difficult for a developer to understand what exactly is causing the bug. Given the failure-inducing input, an automated minimization tool would remove as many input bytes as possible while still reproducing the original bug. For instance, Delta Debugging is an automated input minimization technique that employs an extended binary search algorithm to find such a minimal input.[58]
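A greatly simplified sketch of input minimization follows; it is a greedy cousin of the published delta debugging algorithm rather than the algorithm itself, and still_fails is an assumed callback that re-runs the target and reports whether the original failure still reproduces.

```python
def minimize(failing_input, still_fails):
    """Repeatedly try to delete chunks of the input while the bug still reproduces."""
    data = bytearray(failing_input)
    chunk = max(1, len(data) // 2)
    while chunk >= 1:
        pos = 0
        while pos < len(data):
            candidate = data[:pos] + data[pos + chunk:]   # remove one chunk
            if candidate and still_fails(bytes(candidate)):
                data = bytearray(candidate)               # keep the smaller input
            else:
                pos += chunk                              # chunk was needed; move on
        chunk //= 2                                       # try finer-grained deletions
    return bytes(data)
```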

List of popular fuzzers

See also: American fuzzy lop (fuzzer) § Forks

The following is a list of fuzzers described as "popular", "widely used", or similar in the academic literature.[59][60]

Name               White/gray/black-box   Smart/dumb   Description   Written in   License
AFL[61][62]        Gray                   Dumb         —             C            Apache 2.0
AFL++[63]          Gray                   Dumb         —             C            Apache 2.0
AFLFast[64]        Gray                   Dumb         —             C            Apache 2.0
Angora[65]         Gray                   Dumb         —             C++          Apache 2.0
honggfuzz[66][67]  Gray                   Dumb         —             C            Apache 2.0
QSYM[68]           ?                      ?            ?             ?            ?
SymCC[69]          White[70]              ?            —             C++          GPL, LGPL
T-Fuzz[71]         ?                      ?            ?             ?            ?
VUzzer[72]         ?                      ?            ?             ?            ?


References

  1. ^abJohn Neystadt (February 2008)."Automated Penetration Testing with White-Box Fuzzing". Microsoft. Retrieved2009-05-14.
  2. ^Barton P. Miller (September 1988)."Fall 1988 CS736 Project List"(PDF). Computer Sciences Department, University of Wisconsin-Madison. Retrieved2020-12-30.
  3. ^Barton P. Miller; Lars Fredriksen; Bryan So (December 1990)."An Empirical Study of the Reliability of UNIX Utilities".Communications of the ACM.33 (11):32–44.doi:10.1145/96267.96279.S2CID 14313707.
  4. ^abMiller, Barton (April 2008)."Foreword for Fuzz Testing Book".UW-Madison Computer Sciences. Retrieved29 March 2024.
  5. ^"Fuzz Testing of Application Reliability". University of Wisconsin-Madison. Retrieved2020-12-30.
  6. ^ab"Announcing ClusterFuzz". Retrieved2017-03-09.
  7. ^Perlroth, Nicole (25 September 2014)."Security Experts Expect 'Shellshock' Software Bug in Bash to Be Significant".The New York Times. Retrieved25 September 2014.
  8. ^Zalewski, Michał (1 October 2014)."Bash bug: the other two RCEs, or how we chipped away at the original fix (CVE-2014-6277 and '78)".lcamtuf's blog. Retrieved13 March 2017.
  9. ^Seltzer, Larry (29 September 2014)."Shellshock makes Heartbleed look insignificant".ZDNet. Retrieved29 September 2014.
  10. ^Böck, Hanno."Fuzzing: Wie man Heartbleed hätte finden können (in German)".Golem.de (in German). Retrieved13 March 2017.
  11. ^Böck, Hanno."How Heartbleed could've been found (in English)".Hanno's blog. Retrieved13 March 2017.
  12. ^"Search engine for the internet of things – devices still vulnerable to Heartbleed".shodan.io. Retrieved13 March 2017.
  13. ^"Heartbleed Report (2017-01)".shodan.io. Archived fromthe original on 23 January 2017. Retrieved10 July 2017.
  14. ^Walker, Michael."DARPA Cyber Grand Challenge".darpa.mil. Retrieved12 March 2017.
  15. ^"Mayhem comes in first place at CGC". Retrieved12 March 2017.
  16. ^ab"Announcing Project Springfield". 2016-09-26. Retrieved2017-03-08.
  17. ^abcd"Announcing OSS-Fuzz". Retrieved2017-03-08.
  18. ^Christopher Domas (August 2018)."GOD MODE UNLOCKED - Hardware Backdoors in x86 CPUs". Retrieved2018-09-03.
  19. ^"Microsoft: Windows 10 is hardened with these fuzzing security tools – now they're open source".ZDNet. September 15, 2020.
  20. ^"Microsoft open-sources fuzzing test framework".InfoWorld. September 17, 2020.
  21. ^microsoft/onefuzz, Microsoft, 2024-03-03, retrieved2024-03-06
  22. ^Gerald M. Weinberg (2017-02-05)."Fuzz Testing and Fuzz History". Retrieved2017-02-06.
  23. ^Joe W. Duran; Simeon C. Ntafos (1981-03-09).A report on random testing. Icse '81. Proceedings of the ACM SIGSOFT International Conference on Software Engineering (ICSE'81). pp. 179–183.ISBN 9780897911467.
  24. ^Joe W. Duran; Simeon C. Ntafos (1984-07-01). "An Evaluation of Random Testing".IEEE Transactions on Software Engineering (4). IEEE Transactions on Software Engineering (TSE):438–444.doi:10.1109/TSE.1984.5010257.S2CID 17208399.
  25. ^Andy Hertzfeld (2004).Revolution in the Valley:The Insanely Great Story of How the Mac Was Made?. O'Reily Press.ISBN 978-0596007195.
  26. ^"Macintosh Stories: Monkey Lives". Folklore.org. 1999-02-22. Retrieved2010-05-28.
  27. ^"crashme".CodePlex. Retrieved2021-05-21.
  28. ^Michael Sutton; Adam Greene; Pedram Amini (2007).Fuzzing: Brute Force Vulnerability Discovery. Addison-Wesley.ISBN 978-0-321-44611-4.
  29. ^Offutt, Jeff; Xu, Wuzhi (2004)."Generating Test Cases for Web Services Using Data Perturbation".Workshop on Testing, Analysis and Verification of Web Services.29 (5):1–10.doi:10.1145/1022494.1022529.S2CID 52854851.
  30. ^Rebert, Alexandre; Cha, Sang Kil; Avgerinos, Thanassis; Foote, Jonathan; Warren, David; Grieco, Gustavo; Brumley, David (2014)."Optimizing Seed Selection for Fuzzing"(PDF).Proceedings of the 23rd USENIX Conference on Security Symposium:861–875.
  31. ^abcPatrice Godefroid; Adam Kiezun; Michael Y. Levin."Grammar-based Whitebox Fuzzing"(PDF). Microsoft Research.
  32. ^abcVan-Thuan Pham; Marcel Böhme; Abhik Roychoudhury (2016-09-07). "Model-based whitebox fuzzing for program binaries".Proceedings of the 31st IEEE/ACM International Conference on Automated Software Engineering - ASE 2016. Proceedings of Automated Software Engineering (ASE'16). pp. 543–553.doi:10.1145/2970276.2970316.ISBN 9781450338455.S2CID 5809364.
  33. ^abc"Peach Fuzzer". Retrieved2017-03-08.
  34. ^Greg Banks; Marco Cova; Viktoria Felmetsger;Kevin Almeroth; Richard Kemmerer; Giovanni Vigna.SNOOZE: Toward a Stateful NetwOrk prOtocol fuzZEr. Proceedings of the Information Security Conference (ISC'06).
  35. ^Osbert Bastani; Rahul Sharma; Alex Aiken; Percy Liang (June 2017).Synthesizing Program Input Grammars. Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI 2017).arXiv:1608.01723.Bibcode:2016arXiv160801723B.
  36. ^"VDA Labs - Evolutionary Fuzzing System". Archived fromthe original on 2015-11-05. Retrieved2009-05-14.
  37. ^abAri Takanen; Jared D. Demott; Charles Miller (31 January 2018).Fuzzing for Software Security Testing and Quality Assurance, Second Edition. Artech House. p. 15.ISBN 978-1-63081-519-6.full document available (archived September 19, 2018)
  38. ^abVijay Ganesh; Tim Leek; Martin Rinard (2009-05-16)."Taint-based directed whitebox fuzzing".IEEE. Proceedings of the ACM SIGSOFT International Conference on Software Engineering (ICSE'09).
  39. ^Wang, T.; Wei, T.; Gu, G.; Zou, W. (May 2010). "TaintScope: A Checksum-Aware Directed Fuzzing Tool for Automatic Software Vulnerability Detection".2010 IEEE Symposium on Security and Privacy. pp. 497–512.CiteSeerX 10.1.1.169.7866.doi:10.1109/SP.2010.37.ISBN 978-1-4244-6894-2.S2CID 11898088.
  40. ^Patrice Godefroid; Michael Y. Levin; David Molnar (2008-02-08)."Automated Whitebox Fuzz Testing"(PDF). Proceedings of Network and Distributed Systems Symposium (NDSS'08).
  41. ^Marcel Böhme; Soumya Paul (2015-10-05). "A Probabilistic Analysis of the Efficiency of Automated Software Testing".IEEE Transactions on Software Engineering.42 (4):345–360.doi:10.1109/TSE.2015.2487274.S2CID 15927031.
  42. ^Nick Stephens; John Grosen; Christopher Salls; Andrew Dutcher; Ruoyu Wang; Jacopo Corbetta; Yan Shoshitaishvili; Christopher Kruegel; Giovanni Vigna (2016-02-24).Driller: Augmenting. Fuzzing Through Selective Symbolic Execution(PDF). Proceedings of Network and Distributed Systems Symposium (NDSS'16).
  43. ^Marcel Böhme; Van-Thuan Pham; Abhik Roychoudhury (2016-10-28). "Coverage-based Greybox Fuzzing as Markov Chain".Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. Proceedings of the ACM Conference on Computer and Communications Security (CCS'16). pp. 1032–1043.doi:10.1145/2976749.2978428.ISBN 9781450341394.S2CID 3344888.
  44. ^Hamlet, Richard G.; Taylor, Ross (December 1990). "Partition testing does not inspire confidence".IEEE Transactions on Software Engineering.16 (12):1402–1411.doi:10.1109/32.62448.
  45. ^Weyuker, Elaine J. (1 November 1982)."On Testing Non-Testable Programs".The Computer Journal.25 (4):465–470.doi:10.1093/comjnl/25.4.465.
  46. ^Barr, Earl T.; Harman, Mark; McMinn, Phil; Shahbaz, Muzammil; Yoo, Shin (1 May 2015)."The Oracle Problem in Software Testing: A Survey"(PDF).IEEE Transactions on Software Engineering.41 (5):507–525.doi:10.1109/TSE.2014.2372785.S2CID 7165993.
  47. ^"Clang compiler documentation".clang.llvm.org. Retrieved13 March 2017.
  48. ^"GNU GCC sanitizer options".gcc.gnu.org. Retrieved13 March 2017.
  49. ^Orso, Alessandro; Xie, Tao (2008). "BERT".Proceedings of the 2008 international workshop on dynamic analysis: Held in conjunction with the ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA 2008). ACM. pp. 36–42.doi:10.1145/1401827.1401835.ISBN 9781605580548.S2CID 7506576.
  50. ^McKeeman, William M. (1998)."Differential Testing for Software"(PDF).Digital Technical Journal.10 (1):100–107. Archived fromthe original(PDF) on 2006-10-31.
  51. ^Babić, Domagoj; Martignoni, Lorenzo; McCamant, Stephen; Song, Dawn (2011). "Statically-directed dynamic automated test generation".Proceedings of the 2011 International Symposium on Software Testing and Analysis. ACM. pp. 12–22.doi:10.1145/2001420.2001423.ISBN 9781450305624.S2CID 17344927.
  52. ^abSesterhenn, Eric; Wever, Berend-Jan; Orrù, Michele; Vervier, Markus (19 Sep 2017)."Browser Security WhitePaper"(PDF). X41D SEC GmbH.
  53. ^"Security enhancements for Microsoft Edge (Microsoft Edge for IT Pros)".Microsoft. 15 Oct 2017. Retrieved31 August 2018.
  54. ^"CERT Triage Tools".CERT Division of the Software Engineering Institute (SEI) at Carnegie Mellon University (CMU). Retrieved14 March 2017.
  55. ^"Microsoft !exploitable Crash Analyzer".CodePlex. Retrieved14 March 2017.
  56. ^"Test Case Reduction". 2011-07-18.
  57. ^"IBM Test Case Reduction Techniques". 2011-07-18. Archived fromthe original on 2016-01-10. Retrieved2011-07-18.
  58. ^Zeller, Andreas; Hildebrandt, Ralf (February 2002)."Simplifying and Isolating Failure-Inducing Input".IEEE Transactions on Software Engineering.28 (2):183–200.CiteSeerX 10.1.1.180.3357.doi:10.1109/32.988498.ISSN 0098-5589. Retrieved14 March 2017.
  59. ^Hazimeh, Ahmad; Herrera, Adrian; Payer, Mathias (2021-06-15)."Magma: A Ground-Truth Fuzzing Benchmark".Proceedings of the ACM on Measurement and Analysis of Computing Systems.4 (3): 49:1–49:29.arXiv:2009.01120.doi:10.1145/3428334.S2CID 227230949.
  60. ^Li, Yuwei; Ji, Shouling; Chen, Yuan; Liang, Sizhuang; Lee, Wei-Han; Chen, Yueyao; Lyu, Chenyang; Wu, Chunming; Beyah, Raheem; Cheng, Peng; Lu, Kangjie; Wang, Ting (2021).{UNIFUZZ}: A Holistic and Pragmatic {Metrics-Driven} Platform for Evaluating Fuzzers. pp. 2777–2794.ISBN 978-1-939133-24-3.
  61. ^Hazimeh, Herrera & Payer 2021, p. 1: "We evaluate seven widely-used mutation-based fuzzers (AFL, ...)".
  62. ^Li et al. 2021, p. 1: "Using UniFuzz, we conduct in-depth evaluations of several prominent fuzzers including AFL, ...".
  63. ^Hazimeh, Herrera & Payer 2021, p. 1: "We evaluate seven widely-used mutation-based fuzzers (..., AFL++, ...)".
  64. ^Li et al. 2021, p. 1: "Using UniFuzz, we conduct in-depth evaluations of several prominent fuzzers including AFL, AFLFast, ...".
  65. ^Li et al. 2021, p. 1: "Using UniFuzz, we conduct in-depth evaluations of several prominent fuzzers including AFL, ..., Angora, ...".
  66. ^Hazimeh, Herrera & Payer 2021, p. 1: "We evaluate seven widely-used mutation-based fuzzers (..., honggfuzz, ...)".
  67. ^Li et al. 2021, p. 1: "Using UniFuzz, we conduct in-depth evaluations of several prominent fuzzers including AFL, ..., Honggfuzz, ...".
  68. ^Li et al. 2021, p. 1: "Using UniFuzz, we conduct in-depth evaluations of several prominent fuzzers including AFL, ..., QSYM, ...".
  69. ^Hazimeh, Herrera & Payer 2021, p. 1: "We evaluate seven widely-used mutation-based fuzzers (..., and SymCC-AFL)".
  70. ^Hazimeh, Herrera & Payer 2021, p. 14.
  71. ^Li et al. 2021, p. 1: "Using UniFuzz, we conduct in-depth evaluations of several prominent fuzzers including AFL, ..., T-Fuzz, ...".
  72. ^Li et al. 2021, p. 1: "Using UniFuzz, we conduct in-depth evaluations of several prominent fuzzers including AFL, ..., and VUzzer64.".
