Fuzzing can be summed up as a testing method that feeds random inputs to a program. Where a more traditional approach to testing relies on the manual design of tests based on known assumptions, fuzzing brings an automated means of creating test cases. Although a single test generated by a fuzzer is unlikely to find any defects, millions of them in quick iterations make it very likely to trigger unexpected behaviours and crashes. With the rise of smarter fuzzers, fuzzing has become an efficient and reliable way to test most edge cases of a program, and it makes it possible to cover very large programs that would otherwise require a large amount of manual reviewing and testing effort. The low amount of manual intervention required to set up a modern smart fuzzer removes any pretext a developer or security researcher might have for not fuzzing their project. If you aren't fuzzing, the bad guys will (and will find all the bugs that come with it).
This workshop aims to introduce the basic concepts of fuzzing to the participants and to enable them to make fuzzing a critical step of their testing process. The class will start with a quick introduction to the concepts of fuzzing, why participants should do it, and some of the benefits other organizations have gained from it. The workshop will then move on to a hands-on approach: how to set up AFL, run it against a program, and interpret the outputs. Most of the exercises will revolve around a sample program with intentional bugs and gotchas, and once the participants have an understanding of the basics, they will be walked through real-world scenarios. Finally, time will be allocated at the end for the participants to fuzz a project of their choice with the assistance of the presenters.
Requirements:

Participants must:
- Bring their own laptop with a working Docker installation. Docker will be used to give all participants a proper AFL working environment. No support will be provided for participants running AFL outside of the provided Docker image. We might be able to provide remote environments through ssh, but in any case these are likely to be slow and suboptimal for quickly finding crashes with AFL.

For a better experience, we encourage participants to:
- Have a basic knowledge of C and common C vulnerabilities (buffer overflow, format string, etc.). The workshop won't cover the exploitation of found crashes, but this background helps in understanding why those crashes happen and what can be done with them.
- Have command-line knowledge, particularly how to build a program with gcc from the command-line interface.
I analyze existing vulnerabilities and upcoming 0days. Experienced with Android, Linux, and embedded/RTOS exploitation.
Jean-Marc Le Blanc
Independent Security Researcher
Currently working as a reverse engineer, Jean-Marc has worked for multiple respected security enterprises over the past 5 years. On top of his professional security research, he has done a lot of personal vulnerability research on large, popular applications. His most recent project has been the mruby bug...