Recently, OSS-Fuzz reported 26 new vulnerabilities to open source project maintainers, including one vulnerability in the critical OpenSSL library (CVE-2024-9143) that underpins much of internet infrastructure. The reports themselves aren’t unusual—we’ve reported and helped maintainers fix over 11,000 vulnerabilities in the 8 years of the project.
But these particular vulnerabilities represent a milestone for automated vulnerability finding: each was found with AI, using AI-generated and enhanced fuzz targets. The OpenSSL CVE is one of the first vulnerabilities in a critical piece of software discovered by LLMs, joining a recent Google discovery of an exploitable stack buffer underflow in the widely used database engine SQLite as another real-world example.
This blog post discusses the results and lessons over a year and a half of work to bring AI-powered fuzzing to this point, both in introducing AI into fuzz target generation and expanding this to simulate a developer’s workflow. These efforts continue our explorations of how AI can transform vulnerability discovery and strengthen the arsenal of defenders everywhere.
The story so far
In August 2023, the OSS-Fuzz team announced AI-Powered Fuzzing, describing our effort to leverage large language models (LLMs) to improve fuzzing coverage and find more vulnerabilities automatically—before malicious attackers could exploit them. Our approach was to use the coding abilities of an LLM to generate more fuzz targets, which are similar to unit tests that exercise relevant functionality to search for vulnerabilities.
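For readers unfamiliar with the format, a minimal libFuzzer-style fuzz target looks something like the sketch below; ParseConfig is a hypothetical function under test, not one from any particular project.

#include <stddef.h>
#include <stdint.h>
#include <string>

// Hypothetical parser under test; any API that consumes untrusted bytes
// can be exercised this way.
bool ParseConfig(const std::string& input);

// libFuzzer calls this entry point repeatedly with mutated inputs,
// watching for crashes and sanitizer reports.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
  ParseConfig(std::string(reinterpret_cast<const char*>(data), size));
  return 0;  // Non-crashing inputs always return 0.
}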
The ideal solution would be to completely automate the manual process of developing a fuzz target end to end:
Drafting an initial fuzz target.
Fixing any compilation issues that arise.
Running the fuzz target to see how it performs, and fixing any obvious mistakes causing runtime issues.
Running the corrected fuzz target for a longer period of time, and triaging any crashes to determine the root cause.
Fixing the vulnerabilities that are found.
In January 2024, we open sourced the framework that we were building to enable an LLM to generate fuzz targets. By that point, LLMs were reliably generating targets that exercised more interesting code coverage across 160 projects. But there was still a long tail of projects where we couldn’t get a single working AI-generated fuzz target.
To address this, we’ve been improving the first two steps, as well as implementing steps 3 and 4.
New results: More code coverage and discovered vulnerabilities
We’re now able to automatically gain more coverage in 272 C/C++ projects on OSS-Fuzz (up from 160), adding 370k+ lines of new code coverage. The top coverage improvement in a single project was an increase from 77 lines to 5434 lines (a 7000% increase).
This led to the discovery of 26 new vulnerabilities in projects on OSS-Fuzz that already had hundreds of thousands of hours of fuzzing. The highlight is CVE-2024-9143 in the critical and well-tested OpenSSL library. We reported this vulnerability on September 16 and a fix was published on October 16. As far as we can tell, this vulnerability has likely been present for two decades and wouldn’t have been discoverable with existing fuzz targets written by humans.
Another example was a bug in the project cJSON: even though a human-written harness already existed to fuzz a specific function, we still discovered a new vulnerability in that same function with an AI-generated target.
One reason that such bugs could remain undiscovered for so long is that line coverage is not a guarantee that a function is free of bugs. Code coverage as a metric isn’t able to measure all possible code paths and states—different flags and configurations may trigger different behaviors, unearthing different bugs. These examples underscore the need to continue to generate new varieties of fuzz targets even for code that is already fuzzed, as has also been shown by Project Zero in the past (1, 2).
New improvements
To achieve these results, we’ve been focusing on two major improvements:
Automatically generating more relevant context in our prompts. The more complete and relevant the information we can provide the LLM about a project, the less likely it is to hallucinate missing details in its response. This meant providing more accurate, project-specific context in prompts, such as function and type definitions, cross-references, and existing unit tests for each project. To generate this information automatically, we built new infrastructure to index projects across OSS-Fuzz.
LLMs turned out to be highly effective at emulating a typical developer's entire workflow of writing, testing, and iterating on the fuzz target, as well as triaging the crashes found. Thanks to this, it was possible to further automate more parts of the fuzzing workflow. This additional iterative feedback in turn resulted in a greater number of higher-quality, correct fuzz targets.
The workflow in action
Our LLM can now execute the first four steps of the developer’s process (with the fifth soon to come).
1. Drafting an initial fuzz target
A developer might check the source code, existing documentation and unit tests, as well as usages of the target function when drafting an initial fuzz target. An LLM can fulfill this role if we provide a prompt with this information and ask it to come up with a fuzz target.
Prompt:
Your goal is to write a fuzzing harness for the provided function-under-test signature using <code>LLVMFuzzerTestOneInput</code>. It is important that the provided solution compiles and actually calls the function-under-test specified by the function signature:
<code>
…
li_base64_dec(result, reserve, in, in_length, charset);
buffer_commit(out, out_pos);
return (out_pos || !in_length) ? result : NULL;
}
</code>
Here is the source code for functions which reference the function being tested:
<code>
…
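For illustration, a drafted target might look like the following sketch. The li_base64_dec signature here is an assumption inferred from the call in the prompt, not the project's actual declaration.

#include <stddef.h>
#include <stdint.h>
#include <vector>

// Assumed signature, inferred from the call site in the prompt; the real
// declaration lives in the project's headers.
extern "C" size_t li_base64_dec(unsigned char *result, size_t reserve,
                                const char *in, size_t in_length,
                                int charset);

extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
  if (size < 1) return 0;
  int charset = data[0] % 2;  // First byte selects a charset variant.
  const char *in = reinterpret_cast<const char *>(data + 1);
  size_t in_length = size - 1;
  // Base64 decoding shrinks its input, so input-sized output is ample.
  std::vector<unsigned char> result(in_length + 1);
  li_base64_dec(result.data(), result.size(), in, in_length, charset);
  return 0;
}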
2. Fixing any compilation issues that arise.
Once a developer has a candidate target, they would try to compile it and look at any compilation issues that arise. Again, we can prompt an LLM with details of the compilation errors so it can provide fixes.
Prompt:
Below is the error to fix:
The code has the following build issues:
<error>
server_fuzzer.cc:12:46: error: only virtual member functions can be marked 'override'
server_fuzzer.cc:37:5: error: use of undeclared 'operator()'
37 | operator()(request_data.data(), receiver);
| ^
server_fuzzer.cc:27:23: warning: private field 'provider_' is not used [-Wunused-private-field]
27 | FuzzedDataProvider& provider_;
| ^
</error>
Example of compilation errors that an LLM was able to fix
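The kind of correction involved is usually mechanical, as in this hedged sketch; the class and variable names below are hypothetical reconstructions from the error messages, not the actual generated code.

#include <stddef.h>
#include <stdint.h>
#include <vector>
#include <fuzzer/FuzzedDataProvider.h>

// Hypothetical reconstruction of the class behind the errors above. The
// stray 'override' was removed (the base method was not virtual), and the
// unused provider_ field was dropped.
class RequestHandler {
 public:
  void HandleRequest(const uint8_t* data, size_t size) {
    // ... feed the bytes into the server under test ...
  }
};

extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
  FuzzedDataProvider provider(data, size);
  std::vector<uint8_t> request_data =
      provider.ConsumeBytes<uint8_t>(provider.remaining_bytes());
  RequestHandler handler;
  // Fixed: call a named method instead of the undeclared 'operator()'.
  handler.HandleRequest(request_data.data(), request_data.size());
  return 0;
}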
3. Running the fuzz target to see how it performs, and fixing any obvious mistakes causing runtime issues.
Once all compilation errors are fixed, a developer would try running the fuzz target for a short period of time to see if there were any mistakes that led it to instantly crash, suggesting an error with the target rather than a bug discovered in the project.
The following is an example of an LLM fixing a semantic issue with the fuzzing setup:
LLM Response:
<explanation>
The issue is that `sqlite3_blob_open` requires the table to exist before you can open a blob on it. The fix ensures that the table exists by executing a `CREATE TABLE IF NOT EXISTS` statement before calling `sqlite3_blob_open`.
Here's a breakdown of the changes:
1. **Create Table Query:** A string `createTableQuery` is constructed to hold the SQL statement for creating the table. It uses the fuzzed values for the table and column names.
2. **Prepare Statement:** `sqlite3_prepare_v2` is used to prepare the SQL statement from `createTableQuery`. This compiles the SQL into a byte-code program that SQLite can execute.
3. **Execute Statement:** `sqlite3_step` executes the prepared statement. Since it's a `CREATE TABLE` statement, it doesn't return any data, but it creates the table if it doesn't exist.
4. **Finalize Statement:** `sqlite3_finalize` destroys the prepared statement object when it's no longer needed.
By adding these steps before calling `sqlite3_blob_open`, we ensure that the table exists, preventing the null pointer access issue.
</explanation>
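A hedged sketch of the corrected setup described above; the table and column handling is simplified here, and in the real target these names are derived from fuzzer input.

#include <sqlite3.h>
#include <string>

// Create the table before opening a blob on it, so sqlite3_blob_open
// has an existing table to operate on.
static void CreateTableThenOpenBlob(sqlite3 *db, const std::string &table,
                                    const std::string &column) {
  std::string createTableQuery =
      "CREATE TABLE IF NOT EXISTS " + table + " (" + column + " BLOB);";
  sqlite3_stmt *stmt = nullptr;
  if (sqlite3_prepare_v2(db, createTableQuery.c_str(), -1, &stmt, nullptr) ==
      SQLITE_OK) {
    sqlite3_step(stmt);  // CREATE TABLE returns no rows; just execute it.
  }
  sqlite3_finalize(stmt);  // Safe even if stmt is nullptr.

  sqlite3_blob *blob = nullptr;
  if (sqlite3_blob_open(db, "main", table.c_str(), column.c_str(),
                        /*iRow=*/1, /*flags=*/0, &blob) == SQLITE_OK) {
    sqlite3_blob_close(blob);
  }
}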
4. Running the corrected fuzz target for a longer period of time, and triaging any crashes.
At this point, the fuzz target is ready to run for an extended period of time on a suitable fuzzing infrastructure, such as ClusterFuzz.
Any discovered crashes would then need to be triaged, to determine the root causes and whether they represented legitimate vulnerabilities (or bugs in the fuzz target). An LLM can be prompted with the relevant context (stacktraces, fuzz target source code, relevant project source code) to perform this triage.
Prompt: Given the following crash report, fuzz driver code and relevant project function code, analyze the cause of the crash.
First, only answer “Crash is caused by bug in fuzz driver” or “Crash is caused by bug in project”. Second, offer a succinct and to-the-point analysis.
Below is crash report:
<log>
AddressSanitizer: heap-use-after-free on address 0x502000000630 at pc 0x560036939242 bp 0x7ffd65dca850 sp 0x7ffd65dca010
READ of size 1 at 0x502000000630 thread T0
SCARINESS: 40 (1-byte-read-heap-use-after-free)
#0 0x560036939241 in __asan_memcpy /src/llvm-project/compiler-rt/lib/asan/asan_interceptors_memintrinsics.cpp:63:3
#1 0x56003697d30b in EmitLiteral<false> /src/snappy/snappy.cc:661:5
In this example, the LLM correctly determines this is a bug in the fuzz target, rather than a bug in the project being fuzzed.
LLM Response: Crash is caused by bug in fuzz driver.
The fuzz driver creates a vector of `iovec` structures, each pointing to a chunk of memory owned by the `FuzzedDataProvider`. However, the `data_chunk` vector, which holds the actual data pointed to by `iov_base`, is deallocated at the end of each fuzzer iteration when it goes out of scope. This leaves the `iov_base` pointers dangling. When `snappy::CompressFromIOVec` tries to access the memory pointed to by these dangling pointers, a heap-use-after-free occurs.
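The fixed pattern looks roughly like the sketch below: the chunks backing each iov_base must outlive the call that reads them. (snappy::CompressFromIOVec here is paraphrased from recent snappy releases; treat the exact API as an assumption.)

#include <sys/uio.h>
#include <stdint.h>
#include <string>
#include <vector>
#include <fuzzer/FuzzedDataProvider.h>
#include "snappy.h"

extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
  FuzzedDataProvider provider(data, size);
  std::vector<iovec> iovecs;
  // Keep every chunk alive for as long as the iovecs reference it. The
  // original bug let each chunk go out of scope inside the loop, leaving
  // iov_base dangling when CompressFromIOVec later read it.
  std::vector<std::vector<uint8_t>> chunks;
  while (provider.remaining_bytes() > 0) {
    chunks.push_back(provider.ConsumeBytes<uint8_t>(
        provider.ConsumeIntegralInRange<size_t>(1, 64)));
    iovecs.push_back(iovec{chunks.back().data(), chunks.back().size()});
  }
  std::string compressed;
  snappy::CompressFromIOVec(iovecs.data(), iovecs.size(), &compressed);
  return 0;
}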
5. Fixing vulnerabilities.
The goal is to fully automate this entire workflow by having the LLM generate a suggested patch for the vulnerability. We don’t have anything we can share here today, but we’re collaborating with various researchers to make this a reality and look forward to sharing results soon.
Up next
Improving automated triaging: to get to a point where we’re confident about not requiring human review. This will help automatically report new vulnerabilities to project maintainers. There are likely more than the 26 vulnerabilities we’ve already reported upstream hiding in our results.
Agent-based architecture: which means letting the LLM autonomously plan out the steps to solve a particular problem by providing it with access to tools that enable it to get more information, as well as to check and validate results. By providing the LLM with interactive access to real tools such as debuggers, we've found that it is more likely to arrive at a correct result.
Integrating our research into OSS-Fuzz as a feature: to achieve a more fully automated end-to-end solution for vulnerability discovery and patching. We hope OSS-Fuzz will be useful for other researchers to evaluate AI-powered vulnerability discovery ideas and ultimately become a tool that will enable defenders to find more vulnerabilities before they get exploited.
For more information, check out our open source framework at oss-fuzz-gen. We’re hoping to continue to collaborate on this area with other researchers. Also, be sure to check out the OSS-Fuzz blog for more technical updates.
However, the transition to memory-safe languages will take multiple years as we adapt our development practices and infrastructure. Ensuring the safety of our billions of users therefore requires us to go further: we're also retrofitting secure-by-design principles to our existing C++ codebase wherever possible.
To that end, we're working towards bringing spatial memory safety into as many of our C++ codebases as possible, including Chrome and the monolithic codebase powering our services.
We’ve begun by enabling hardened libc++, which adds bounds checking to standard C++ data structures, eliminating a significant class of spatial safety bugs. While C++ will not become fully memory-safe, these improvements reduce risk as discussed in more detail in our perspective on memory safety, leading to more reliable and secure software.
This post explains how we're retrofitting hardened libc++ across our codebases and showcases the positive impact it's already having, including preventing exploits, reducing crashes, and improving code correctness.
Bounds-checked data structures: The foundation for spatial safety
One of our primary strategies for improving spatial safety in C++ is to implement bounds checking for common data structures, starting with hardening the C++ standard library (in our case, LLVM’s libc++). Hardened libc++, recently added by open source contributors, introduces a set of security checks designed to catch vulnerabilities such as out-of-bounds accesses in production.
For example, hardened libc++ ensures that every access to an element of a std::vector stays within its allocated bounds, preventing attempts to read or write beyond the valid memory region. Similarly, hardened libc++ checks that a std::optional isn't empty before allowing access, preventing access to uninitialized memory.
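As a concrete illustration, with hardening enabled an out-of-bounds access traps immediately instead of silently corrupting memory. The build flag below matches upstream libc++'s hardening modes, though the exact spelling may vary by toolchain version.

// Build with hardened libc++, for example:
//   clang++ -stdlib=libc++ -D_LIBCPP_HARDENING_MODE=_LIBCPP_HARDENING_MODE_EXTENSIVE oob.cc
#include <vector>

int main() {
  std::vector<int> v = {1, 2, 3};
  // Without hardening, this out-of-bounds operator[] is undefined behavior
  // and may silently read adjacent memory. With hardening, the bounds
  // check fails and the process traps immediately.
  return v[3];
}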
This approach mirrors what's already standard practice in many modern programming languages like Java, Python, Go, and Rust. They all incorporate bounds checking by default, recognizing its crucial role in preventing memory errors. C++ has been a notable exception, but efforts like hardened libc++ aim to close this gap in our infrastructure. It’s also worth noting that similar hardening is available in other C++ standard libraries, such as libstdc++.
Raising the security baseline across the board
Building on the successful deployment of hardened libc++ in Chrome in 2022, we've now made it default across our server-side production systems. This improves spatial memory safety across our services, including key performance-critical components of products like Search, Gmail, Drive, YouTube, and Maps. While a very small number of components remain opted out, we're actively working to reduce this and raise the bar for security across the board, even in applications with lower exploitation risk.
The performance impact of these changes was surprisingly low, despite Google's modern C++ codebase making heavy use of libc++. Hardening libc++ resulted in an average 0.30% performance impact across our services (yes, only a third of a percent).
This is due to both the compiler's ability to eliminate redundant checks during optimization, and the efficient design of hardened libc++. While a handful of performance-critical code paths still require targeted use of explicitly unsafe accesses, these instances are carefully reviewed for safety. Techniques like profile-guided optimizations further improved performance, but even without those advanced techniques, the overhead of bounds checking remains minimal.
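For instance, when a loop's induction variable is already bounded by the container's size, the optimizer can typically prove the per-access check redundant and remove it, as in this illustrative sketch:

#include <cstddef>
#include <vector>

int Sum(const std::vector<int>& v) {
  int total = 0;
  // The loop condition guarantees i < v.size(), so the compiler can
  // usually hoist or eliminate hardened libc++'s bounds check on v[i].
  for (std::size_t i = 0; i < v.size(); ++i) total += v[i];
  return total;
}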
We actively monitor the performance impact of these checks and work to minimize any unnecessary overhead. For instance, we identified and fixed an unnecessary check, which led to a 15% reduction in overhead (reduced from 0.35% to 0.3%), and contributed the fix back to the LLVM project to share the benefits with the broader C++ community.
While hardened libc++'s overhead is minimal for individual applications in most cases, deploying it at Google's scale required a substantial commitment of computing resources. This investment underscores our dedication to enhancing the safety and security of our products.
From tests to production
Enabling libc++ hardening wasn't a simple flip of a switch. Rather, it required a multi-stage rollout to avoid accidentally disrupting users or creating an outage:
Testing: We first enabled hardened libc++ in our tests over a year ago. This allowed us to identify and fix hundreds of previously undetected bugs in our code and tests.
Baking: We let the hardened runtime "bake" in our testing and pre-production environments, giving developers time to adapt and address any new issues that surfaced. We also conducted extensive performance evaluations, ensuring minimal impact to our users' experience.
Gradual Production Rollout: We then rolled out hardened libc++ to production over several months, starting with a small set of services and gradually expanding to our entire infrastructure. We closely monitored the rollout, promptly addressing any crashes or performance regressions.
Quantifiable impact
In just a few months since enabling hardened libc++ by default, we've already seen benefits.
Preventing exploits: Hardened libc++ has already disrupted an internal red team exercise and would have prevented another one that happened before we enabled hardening, demonstrating its effectiveness in thwarting exploits. The safety checks have uncovered over 1,000 bugs, and would prevent 1,000 to 2,000 new bugs yearly at our current rate of C++ development.
Improved reliability and correctness: The process of identifying and fixing bugs uncovered by hardened libc++ led to a 30% reduction in our baseline segmentation fault rate across production, indicating improved code reliability and quality. Beyond crashes, the checks also caught errors that would have otherwise manifested as unpredictable behavior or data corruption.
Moving average of segfaults across our fleet over time, before and after enablement.
Easier debugging: Hardened libc++ enabled us to identify and fix multiple bugs that had been lurking in our code for more than a decade. The checks transform many difficult-to-diagnose memory corruptions into immediate and easily debuggable errors, saving developers valuable time and effort.
Bridging the gap with memory-safe languages
While libc++ hardening provides immediate benefits by adding bounds checking to standard data structures, it's only one piece of the puzzle when it comes to spatial safety.
We're expanding bounds checking to other libraries and working to migrate our code to Safe Buffers, requiring all accesses to be bounds checked. For spatial safety, both hardened data structures, including their iterators, and Safe Buffers are necessary.
Beyond improving the safety of our C++, we're also focused on making it easier to interoperate with memory-safe languages. Migrating our C++ to Safe Buffers shrinks the gap between the languages, which simplifies interoperability and potentially even an eventual automated translation.
Building a safer C++ ecosystem
Hardened libc++ is a practical and effective way to enhance the safety, reliability, and debuggability of C++ code with minimal overhead. Given this, we strongly encourage organizations using C++ to enable their standard library's hardened mode universally by default.
At Google, enabling hardened libc++ is only the first step in our journey towards a spatially safe C++ codebase. By expanding bounds checking, migrating to Safe Buffers, and actively collaborating with the broader C++ community, we aim to create a future where spatial safety is the norm.
Acknowledgements
We’d like to thank Emilia Kasper, Chandler Carruth, Duygu Isler, Matthew Riley, and Jeff Vander Stoep for their helpful feedback. We also extend our thanks to the libc++ community for developing the hardening mode that made this work possible.
Real-time protection, built with your privacy in mind.
Real-time defense, right on your device: Scam Detection uses powerful on-device AI to notify you of a potential scam call happening in real-time by detecting conversation patterns commonly associated with scams. For example, if a caller claims to be from your bank and asks you to urgently transfer funds due to an alleged account breach, Scam Detection will process the call to determine whether the call is likely spam and, if so, can provide an audio and haptic alert and visual warning that the call may be a scam.
Private by design, you’re always in control: We’ve built Scam Detection to protect your privacy and ensure you’re always in control of your data. Scam Detection is off by default, and you can decide whether you want to activate it for future calls. At any time, you can turn it off for all calls in the Phone app Settings, or during a particular call. The AI detection model and processing are fully on-device, which means that no conversation audio or transcription is stored on the device, sent to Google servers or anywhere else, or retrievable after the call.
Cutting-edge AI protection, now on more Pixel phones: Gemini Nano, our advanced on-device AI model, powers Scam Detection on Pixel 9 series devices. As part of our commitment to bring powerful AI features to even more devices, this AI-powered protection is available to Pixel 6+ users thanks to other robust Google on-device machine learning models.
We’re now rolling out Scam Detection to English-speaking Phone by Google public beta users in the U.S. with a Pixel 6 or newer device.
To provide feedback on your experience, please click on Phone by Google App -> Menu -> Help & Feedback -> Send Feedback. We look forward to learning from this beta and your feedback, and we’ll share more about Scam Detection in the months ahead.
More real-time alerts to protect you from bad apps
Google Play Protect works non-stop to protect you in real-time from malware and unsafe apps. Play Protect analyzes behavioral signals related to the use of sensitive permissions and interactions with other apps and services.
With live threat detection, if a harmful app is found, you'll now receive a real-time alert, allowing you to take immediate action to protect your device. By looking at actual activity patterns of apps, live threat detection can now find malicious apps that try extra hard to hide their behavior or lie dormant for a time before engaging in suspicious activity.
At launch, live threat detection will focus on stalkerware, code that may collect personal or sensitive data for monitoring purposes without user consent, and we will explore expanding its detection to other types of harmful apps in the future. All of this protection happens on your device in a privacy preserving way through Private Compute Core, which allows us to protect users without collecting data.
Live threat detection with real-time alerts in Google Play Protect is now available on Pixel 6+ devices and will be coming to additional phone makers in the coming months.