The issue here is that if size_of_buffer is smaller than sizeof(uint32), the unsigned subtraction wraps around and the result is a huge value, which was then used as input to the following function:

static size_t ComputeSize(size_t num_results) {
  return sizeof(T) * num_results + sizeof(uint32);
}

This calculation then overflowed and made the result of this function zero, instead of a value at least equal to sizeof(uint32). Using this, Pinkie was able to write eight bytes of his choice past the end of his buffer. The buffer in this case is one of the GPU transfer buffers, which are mapped in both processes’ address spaces and used to transfer data between the Native Client and GPU processes. The Windows allocator places the buffers at relatively predictable locations, and the Native Client process can directly control their size as well as certain object allocation ordering. So, this afforded quite a bit of control over exactly where an overwrite would occur in the GPU process.
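To make the arithmetic concrete, here’s a minimal sketch of the two computations, assuming T is uint32 and a 64-bit size_t. The helper names mirror the snippets above, but this is a stand-alone illustration, not Chrome’s actual code:

#include <cstdint>
#include <cstdio>

// Stand-in for the subtraction described above, assuming T = uint32_t.
static size_t ComputeMaxResults(size_t size_of_buffer) {
  return (size_of_buffer - sizeof(uint32_t)) / sizeof(uint32_t);
}

// The function quoted above, with T = uint32_t.
static size_t ComputeSize(size_t num_results) {
  return sizeof(uint32_t) * num_results + sizeof(uint32_t);
}

int main() {
  // A buffer smaller than sizeof(uint32) wraps the subtraction:
  // (0 - 4) / 4 == (2^64 - 4) / 4 == 2^62 - 1.
  size_t num_results = ComputeMaxResults(0);

  // Feeding that huge count back in wraps again:
  // 4 * (2^62 - 1) + 4 == 2^64, which truncates to exactly 0.
  size_t size = ComputeSize(num_results);

  std::printf("num_results = %zu, size = %zu\n", num_results, size);  // size == 0
  return 0;
}

With the size check satisfied by the wrapped-to-zero result, the subsequent write of the “results” lands past the end of the buffer.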

The next thing Pinkie needed was a target that met two criteria: it had to be positioned within range of his overwrite, and the first eight bytes needed to be something worth changing. For this, he used the GPU buckets, which are another IPC primitive exposed from the GPU process to the Native Client process. The buckets are implemented as a tree structure, with the first eight bytes containing pointers to other nodes in the tree. By overwriting the first eight bytes of a bucket, Pinkie was able to point it to a fake tree structure he created in one of his transfer buffers. Using that fake tree, Pinkie could read and write arbitrary addresses in the GPU process. Combined with some predictable addresses in Windows, this allowed him to build a ROP chain and execute arbitrary code inside the GPU process.
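To see why those first eight bytes are such an attractive target, consider a hypothetical node layout. The field names below are invented for illustration (the real bucket structures are Chrome internals), but they capture the key point: the first eight bytes are a pointer that tree walks follow.

#include <cstdint>
#include <cstring>

// Hypothetical bucket node layout -- field names invented.
struct BucketNode {
  BucketNode* child;  // first 8 bytes: clobbered by an 8-byte overflow
  uint8_t* data;      // backing storage the bucket reads and writes
  size_t size;
};

int main() {
  // Simulate the predictable layout described above: a transfer buffer
  // sitting directly before a bucket node.
  struct Heap { uint8_t transfer[64]; BucketNode node; } heap = {};

  // The fake node lives inside the shared transfer buffer, so every one
  // of its fields -- including data and size -- is attacker-controlled.
  BucketNode* fake = reinterpret_cast<BucketNode*>(heap.transfer);

  // The eight-byte overwrite: writing just past the end of the transfer
  // buffer replaces node.child with a pointer to the fake node.
  std::memcpy(reinterpret_cast<uint8_t*>(&heap) + sizeof(heap.transfer),
              &fake, sizeof(fake));

  // Any tree walk that follows node.child now lands on the fake node and
  // dereferences its attacker-chosen data pointer.
  return heap.node.child == fake ? 0 : 1;
}

Once lookups traverse the fake tree, every read or write the bucket performs goes through pointers the attacker chose, which is exactly the arbitrary read/write primitive described above.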

The GPU process is still sandboxed well below a normal user, but it’s not as strongly sandboxed as the Native Client process or the HTML renderer. It has some rights, such as the ability to enumerate and connect to the named pipes used by Chrome’s IPC layer. Normally this wouldn’t be an issue, but Pinkie found that there’s a brief window after Chrome spawns a new renderer where the GPU process could see the renderer’s IPC channel and connect to it first, allowing the GPU process to impersonate the renderer (bug 117627).
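The race itself is straightforward to picture: whoever reaches the pipe first wins. The sketch below is a generic Win32 illustration rather than Chrome’s actual IPC code, and the pipe name is a made-up placeholder; it just shows the shape of the attack, which is to spin on the expected channel name and grab it before the legitimate renderer does:

#include <windows.h>
#include <stdio.h>

int main() {
  // Hypothetical channel name -- Chrome derives its IPC pipe names from
  // process-specific identifiers that the GPU process could observe.
  const char* kPipeName = "\\\\.\\pipe\\example.renderer.channel";

  // Spin until the browser creates the pipe for the new renderer, then
  // connect immediately, before the renderer itself gets the chance.
  HANDLE pipe = INVALID_HANDLE_VALUE;
  while (pipe == INVALID_HANDLE_VALUE) {
    pipe = CreateFileA(kPipeName, GENERIC_READ | GENERIC_WRITE, 0, NULL,
                       OPEN_EXISTING, 0, NULL);
    if (pipe == INVALID_HANDLE_VALUE) Sleep(0);  // yield and retry
  }

  // Whoever wins the race speaks the renderer's IPC protocol with
  // whatever privileges the browser granted that channel.
  printf("connected first -- now impersonating the renderer\n");
  CloseHandle(pipe);
  return 0;
}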

Even though Chrome’s renderers execute inside a stricter sandbox than the GPU process, there is a special class of renderers that have IPC interfaces with elevated permissions. These renderers are not supposed to be navigable by web content, and are used for things like extensions and settings pages. However, Pinkie found another bug (117417) that allowed an unprivileged renderer to trigger a navigation to one of these privileged renderers, and used it to launch the extension manager. So, all he had to do was jump on the extension manager’s IPC channel before it had a chance to connect.

Once he was impersonating the extension manager, Pinkie used two more bugs to finally break out of the sandbox. The first bug (117715) allowed him to specify a load path for an extension from the extension manager’s renderer, something only the browser should be allowed to do. The second bug (117736) was a failure to prompt for confirmation before installing an unpacked NPAPI plug-in extension. With these two bugs, Pinkie was able to install and run his own NPAPI plug-in that executed outside the sandbox at full user privilege.

So, that’s the long and impressive path Pinkie Pie took to crack Chrome. All the referenced bugs were fixed some time ago, but some are still restricted to ensure our users and Chromium embedders have a chance to update. However, we’ve included links so when we do make the bugs public, anyone can investigate in more detail.

In an upcoming post, we’ll explain the details of Sergey Glazunov’s exploit, which relied on roughly 10 distinct bugs. While these issues are already fixed in Chrome, some of them impact a much broader array of products from a range of companies. So, we won’t be posting that part until we’re comfortable that all affected products have had adequate time to push fixes to their users.


When executing JavaScript, V8 first compiles it to machine code with a very fast compiler that doesn't optimize the code it produces. V8 also has a second, optimizing compiler that generates much faster machine code but takes much more time to do so, so it has to be used selectively. That's why V8 tries to predict which functions will benefit most from optimization and carefully decides when to optimize them.

In the past, V8 paused once every millisecond to look at the currently running functions and eventually optimized them. For long-running programs this worked great, but short-running programs often finished before they could benefit much from the optimizing compiler -- a single millisecond can be a long time to wait before optimizing! In addition, because it relied on sampling, V8 often made different optimization decisions each time a JavaScript program ran, sometimes overlooking small but performance-critical functions.

The new version of V8 makes earlier and more repeatable optimization decisions by analyzing the running program in more detail. It uses counters to keep track of how often JavaScript functions are called and loops are executed, approximating the time spent inside each function. That way V8 can quickly gather fine-grained information about performance bottlenecks in a JavaScript program and make sure that the optimizing compiler's efforts are spent on the functions that deserve them most.
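As a rough model of the idea (not V8's actual implementation -- the threshold and loop weighting below are invented), a counter-based tier-up decision can be sketched like this: bump a per-function counter on every call and every loop back-edge, and queue the function for optimization once the counter crosses a budget:

#include <cstdint>
#include <cstdio>
#include <string>
#include <unordered_map>

// Simplified model of counter-based tier-up, not V8's real heuristics.
struct FunctionProfile {
  uint32_t counter = 0;  // calls plus weighted loop back-edges
  bool optimized = false;
};

class TierUpProfiler {
 public:
  void OnCall(const std::string& fn) { Bump(fn, 1); }
  void OnLoopBackEdge(const std::string& fn) { Bump(fn, 10); }  // loops count more

 private:
  static constexpr uint32_t kOptimizeThreshold = 1000;  // invented budget

  void Bump(const std::string& fn, uint32_t weight) {
    FunctionProfile& p = profiles_[fn];
    if (p.optimized) return;
    p.counter += weight;
    if (p.counter >= kOptimizeThreshold) {
      p.optimized = true;
      // In V8 this is where the function would be handed to the
      // optimizing compiler.
      std::printf("optimizing %s\n", fn.c_str());
    }
  }

  std::unordered_map<std::string, FunctionProfile> profiles_;
};

int main() {
  TierUpProfiler profiler;
  // A hot function with an inner loop crosses the budget quickly.
  for (int i = 0; i < 200; ++i) {
    profiler.OnCall("hotLoop");
    for (int j = 0; j < 5; ++j) profiler.OnLoopBackEdge("hotLoop");
  }
  return 0;
}

Because the counters are a deterministic property of the program's execution rather than of wall-clock sampling, the same program triggers the same optimization decisions on every run.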

The new algorithm is contained in the current beta channel releases -- go give it a spin now!