The benchmarks that benefit the most from Crankshaft are Richards, DeltaBlue and Crypto. This shows that we have taken the performance of JavaScript property accesses, arithmetic operations, tight loops, and function calls to the next level. Overall, Crankshaft boosts V8’s performance by 50% on the V8 benchmark suite. This is the biggest performance improvement since we launched Chrome in 2008.
In addition to improving peak performance as measured by the V8 benchmark suite, Crankshaft also improves the start-up time of web applications such as Gmail. Our page cycler benchmarks show that Crankshaft improves the page load performance of Chrome by 12% for pages that contain significant amounts of JavaScript code.
Crankshaft uses adaptive compilation to improve both start-up time and peak performance. The idea is to heavily optimize code that is frequently executed and not waste time optimizing code that is not. Because of this, benchmarks that finish in just a few milliseconds, such as SunSpider, will show little improvement with Crankshaft. The more work an application does, the bigger the gains will be.
Crankshaft has four main components:
A base compiler which is used for all code initially. The base compiler generates code quickly without heavy optimizations. Compilation with the base compiler is twice as fast as with the V8 compiler in Chrome 9 and generates 30% less code.
A runtime profiler which monitors the running system and identifies hot code, i.e., code that we spend a significant amount of time running.
An optimizing compiler which recompiles and optimizes hot code identified by the runtime profiler. It uses static single assignment form to perform optimizations such as loop-invariant code motion, linear-scan register allocation and inlining. The optimization decisions are based on type information collected while running the code produced by the base compiler.
Deoptimization support which allows the optimizing compiler to be optimistic in the assumptions it makes when generating code. With deoptimization support, it is possible to bail out to the code generated by the base compiler if the assumptions in the optimized code turn out to be too optimistic.
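To make this concrete, here is a hedged sketch (illustrative JavaScript, not V8 internals; the function name and loop count are made up) of the kind of code that exercises this optimize-and-bail-out cycle:

// After many calls with integer arguments, the runtime profiler marks
// add() as hot, and the optimizing compiler recompiles it using the
// collected type feedback, assuming x and y are always small integers.
function add(x, y) {
  return x + y;
}

for (var i = 0; i < 100000; i++) {
  add(i, 1); // consistent integer calls keep the optimized code valid
}

add("foo", "bar"); // strings break the integer assumption, so V8 bails
                   // out to the code generated by the base compiler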
V8 with Crankshaft for the 32-bit Intel architecture is available today in the V8 bleeding edge repository and in canary builds of Chrome. Work on the ARM and 64-bit ports has started.
We are excited about the JavaScript speed improvements we are delivering with Crankshaft today. Crankshaft provides a great infrastructure for the next wave of JavaScript speed improvements in V8 and we will continue to push JavaScript performance to enable the next generation of web applications.
Posted by Kevin Millikin and Florian Schneider, Software Engineers
Today we have released version 6 of the V8 benchmark suite. The main changes are in the RegExp and Splay components of the benchmark suite. For reference, we describe each of the existing benchmarks in the suite below, along with any changes made in version 6.
RegExp: Regular expression benchmark generated by extracting regular expression operations from 50 of the most popular web pages. The regular expressions are exercised a number of times to reflect their popularity on those top 50 web pages. Changed in version 6: each regular expression is now exercised on a number of different input strings instead of just one.
Splay: Data manipulation benchmark that modifies a large splay tree to exercise the automatic memory management subsystem. The benchmark builds a large splay tree in a setup phase and then measures how fast nodes can be added and removed. Changed in version 6: no longer converts the same numeric key to string repeatedly and updates the splay tree in a way that increases the pressure on the memory management subsystem.
Richards: Operating system kernel simulation benchmark originally written in BCPL by Martin Richards. The Richards benchmark effectively measures how fast the JavaScript engine is at accessing object properties, calling functions, and dealing with polymorphism. It is a standard benchmark that has been successfully used to measure the performance of many modern programming language implementations.
DeltaBlue: One-way constraint solver, originally written in Smalltalk by John Maloney and Mario Wolczko. The DeltaBlue benchmark is written in an object-oriented style with a multi-level class hierarchy. As such, it measures how fast the JavaScript engine is at running well-structured applications with many objects and small functions. Changed in version 6: fixed a couple of typos that do not have any impact on the behavior of the benchmark.
Crypto: Encryption and decryption benchmark based on code by Tom Wu. The benchmark encrypts an input string, decrypts the result and verifies that encryption followed by decryption yields the original input. The encryption/decryption algorithm is RSA and the benchmark measures the performance of arithmetic operations on integers and array access.
RayTrace: Ray tracer benchmark based on code by Adam Burmister. The benchmark measures floating-point computations where the object structure is constructed using the Prototype JavaScript library. Changed in version 6: removed dead code that has no impact on the behavior of the benchmark.
EarleyBoyer: Classic Scheme benchmarks, translated to JavaScript by Florian Loitsch's Scheme2Js compiler. The benchmarks exercise important areas of the JavaScript engine such as object allocation, data structure manipulation, and garbage collection. The translated nature of the benchmarks makes them appear foreign, but their runtime characteristics are highly representative of many real-world web applications.
Curious to know how your browser performs? Give it a spin on the new version of the V8 benchmark suite.
The Google Chrome team is hitting the road. From now through October, we’re giving 21 talks about HTML5 and related Google Chrome topics at 16 events, in 16 cities and 9 countries, and on 4 continents. Phew!
Check out our schedule below. Registration for almost all these events is open, so come say hi and learn a thing or two about HTML5.
d = document.getElementById("c");
c = d.getContext("2d");
...
i = 25;
while (i--) {
  c.beginPath();
  ...
  // create hypotrochoid from current mouse position, and setup variables
  // (see: http://en.wikipedia.org/wiki/Hypotrochoid)
  q = (R / r - 1) * t;
  x = (R - r) * C(t) + D * C(q) + ...
  y = (R - r) * S(t) - D * S(q) + ...
  if (a) { // draw once two points are set
    c.moveTo(a, b);
    c.lineTo(x, y);
  }
  // draw rainbow hypotrochoid
  c.strokeStyle = "hsla(" + (U % 360) + ",100%,50%,0.75)";
  c.stroke();
  ...
}

*note: ellipses replace code
The images above were created with SVG and canvas, but as you can see from the code, they approach graphics in a very different way: SVG allows you to provide markup that describes an image, whereas canvas allows you to describe, in JavaScript, a set of sequential steps that draw an image. These approaches also mean that a developer modifies an already-drawn image, such as when animating it, in different ways. Because the browser keeps a full representation of an SVG image, changing just a parameter in the image is enough to cause the browser to redraw it correctly. With canvas, on the other hand, the developer must clear the image and specify all the steps to draw it again with the desired changes.
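As a hedged sketch of that difference (element IDs and coordinates are made up for illustration), moving a circle looks roughly like this in each model:

// SVG: the browser retains the scene, so updating one attribute suffices
document.getElementById("myCircle").setAttribute("cx", 150);

// Canvas: nothing is retained, so clear the bitmap and replay every step
var ctx = document.getElementById("myCanvas").getContext("2d");
ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
ctx.beginPath();
ctx.arc(150, 75, 40, 0, Math.PI * 2); // redraw the circle at its new position
ctx.fill();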
Today, modern browsers, including Firefox, Safari, Opera, and Google Chrome, support creating 2D graphics with these technologies, and Internet Explorer is adding support for them in the upcoming version 9 release.
CSS Transforms: easy to use 2D and 3D effects
Even today, people primarily use apps that don't strictly require advanced graphics, but eye candy like 3D transforms, transitions, and reflections still helps improve the experience of everyday tasks. While canvas could be used to create many of these effects, it can't render them efficiently, and the results would be hard to integrate with the other content on the page.
CSS transforms and animations, which first appeared in WebKit in 2007, allow developers to achieve commonly used effects easily by specifying parameters in CSS that are applied to content in the DOM. In 2009, WebKit began adding 3D CSS transforms and effects, which take flat content on the page and make it appear as if it were in 3D space.
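As a hedged sketch (using the vendor-prefixed property names WebKit uses today; the element ID is made up), a simple 3D flip can be applied from JavaScript like this:

var card = document.getElementById("card");
// Animate any change to the transform over half a second...
card.style.webkitTransition = "-webkit-transform 0.5s ease-in-out";
// ...then rotate the element in 3D space; with hardware compositing,
// this animation can run on the GPU without re-rendering the content
card.style.webkitTransform = "rotateY(180deg)";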
As you can see from this example, 3D CSS transforms and animations make it easy to add polished 3D effects to your app. Now that we support hardware compositing in Chromium, it's easy to perform these transforms on the GPU and display the results quickly on screen, so we've enabled them along with the compositor. Currently, this functionality is only available in Safari and Google Chrome, but Firefox is working on an implementation as well, making 3D CSS a great option for adding impressive effects to your app in the future.
WebGL: Low-level dynamic 3D
While 3D CSS makes it easy to display 2D content so that it looks like it’s in a 3D space, it’s not designed for writing true 3D applications like CAD software or modern games. WebGL, on the other hand, provides access to all the functionality of OpenGL ES 2.0 from JavaScript, and is designed with exactly these types of applications in mind.
With WebGL you can navigate 3D environments, rotate around objects with volume, add realistic lighting, and render shadows and reflections like those above. Creating a scene like this just wouldn’t be possible in real-time with 3D CSS, let alone a 2D canvas or SVG. To achieve these effects, you need direct access to graphics hardware – which is exactly what WebGL provides.
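Getting at that hardware starts with asking a canvas element for a WebGL context. A minimal hedged sketch (the context name has varied across pre-1.0 implementations, and the canvas ID is made up):

var canvas = document.getElementById("glCanvas");
// "experimental-webgl" is the context name common in current implementations
var gl = canvas.getContext("experimental-webgl");
if (gl) {
  gl.clearColor(0.0, 0.0, 0.0, 1.0); // opaque black
  gl.clear(gl.COLOR_BUFFER_BIT);     // clear the color buffer
}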
With power comes complexity, so there is definitely a learning curve to using WebGL. The good news is that because it's based on OpenGL ES 2.0, it should be familiar to people with graphics programming experience. A number of JavaScript libraries that make WebGL more accessible are already available; for instance, the examples above use two frameworks: SpiderGL and the WebGL implementation of O3D. As the technology matures, expect other tools and libraries to emerge that make it even easier to author content. Learning WebGL, a popular blog in the community, has done a great job of keeping up with the latest libraries, tools, and demos, and has a substantial archive of WebGL resources.
Mozilla, Apple, Opera, and Google are all putting the finishing touches on the WebGL spec in a Khronos working group, and the spec is expected to hit v1.0 by the end of the year. In the meantime, we've turned WebGL on by default to get early feedback on Chromium's implementation.
Thanks for reading to the end; we hope this helps explain the current state of graphics on the web!
Recently, Flock released a new beta version of their browser built on top of the Chromium codebase. For those of us in the Chromium project, this is extremely exciting and encouraging. We believe that users having a choice between multiple browsers is a great thing, as it spurs innovation and competition, and lets users choose a browser that provides the best experience for them. Flock brings an innovative approach to their "social web browser," and we are glad to welcome them into the Chromium community. As part of that, we wanted to offer the team behind Flock an opportunity to talk about the ideas behind Flock, how Chromium helped them in achieving their goals, and their vision for the future. What follows is a perspective from Clayton Stark, VP Engineering at Flock.
When Flock began developing its first web browser five years ago, "the social web" was a small, niche market. Today social is the mainstream web, and this evolution in the market drove our development roadmap. With the new Flock browser, our engineering team focused on designing a straightforward and integrated social dashboard that delivers an experience simple enough for a mass audience. This is where the technology behind Chromium came into the picture for Flock. As Chromium emerged, we saw not only significant improvements in performance, but also a simple and elegant user interface and architecture across all the various systems.
A core goal of the new Flock is to keep our users in touch with all of their friends and feeds with a minimum of configuration, and at the same time, to make it fun and simple. With all of the users' feeds and social activity streams flowing into the scrolling sidebar, we knew the performance had to be first-rate, and that techniques we used for earlier versions of Flock were unlikely to perform at scale. With Chromium under the hood, we were able to leverage web workers, and that, combined with the raw horsepower of V8, allowed us to scale the sidebar to manage very large data sets (in the first few weeks after the beta launched, we saw a few hundred million activities flowing into Flock's sidebar). Most importantly, benchmark testing shows us that the new Flock with Chromium performs in the top tier of all browsers available in the market.
Clearly the web is evolving very quickly, and we are seeing more and more people discovering content through their friends. The Flock team is energized by the big developments coming fast in this emerging, interest-graph-enabled web, and we have a roadmap in front of us that we are really excited about. The browsing platform needs to continue to mature at a rapid pace to support the dramatic changes in online user behavior. And, as it does, we already see the performance and power in Chromium that we need to allow us to focus on the innovations we want to bring forward, on top of the platform.
So, I’d like to send out a huge thanks on behalf of the Flock team to all those who have contributed to the Chromium project. Your work has made our project possible, and made new Flock our best release ever.
Now in Google Chrome Beta, developers can do just that. The new context menu API allows extension developers to register menu items for all pages or for a subset of pages. Developers can also register menu items for specific operations, like right-clicking on an image or movie. For example, you could create an extension that makes it easy for users to share interesting images from images.google.com with their friends on Google Buzz.
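A minimal hedged sketch of that extension's background page (the menu title and sharing helper are illustrative; only the chrome.contextMenus call is part of the API):

// Register a menu item that appears only when right-clicking an image
chrome.contextMenus.create({
  title: "Share this image on Buzz",
  contexts: ["image"],
  onclick: function(info, tab) {
    // info.srcUrl is the URL of the image that was right-clicked
    shareOnBuzz(info.srcUrl); // hypothetical helper, not part of the API
  }
});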
Some users have lots of extensions installed. To help these users avoid ending up with gigantic unwieldy context menus, Google Chrome automatically groups multiple menu items from the same extension into a sub-menu.
We’d also like to announce two new experimental APIs. These APIs aren’t quite ready for prime-time yet, but we’re really excited about them and couldn’t wait to get your feedback.
The omnibox API allows extension developers to integrate with the browser’s omnibox. With this API, you can build custom search support for your favorite website, keyboard macros to automate tasks, or even a chat client right into the omnibox.
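A hedged sketch of a custom search built on it (this assumes the API's experimental home under chrome.experimental.omnibox and an omnibox keyword declared in the manifest; names may change before it stabilizes, and the URL is made up):

// When the user types the extension's keyword plus a query and hits enter,
// open a search results page for whatever they typed
chrome.experimental.omnibox.onInputEntered.addListener(function(text) {
  chrome.tabs.create({
    url: "http://www.example.com/search?q=" + encodeURIComponent(text)
  });
});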
The infobars API allows extension developers to display infobars across the top of a tab. These infobars are built using normal HTML, so they can be heavily customized and interactive.
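A hedged sketch (assuming the experimental API's show() call, which takes a tab ID and an HTML page bundled with the extension; infobar.html is a made-up file name):

// Display an HTML infobar across the top of the currently selected tab
chrome.tabs.getSelected(null, function(tab) {
  chrome.experimental.infobars.show({
    tabId: tab.id,
    path: "infobar.html" // any page packaged with the extension
  });
});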
For the complete list of new extension APIs in Google Chrome Beta, see the docs. And let us know if you make something cool. If we like it, we'll send you a free extensions hoodie and may even feature you in the gallery.
If you want not only to get up to speed, but also to understand the browser differences and techniques for a robust implementation, please take a look through the new guides for implementing HTML5 video, understanding "offline," auditing your webapp with the Chrome developer tools, and using web workers and @font-face. You can now comment about your experiences with these features and stay up to date on new content via our new RSS feed.
We're also sharing the new HTML5 Studio, a collection of samples of these features in use, with code you can learn from and hack on.
If you'd like to contribute code, guides, or samples, please get in touch on the bug tracker or on @ChromiumDev. We'd love to incorporate your work.
Posted by Paul Irish, Google Chrome Developer Relations
Measuring web page load time is a notoriously tricky but important endeavor. One of the most common challenges is simply getting a true start time. Historically, the earliest a web page could reliably begin measurement was when the browser began to parse the HTML document (by marking a start time in a <script> block at the top of the document).
Unfortunately, that is too late to include a significant portion of the time web surfers spend waiting for the page: much of it is spent fetching the page from the web server. To address this shortcoming, some clever web developers work around the problem by storing the navigation start time in a cookie during the previous page's onbeforeunload handler. However, this doesn't work for the critical first page load, which likely has a cold cache.
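That workaround, sketched here with a made-up cookie name, looks roughly like this:

// On the previous page: stash the time just before navigating away
window.onbeforeunload = function() {
  document.cookie = "navStart=" + new Date().getTime() + "; path=/";
};

// On the next page, in a <script> block at the top of the document:
var match = document.cookie.match(/(?:^|; )navStart=(\d+)/);
var navStart = match ? parseInt(match[1], 10) : null; // null on a cold first visit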
Web Timing now gives developers the ability to measure the true page load time by including the time to request, generate, and receive the HTML document. The timeline below illustrates the metrics it provides. The vertical line labeled "Legacy navigation started" is the earliest time a web page can reliably measure without Web Timing. In this case, instead of a misleading 80ms load time, it is now possible to see that the user actually experienced a 274ms load time. Including this missing phase will make your measurements appear to increase. That's not because pages are getting slower; we're just getting a better view of where the time is actually being spent.
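A hedged sketch of reading these metrics (attribute names follow the draft specification and may shift before it is finalized; run this after the load event so loadEventEnd is populated):

// Fall back across the vendor prefixes mentioned below
var perf = window.webkitPerformance || window.msPerformance ||
           window.mozPerformance || window.performance;
if (perf && perf.timing) {
  var t = perf.timing;
  // True page load time: from the start of navigation (including the
  // server fetch) to the end of the load event
  var totalMs = t.loadEventEnd - t.navigationStart;
}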
Across other browsers: Web Timing metrics are under window.msPerformance in the third platform preview of Internet Explorer 9 and work is underway to add window.mozPerformance to Firefox. The specification is still being finalized, so expect slight changes before the browser prefixes are dropped. If you’re running a supported browser, please try the Web Timing demonstration and send us feedback.