Back in 2000, sheets worked well because the smallest unit of user interaction with an application was a window. Soon after, though, things started to change. Web browsers in particular were among the first to start using tabs to put more than one document in a window. This caused a snag. A web page can require modal interaction from the user: picking a file, or supplying a username and password. Yet we don't want to prevent the user from switching to a different tab and continuing to interact with other websites. If the finest-grained modality control we have is per-window, how can we achieve that outcome?
Chromium's current answer comes from combining Cocoa's child window support with sheets to get tab-modal sheets:
While this looks like a normal sheet, you can switch between open tabs while the password request is up. You can't, however, interact with the web page.
The implementation, like all of the code used in Chromium, is open source, and can be found in the Google Toolbox for Mac, a collection of reusable components from the Mac developers at Google. The technical details of the GTMWindowSheetController can be found on the Google Mac blog. The other thing to note is that right now tab-modal sheets are only used for website authentication. The other sheets we use (for file selection, etc) are currently window-modal; we hope to convert them over soon.
The fate of tab-modal sheets, however, isn't certain. A way to enforce tab-modal interaction is certainly needed, but is attaching sheets to the tabs the right way to achieve that goal? At the last WWDC, I talked to some graphic designers who were opposed to the idea: "Reusing sheets in a context that isn't window modality will only confuse the user!" On the other hand, my position is that the concept of modality is the same, and the context is similar enough, that sheets will help users understand the modality in which they must interact.
So the story isn't over. Tab-modal sheets are our contribution to the ongoing discussion, an experiment to see what works and what doesn't. Together we can work out the best way to help users interact with their computers.
A malicious extension is an extension written by an ill-intentioned developer. For example, a malicious extension might record your passwords and send them back to a central server. The tricky part about defending against malicious extensions is that there are well-intentioned extensions that do exactly the same thing. Our defenses against malicious extensions focus on helping the user avoid installing malicious extensions in the first place:
We expect most users to install extensions from the gallery, where each extension has a reputation. We expect malicious extensions will have a low reputation and will have difficulty attracting many users. If a malicious extension is discovered in the gallery, we will remove it from the gallery.
For extensions installed from outside the gallery, the user experience is very similar to the experience of running a native executable. If an attacker can trick the user into installing a malicious extension, the attacker might as well trick the user into running a malicious executable. In this way, the extension system avoids increasing the attack surface.
To help protect against vulnerabilities in benign-but-buggy extensions, we employ the time-tested principles of least privilege and privilege separation. Each extension declares the privileges it needs in its manifest. If the extension is later compromised, the attacker will be limited to those privileges. For example, the Gmail Checker extension declares that it wishes to interact with Gmail. If the extension is somehow compromised, the attacker will not be granted the privilege to access your bank.
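As a rough sketch of what such a declaration looks like, here is a minimal manifest; the field layout follows the extension docs of the time, but the name and description are illustrative:

{
  "name": "Mail Checker",
  "version": "1.0",
  "description": "Displays the number of unread messages.",
  "permissions": [
    "https://mail.google.com/"
  ]
}

If this extension is compromised, the attacker can reach mail.google.com, but nothing else.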
To achieve privilege separation, each extension is divided into two pieces: a background page and content scripts. The background page has the lion's share of the extension's privileges but is isolated from direct contact with web pages. Content scripts can interact directly with web pages but are granted few additional privileges. Of course, the two can communicate, but dividing extensions into these components means that a vulnerability in a content script does not necessarily leak all the extension's privileges to the attacker.
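To make the split concrete, here is a hedged sketch of the two halves communicating. The helper function is hypothetical, and the messaging calls follow the early extension docs, so the exact API names may differ:

// content_script.js -- runs alongside the web page, few privileges.
// countUnread() is a hypothetical helper that inspects the page's DOM.
chrome.extension.sendRequest({unread: countUnread()});

// background page -- holds the extension's privileges, no DOM access.
chrome.extension.onRequest.addListener(
    function(request, sender, sendResponse) {
      // A compromised content script can only send messages like this
      // one; the background page decides what, if anything, to do.
      chrome.browserAction.setBadgeText({text: String(request.unread)});
      sendResponse({});
    });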
Finally, we utilize our multi-process architecture and sandboxing technology to provide strong isolation between web content, extensions, and the browser. Extensions run in a separate operating system process from the browser kernel and from web content, helping prevent malicious web sites from compromising extensions and malicious extensions from compromising the browser kernel. To facilitate rich interaction, content scripts run in-process with web content, but we run content scripts in an "isolated world" where they are protected from the page's JavaScript.
Of course, attackers will write malicious extensions and well-intentioned developers will write buggy extensions. The extension system improves security by making it easier for developers to write secure extensions. If you would like to learn more about the security of the extension system, you can watch our video or read our academic paper describing all the details.
We've done tech talks before, but this time we asked Chromium developers what they'd most like to hear about. Once we knew what was most in demand, we found experts on each subject and asked them to make a presentation. The talks were given before a live studio audience of Googlers last Friday with extra attention paid to creating high quality recordings. Now we're excited to make these widely available to all Chromium contributors!
The WebKit API
with Darin Fisher
Darin Fisher talks about the recently upstreamed Chromium WebKit API. The API is a critical step in our path to becoming completely integrated into the WebKit project. Like the other WebKit APIs, ours is a veneer which shields developers (including many of our own) from the internal details of WebKit (named WebCore). Darin talks at a high level about the API, dives into some code examples, and talks about the history and future of the API.
Layout Tests
with Pam Greene
Layout Tests are the tests we inherit from the WebKit project and are a very important part of Chromium's testing infrastructure. Pam Greene talks about what they are, how to run them, how to debug problems within them, and even touches on how to write your own. She also covers advanced (but easy-to-use) tools for rebaselining and tracking flakiness. Any Chromium developer who works on WebKit really should check this out!
Painting in Chromium
with Brett Wilson
Because of Chromium's multi-process architecture, painting within Chromium is far from typical. In this talk, Brett Wilson starts from Skia and the WebKit render tree, follows the bits across the process boundaries, and continues all the way to your screen. He also details many of the differences in painting between platforms, how things work in test shell, and interesting corner cases like resizing.
WebKit's Guts
with Eric Seidel
A large percentage of Chromium's code (and part of what makes it so fast) is WebKit. In this talk, Eric Seidel gives us a 30,000-foot view of how WebKit actually renders a page. He starts with how resources are loaded, explains how they're parsed into a DOM tree, and then talks about the various trees involved in rendering. In addition, he touches on many other important topics, like hit testing (figuring out what you're hovering over and clicking on). This is a must-see for anyone working on the guts of WebKit.
This is not ready for consumers yet — everything you see can and probably will change by the time Google Chrome OS-based devices are available late next year.
Please note that Google has not released an official binary for Chromium OS; if you download Chromium OS binaries, please ensure that you trust the site you are downloading them from.
While we will try our best to help you through the Chromium discussion forums, we will not be officially supporting any of these builds. Remember that you are downloading that specific site/developer's build of Chromium OS.
We have also received a number of questions that we wanted to answer directly and so we put together the following FAQ to clarify some of these issues.
One of the top questions has been about the distinction between Google Chrome OS and Chromium OS. Google Chrome OS is to Chromium OS what the Google Chrome browser is to Chromium. Chromium OS is the open source project, used primarily by developers, whose code anyone can check out, modify, and build. Google Chrome OS, meanwhile, is the Google product that OEMs will ship on netbooks next year. Therefore, dear developers who have built and posted Chromium OS binaries: you're awesome, and we appreciate what you are doing. However, we have to ask you to call the binaries you've put up for download "Chromium OS", not "Google Chrome OS".
See Getting a WebGL Implementation for instructions on getting a Chromium build and enabling WebGL support. This is an early version with many caveats, but with it you can get a taste of the new functionality coming to the web.
Shiny teapot demo illustrating compositing of 3D graphics with the web page
Particle system demo showing how to use the GPU to animate particles
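If you'd like to poke at the API directly, here is a minimal sketch of obtaining a rendering context. The context name has varied as the spec evolves, so this tries the known variants:

// Returns a WebGL context, or null if the build doesn't support one.
function getWebGLContext(canvas) {
  var names = ["webgl", "experimental-webgl", "webkit-3d"];
  for (var i = 0; i < names.length; i++) {
    try {
      var gl = canvas.getContext(names[i]);
      if (gl) {
        return gl;
      }
    } catch (e) {
      // Some builds throw instead of returning null; keep trying.
    }
  }
  return null;
}

var gl = getWebGLContext(document.getElementsByTagName("canvas")[0]);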
The WebGL wiki is the central location for information about the evolving specification, including the draft spec, introductory articles, tutorials, mailing lists and forums. See the WebGL demo repository for more demos and instructions on how to check out their source code.
We're looking forward to finalizing the WebGL specification and making this functionality available to web developers, and we welcome your feedback. For Chromium-specific questions, use the chromium-dev mailing list; for more general WebGL questions, use the WebGL forums or the WebGL public mailing list.
The Web Sockets API enables web applications to handle bidirectional communication with a server-side process in a straightforward way. Developers have been using XMLHttpRequest ("XHR") for such purposes, but XHR makes developing web applications that communicate back and forth with the server unnecessarily complex. XHR is basically asynchronous HTTP, and because you need a tricky technique like long-hanging GET to send data from the server to the browser, simple tasks rapidly become complex. Unlike XMLHttpRequest, Web Sockets provide a real bidirectional communication channel in your browser. Once you have a Web Socket connection, you can send data from browser to server by calling the send() method, and receive data from server to browser via the onmessage event handler. A simple example is included below.
if ("WebSocket" in window) { var ws = new WebSocket("ws://example.com/service"); ws.onopen = function() { // Web Socket is connected. You can send data by send() method. ws.send("message to send"); .... }; ws.onmessage = function (evt) { var received_msg = evt.data; ... }; ws.onclose = function() { // websocket is closed. }; } else { // the browser doesn't support WebSocket. }
In addition to the new Web Sockets API, there is also a new protocol (the "web socket protocol") that the browser uses to communicate with servers. The protocol is not raw TCP because it needs to provide the browser's "same-origin" security model. It's also not HTTP because web socket traffic differs from HTTP's request-response model. Web socket communications using the new web socket protocol should use less bandwidth because, unlike a series of XHRs and hanging GETs, no headers are exchanged once the single connection has been established. To use this new API and protocol and take advantage of the simpler programming model and more efficient network traffic, you do need a new server implementation to communicate with — but don't worry. We also developed pywebsocket, which can be used as an Apache extension module, or can even be run as a standalone server.
You can use Google Chrome and pywebsocket to start implementing Web Socket-enabled web applications now. We're more than happy to hear your feedback, not only on our implementation but also on the API and protocol design. The protocol has not been completely locked down and is still under discussion in the IETF, so we are especially grateful for any early adopter feedback.
Posted by Yuzo Fujishima (藤島 勇造), Fumitoshi Ukai (鵜飼 文敏), and Takeshi Yoshino (吉野 剛史), Software Engineers
In many cases, though, Google Chrome needs to keep pages from related tabs in the same process, since they may access each other's contents using JavaScript code. For example, clicking links that open in a new window will generally cause the new and old pages to share a process.
In practice, web developers may find situations where they would like links to other pages to open in a separate process. For example, it would be nice to isolate links from messages in your webmail client from the webmail client itself. This is now easy to achieve, thanks to new support in WebKit for HTML5's "noreferrer" link relation.
To cause a link to open in a separate process from your web page, just add rel="noreferrer" and target="_blank" as attributes to the <a> tag, and then point it at a URL on a different domain name. For example:
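<!-- example.com stands in for any URL on a different domain. -->
<a href="http://example.com/" rel="noreferrer" target="_blank">Open in a separate process</a>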
In this case, Google Chrome knows that the page will be opened in a new window, that no referrer information will be passed to the new page, and that the window.opener value will be null in the new page. As a result, the two pages cannot script each other, so Chrome can load them in separate processes. Google Chrome will still keep same-site pages in the same process, to allow them to share caches and minimize overhead.
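You can verify this from the opened page itself; a quick sketch to run in the new window:

// With rel="noreferrer", the new page cannot reach back to its opener:
alert(window.opener);       // null
alert(document.referrer);   // "" (empty)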
First, our tools benefited from improvements that the WebKit team made to Web Inspector (our developer tools are partially based on Web Inspector). Second, from our end, we recently released the heap profiler and the timeline tab in Google Chrome's Developer Channel.
With the heap profiler you can now take a snapshot of the JavaScript heap at any point in time. A heap snapshot helps you understand memory usage, and by comparing snapshots you can also follow memory usage over time. You will find the heap profiler in the profiles tab along with the sample-based CPU profiler.
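For example, here is a sketch (the function and its caller are hypothetical) of the kind of steady growth that comparing two snapshots makes obvious:

// Each call retains another object. A snapshot taken before and one
// taken after many calls will show the `retained` array growing.
var retained = [];
function onUpdate(data) {
  retained.push(data);  // probably meant to process `data` and drop it
}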
The new timeline view gives you a complete overview of where time is spent when loading a web app. All events, from loading resources through parsing and executing JavaScript to calculating styles and repainting, are plotted on a timeline.
Besides these product improvements, we've tried to make the Google Chrome Developer tools easier to find and understand by putting together a mini site with tutorials and videos.
To take our newest release for a spin, get Google Chrome from the Developer Channel and you'll automatically be brought up to date. We welcome your feedback and your contributions to improve developer tools in WebKit and Google Chrome even more.
Posted by Pavel Feldman, Software Engineer and Anders Sandholm, Product Manager
You can find all the info to write an extension in our docs. Once your extension is ready for the gallery, you'll need to upload a zip file of your code and an icon that helps users distinguish your extension. You'll also have the option to submit text, screenshots and/or YouTube videos that describe the functionality of your extension. All types of extensions are welcome in the gallery, provided they comply with our Terms of Service.
For most extensions, the review process is fully automated. The only extensions we'll review manually are those that include an NPAPI component and those with content scripts that affect "file://" URLs. For security reasons, developers of these types of extensions will need to provide some additional information before they can post them in the gallery.
Once an extension is uploaded, our gallery takes care of packaging and signing. Updating an extension is also incredibly easy: all a developer needs to do is upload a new file to the gallery. Finally, to further help developers, in the next few days we plan to open up the gallery to a small group of trusted testers. They will provide developers with insights and bug reports that will help them polish their extensions ahead of our beta launch.
We can't wait to share all the great extensions that you'll submit with all of Google Chrome's users. In the meantime, we encourage you to submit any bugs you find in the upload process to our Issue Tracker and to ask all relevant questions in our discussion group.
We are doing this early, almost a year before Google Chrome OS will be ready for users, because we are eager to engage with open source developers. There are many of you who share our passion for creating a new model of computing. Chromium OS makes it possible for any interested developer to contribute code, ideas and designs to help shape the future of personal computing.
Speed, simplicity and security are fundamental to Chrome OS. We wanted to talk about these areas in a bit more detail.
Speed
Simplicity
Security
Open Source
We expect to publish additional design docs and documentation in the upcoming few months. You can track what we're doing on this blog and we hope you will join us in this effort.
Posted by Glen Murphy, Martin Bligh, Will Drewry, Software Engineers
We started working on SPDY while exploring ways to optimize the way browsers and servers communicate. Today, web clients and servers speak HTTP. HTTP is an elegantly simple protocol that emerged as a web standard in 1996 after a series of experiments. HTTP has served the web incredibly well. We want to continue building on the web's tradition of experimentation and optimization, to further support the evolution of websites and browsers. So over the last few months, a few of us here at Google have been experimenting with new ways for web browsers and servers to speak to each other, resulting in a prototype web server and Google Chrome client with SPDY support.
So far we have only tested SPDY in lab conditions. The initial results are very encouraging: when we download the top 25 websites over simulated home network connections, we see a significant improvement in performance - pages loaded up to 55% faster. There is still a lot of work we need to do to evaluate the performance of SPDY in real-world conditions. However, we believe that we have reached the stage where our small team could benefit from the active participation, feedback and assistance of the web community.
For those of you who would like to learn more and hopefully contribute to our experiment, we invite you to review our early stage documentation, look at our current code and provide feedback through the Chromium Google Group.
Posted by Mike Belshe, Software Engineer and Roberto Peon, Software Engineer
We're building Google Chrome Frame to help web developers deliver faster, richer applications like Google Wave. Recent JavaScript performance improvements and the emergence of HTML5 have enabled web applications to do things that could previously only be done by desktop software. One challenge developers face in using these new technologies is that they are not yet supported by Internet Explorer. Developers can't afford to ignore IE — most people use some version of IE — so they end up spending lots of time implementing work-arounds or limiting the functionality of their apps.
With Google Chrome Frame, developers can now take advantage of the latest open web technologies, even in Internet Explorer. From a faster JavaScript engine, to support for current web technologies like HTML5's offline capabilities and <canvas>, to modern CSS/layout handling, Google Chrome Frame enables these features within IE with no additional coding or testing for different browser versions.
To start using Google Chrome Frame, all developers need to do is add a single tag to their pages:
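<!-- Opts the page in to Google Chrome Frame when the plug-in is installed. -->
<meta http-equiv="X-UA-Compatible" content="chrome=1">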
When Google Chrome Frame detects this tag it switches automatically to using Google Chrome's speedy WebKit-based rendering engine. It's that easy. For users, installing Google Chrome Frame will allow them to seamlessly enjoy modern web apps at blazing speeds, through the familiar interface of the version of IE that they are currently using.
We believe that Google Chrome Frame makes life easier for web developers as well as users. While this is still an early version intended for developers, our team invites you to try it out on your site. You can start by reading our documentation. Please share your feedback in our discussion group and file any bugs you find through the Chromium issue tracker.
Posted by Amit Joshi, Software Engineer, Alex Russell, Software Engineer and Mike Smith, Product Manager
The new FTP implementation was initially written by Ibrar Ahmed single-handedly. It was a long journey for him because he worked on it in his spare time. Ibrar has a master's degree in computer science from International Islamic University. After working as a software engineer and associate architect at other companies, he recently started his own tele-medicine company. We thank Ibrar for his contribution to the Chromium network stack!
Paweł Hajdan Jr. started to work on the new FTP code in July as one of his summer intern projects at Google. Paweł added new unit tests, fixed bugs and compatibility issues, and is taking the lead in bringing the new FTP code to production quality.
Finally, we used Mozilla code for parsing and formatting FTP directory listings (ParseFTPList.cpp), which was originally written by Cyrus Patel.
In the near term, the original WinInet-based FTP implementation will still be available as an option on Windows. Specify the --wininet-ftp command-line option to enable it. (The original --new-ftp option is now obsolete and ignored.) During this period we will fix FTP bugs only in the new FTP implementation. When we're happy with the quality of the new FTP code, we will remove the original WinInet-based implementation, finally eliminating our dependency on WinInet.
Please help us achieve that goal by testing FTP with a Dev channel release and filing bug reports. Follow these guidelines when reporting bugs:
Please don't add a comment like "Here is another URL that doesn't work for me" to a bug. Always open a new bug, and give a link to another bug if you think they are similar.
Make the steps to reproduce as detailed as possible, and always include the version number of Chrome.
Check if the problem can be reproduced with --wininet-ftp on Windows and include that information in the bug report.
To activate this feature, launch Google Chrome with the --enable-sync command-line flag. Once you set up sync from the Tools menu, Chrome will then upload and store your bookmarks in your Google Account. Anytime you add or change a bookmark, your changes will be sent to the cloud and immediately broadcast to all other computers for which you've activated bookmark sync (using the same XMPP technology as Google Talk).
For more information on this, please see this email to chromium-dev.
There are a couple of more accurate ways to measure memory utilization in Chromium (or Google Chrome). The easiest is to crack open the task manager built into Chromium, which tries to account for our memory usage more holistically. If you want even more detail, you can click on "Stats for nerds", which is a link to about:memory.
If you don't fully trust Chromium's task manager or about:memory, the gold standard for measuring memory usage is to look at the system's total commit charge before, during, and after using Chromium. It's a little tricky to get right because you'll need to shut down other services that may kick in while you are running your test. Here's the basic procedure:
Shut down any unnecessary services
Reboot your computer
Using the Windows Task Manager, measure the Total Commit Charge of the system*
Run the application you are seeking to test, in this case, Chromium
Measure the Total Commit Charge again
Close the application
Measure the Total Commit Charge one more time
Subtract your first measurement from your second, and you should have the memory used by Chromium
To validate your test, make sure that the first and last measurement are nearly identical
*On XP, Commit Charge shows up on the bottom of the Windows Task Manager. On Vista, look at the Performance tab of the Windows Task Manager and use the "Memory" number.
For more information on memory usage and how to measure it, check out the Memory Usage Backgrounder on chromium.org.
You can set breakpoints, inspect variables and evaluate expressions all from within Eclipse. The screenshot shows the debugger in action stopped at a breakpoint.
The project is fully open source under a BSD license and consists of two components: an SDK and a debugger. The SDK provides a Java API that enables communication with Google Chrome over TCP/IP. The debugger is an Eclipse plugin that uses the SDK and enables you to debug JavaScript running in Google Chrome from the Eclipse IDE.
We hope this project will help web app developers and welcome feedback as well as contributions.
If you're using extensions now, you should keep in mind that they are powerful software. Extensions integrate with your browser, so they can access and change everything that happens in it. For example, the same technology that enables an extension to periodically check the number of messages in your Gmail inbox could also be used to read all your personal mail and tweet it to your mom! This can happen because of malicious intent or simply because of a bug.
To help protect your experience when using extensions, we recently enabled auto-update for extensions on the dev channel release. As with Chrome's own auto-update mechanism, extensions will be updated using the Omaha protocol, giving developers the ability to push out bug fixes and new features rapidly to users of their extensions. This is an important step towards a v1 release of extensions for all users, so we're pretty excited.
In addition, when we turn the extension system on, we plan to offer a gallery with ratings and comments that you can use to judge whether you want to install a particular extension. We will also have processes in place that, combined with reports from users, should help limit the number of malicious extensions that get uploaded and distributed to users. These processes will include removal of extensions that we have reason to believe are malicious. Until these things are in place and the extension system is officially launched, we recommend that you only install extensions that you built yourself.
We have built Google Chrome to address multiple factors that affect browser security. One of the pillars of our approach is to keep the software up to date, so we push out updates to Google Chrome fairly regularly. On the stable channel these are mainly security bug fixes, but the updates are more adventurous and numerous on the developer channel.
It is anathema to us to push out a whole new 10MB update to deliver a ten-line security fix. We want smaller updates because they narrow the window of vulnerability. If the update is a tenth of the size, we can push ten times as many per unit of bandwidth. We have enough users that this means more users will be protected earlier. A secondary benefit is that a smaller update will work better for users who don't have great connectivity.
Rather than push out a whole new 10MB update, we send out a diff that takes the previous version of Google Chrome and generates the new version. We tried several binary diff algorithms and have been using bsdiff up until now. We are big fans of bsdiff: it is small and worked better than anything else we tried.
But bsdiff was still producing diffs that were bigger than we felt were necessary. So we wrote a new diff algorithm that knows more about the kind of data we are pushing - large files containing compiled executables. Here are the sizes for the recent 190.1->190.4 update on the developer channel:
Full update: 10,385,920 bytes
bsdiff update: 704,512 bytes
Courgette update: 78,848 bytes
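(That makes the Courgette patch roughly a ninth the size of the bsdiff patch, and about 1/130th the size of the full update.)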
The small size in combination with Google Chrome's silent update means we can update as often as necessary to keep users safe.
More information on how Courgette works can be found here.
Soon after the V8 project started we also began work on what would become the Sputnik tests. The goal was to create a test suite based directly on the language spec that checked the behavior of every object, function and individual algorithm in the language. The task was given to a team in Russia – hence the name "Sputnik" – which went about systematically producing tests. As the test suite grew we used it to ensure that V8 conformed to the spec and to detect unexpected changes in our behavior.
Now that the test suite is complete we're happy to be able to release it as an open source project, under the BSD license. We hope Sputnik can be as useful to other implementers of JavaScript as it has been to us, particularly at a time where implementations change rapidly.
The goal is not that all implementations should pass all tests. V8 set out with that intention and we learned the hard way that sometimes you have to be incompatible with the spec to be compatible with the web. Rather, we want Sputnik to be a tool for identifying differences between implementations.
One of the biggest challenges for web developers today is the many incompatibilities between browsers. Finding these differences is the first step towards removing them. In an ideal world web developers would not have to worry about which browser is being used to view their site and users would not have to worry about whether a site supported their browser. We hope the Sputnik tests will make the browser community take another step towards making that a reality.
Posted by Christian Plesner Hansen, Software Engineer
You can invoke the new developer tools by selecting "JavaScript console" from the Developer menu (or by pressing Ctrl+Shift+J). For example, running the statistical profiler on the V8 benchmark suite (screenshot below) gives exact information on the actual code execution, as the data is generated straight from running the optimized code in V8.
As with the rest of Google Chrome, the developer tools are open source and built upon WebKit, in particular WebKit's Inspector. We would love to get feedback, both bug reports and feature requests, on the Chromium public issue tracker. Or better yet, we would love contributions to improve the developer tools further in WebKit and Google Chrome.
First of all, we've set up a new discussion group for extension-related topics. Going forward, chromium-extensions will be your one-stop shop for extension development news, feedback and questions. If you're interested in developing extensions, we invite you to join us at chromium-extensions.
Second, as part of the latest dev channel release, we've had to make a breaking change to the crx format. This change adds signatures to our package format, which are necessary to enable automatic updates. Unfortunately, this means that any existing extensions will stop working, and will have to be repackaged.
If you've developed an extension, you can learn how to repackage your extensions for Chrome v 3.0.189.0 in the packaging doc on our developer site. Note that your extension ID will now be your public key, so you'll have to change any code that uses that.
If you're using an extension someone else has developed, you will have to reinstall it once the developer has repackaged it (as described above). We've already updated our sample extensions.
Even though the whole point of the dev channel is to make our APIs available early while they're still changing, we don't make these changes lightly. Once we push the extension system to the stable channel, breaking changes should be very rare (we'd like to say non-existent, but we don't want to jinx ourselves).
Open source projects aren't simply about a runnable binary, they're about the community of users, testers, and developers who devote their time and skills to working on a product they believe in. They go hand in hand: there's no binary without the community and there's no community without the binary. At some point in the life-cycle of a project, you have to stop thinking solely about your small band of developers and start growing the larger supporting community that will become your users, testers, localizers, documentation writers, and possibly even new coders.
In "The Cathedral and the Bazaar", Eric Raymond writes:
"When you start community-building, what you need to be able to present is a plausible promise. Your program doesn't have to work particularly well. It can be crude, buggy, incomplete, and poorly documented. What it must not fail to do is (a) run, and (b) convince potential co-developers that it can be evolved into something really neat in the foreseeable future."
We in the Chromium project feel like our Mac and Linux builds are at this stage, if not beyond it. They run pretty well and demonstrate the fundamental architecture that sets Chromium apart from other browsers. Sure, the bells and whistles aren't all there, but the core functionality of web browsing is. We feel that we've delivered on ESR's "plausible promise" and that it's enough to start attracting those who really want to help make this the best product it can be. We're not done yet, nor is it ready for the average user. It is, however, ready for those who want to live on the bleeding edge and help lend their talents towards completing it.
The community we build today is what will make it a better product down the road, and without that community the product will ultimately suffer. ESR describes testers as "a project's most valuable resource", and my first-hand experience with Camino and Mozilla bears this out. A web browser is a program that accepts an infinite number of inputs, and having people who can test web pages the developers wouldn't normally encounter is a tremendous aid. Testing on diverse hardware and software setups is also invaluable, as developers tend to only run the latest and greatest (and fastest!). Eventually we might uncover many of these issues on our own, but probably not.
Another pillar of open source, along with releasing early, is releasing often. To that end, the dev channel will automatically receive weekly updates as development continues. You will be able to see the product improving from week to week and help immediately identify when things break. Getting feedback on new features as soon as they are completed helps the developers know if they hit the mark and helps close the feedback loop with the community. The community benefits by being more involved and connected and promoting further transparency in the development process. This wouldn't be possible if we only teased users with releases at widely-spaced intervals when most decisions had been set in stone (end-users who want that can use the beta or release channels).
To answer these questions, it's helpful to know how Google Chrome releases are made, the relationship between "dev," "beta," and "stable" update channels, and how you can test new versions. In this post, we'll be expanding on Mark Larson's earlier explanation of the update channel system.
Stable channel. As Mark outlines, the Stable channel is, well, stable. As a web developer, that means that as long as the major version — the "2" in "version 2.0.181.1" — doesn't change, you can count on Stable channel builds to use the same versions of WebKit (CSS, layout, etc.), V8 (JavaScript), and other components that might affect how a page loads or renders. Stable updates between major version releases are generally focused on addressing security issues, fixing egregious bugs, and improving stability. The big developer-facing bits of the browser won't change on the Stable channel until the next major version is released, and you can always preview upcoming changes using the Beta channel.
Beta channel. As a web developer, being on the Beta channel will ensure that you can test your sites with the next version of Google Chrome's rendering behavior before it's sent to the Stable channel and into the hands of most users. Whenever a major version lands in the Beta channel, the versions of WebKit, V8, networking, and the other systems that affect how web pages load and render generally become fixed. These versions may change during the major version's beta cycle, but changes are usually incremental fixes to help stabilize a feature rather than changes in behavior. New versions of WebKit may be introduced during a beta period, but those versions are always accompanied by a new build number (e.g. 2.0.169.xx vs. 2.0.172.xx) and are unlikely to differ drastically. As this major version moves closer to a stable release, these kinds of changes become more and more infrequent. Since Google Chrome development moves so quickly, you should stay on the Beta channel to catch compatibility issues ahead of time.
Dev channel. The Dev channel is where the sausage gets made. Dev releases happen frequently, and they track what's happening upstream in WebKit, V8, and other relevant systems very closely. This means that changes that might affect rendering, performance, and layout are likely to occur on the Dev channel on a regular basis. We don't recommend that you install the Dev channel if you're looking to maintain site compatibility, since tracking breaking changes as they happen can be a major headache. You should be able to spot any problems early enough via the Beta channel.
Users are on the Stable channel by default. To get onto the Beta or Dev channel, follow these instructions. Once you change to a less stable channel, e.g. from Stable to Dev, there isn't a supported "downgrade" path. If you change from the less stable channel back to a more stable one, Google Chrome will simply stop updating until your new channel "catches up" with the installed build. To force an immediate downgrade, uninstall and reinstall using an appropriate installer. This may occasionally cause errors when your more stable (older) version tries to read the newer user data left over from the previous installation.
Once you have a copy of Google Chrome, you can test your site's compatibility. Google Doctype has a helpful FAQ on best practices for Google Chrome compatibility. In short, prefer object detection over userAgent string parsing; don't rely on pixel-accurate font and element sizes; declare your pages' encodings correctly; double check <object> and <embed> parameters; check for illegal markup; and avoid browser-specific CSS.
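As a quick sketch of the object-detection advice, with postMessage standing in for whatever feature you need:

// Test for the feature itself...
if (window.postMessage) {
  // use postMessage
} else {
  // fall back gracefully
}

// ...rather than guessing from the userAgent string, which breaks as
// soon as a new browser or a new version string appears:
if (navigator.userAgent.indexOf("Chrome") != -1) {
  // fragile!
}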
The Chromium and WebKit teams work hard to ensure compatibility with websites. If after reading the above you discover browser problems, please don't hesitate to file a bug. If a particular problem with your site occurs in both Google Chrome and a corresponding version of Safari, it may be due to a WebKit issue, which you can file in the WebKit bug tracker. If the problem only happens in Google Chrome, log an issue in the Chromium bug tracker.
Starting a sandbox involves a single call to sandbox_init() specifying which resources to block for a specific process. In our case we lock down the process pretty tightly. That means no network access, and very limited or no access to files and Mach ports.
When Chromium starts a renderer process, we open an IPC channel (a UNIX socketpair) back to the browser process before turning on the sandbox. Any resources a process owns before turning on the sandbox stay with the process, so this channel can still be used after the sandbox is enabled. When we want to pass a shared memory area between processes, we send it over as an mmaped file handle using the sendmsg() API. We don't need to do anything else, as Apple's sandbox API is smart enough to allow access to file descriptors passed between processes in this manner even if the receiving process itself is forbidden from calling open().
One sticky point we run into is that the sandboxed process calls through to OS X system APIs. There is no documentation about which privileges each API needs, such as whether it needs access to on-disk files or calls other APIs to which the sandbox restricts access. Our approach to date has been to "warm up" any problematic API calls before turning the sandbox on. This means that we call through to the API to allow it to cache whatever resource it needs. For example, color profiles and shared libraries can be loaded from disk before we "lock down" the process. To get a more complete understanding of our use of the sandbox on OS X, you can read the OS X sandboxing design doc.
As we continue the porting efforts for Chromium on the Mac, it's very satisfying to see the puzzle pieces fit into place alongside the native system APIs. It's important to us that the Mac port of Chromium feels and performs like a native Mac application, and that it provides the kind of high-quality experience Mac users expect.
The actions menu, visible in full-screen mode, will let you show speaker notes. We'll also post a video of the talk as soon as it's available.
As some of you know, it's already possible to write extensions using the latest developer build of Google Chrome. You can find out more about the system, and learn how to write your first extension, by reading our HOWTO document. We've really focused on making extensions as easy as possible to write, so you'll be up and running in no time.
We're still pretty early in the development of the extensions system, and we're constantly adding new features and tweaking the APIs based on your feedback. So if you try it out, we'd love to hear from you at [email protected].
Web applications are becoming more complex. With the increased complexity comes more JavaScript code and more objects. An increased number of objects puts additional stress on the memory management system of the JavaScript engine, which has to scale to deal efficiently with object allocation and reclamation. If engines do not scale to handle large object heaps, performance will suffer when running large web applications.
In browsers without a multi-process architecture, a simple way to see the effect of an increased working set on JavaScript performance is to log in to Gmail in one tab and run JavaScript benchmarks in another. The objects from the two tabs are allocated in the same object heap, so the benchmarks run with a working set that includes the Gmail objects.
V8's approach to scalability is to use generational garbage collection. The main observation behind generational garbage collection is that most objects either die very young or are long-lived. There is no need to examine long-lived objects on every garbage collection because they are likely to still be alive. Introducing generations to the garbage collector allows it to only consider newly allocated objects on most garbage collections.
Splay: A Scalability Benchmark
To keep track of how well V8 scales to large object heaps, we have added a new benchmark, Splay, to version 4 of the V8 benchmark suite. The Splay benchmark builds a large splay tree and modifies it by creating new nodes, adding them to the tree, and removing old ones. The benchmark is based on a JavaScript log processing module used by the V8 profiler and it effectively measures how fast the JavaScript engine can allocate nodes and reclaim unused memory. Because of the way splay trees work, the engine also has to deal with a lot of changes to the large tree.
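To give a feel for the workload, here is a much-simplified sketch (not the actual benchmark code) of the allocation pattern Splay exercises: a large live structure with constant churn of short-lived nodes:

// Keep roughly 100,000 nodes alive while continuously allocating new
// ones and letting old ones become garbage. Short-lived nodes can be
// reclaimed cheaply from the young generation, while the surviving
// tree is tenured and mostly ignored by minor collections.
var tree = {};
var keys = [];
var head = 0;
for (var i = 0; i < 1000000; i++) {
  var key = "node" + i;
  tree[key] = {left: null, right: null, value: [i, i + 1]};
  keys.push(key);
  if (keys.length - head > 100000) {
    delete tree[keys[head++]];  // the old node becomes unreachable
  }
}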
We have measured the impact of running the Splay benchmark with different splay tree sizes to test how well V8 performs when the working set is increased:
The graph shows that V8 scales well to large object heaps, and that increasing the working set by more than a factor of 7 leads to a performance drop of less than 17%. Even though 35 MB is more memory than most web applications use today, it is necessary to support such working sets to enable tomorrow's web applications.
Posted by Mads Ager and Kasper Lund, Software Engineers
Last Wednesday, 5 Chromium experts gave mini tech talks on subjects ranging from the network stack to hacking on WebKit. Armed with 2 video cameras, a microphone, and a whiteboard, we did the best we could to capture these talks and make them available to Chromium developers around the world. Whether you're a seasoned Chromium contributor or just getting started, I think these videos have a lot to offer.
Here's a rundown of the videos:
Darin Fisher talking about Chromium's multi-process architecture
Brett Wilson talking about the various layers of Chromium
O3D is still at an early stage and is not part of the Chromium code base. However, we hope that, combined with projects like Mozilla's Canvas 3D, it will encourage discussion within the graphics and web communities about a new open standard for 3D graphics on the web. With JavaScript (and browsers) becoming faster every day, we believe it is the right time for such a standard to emerge. To help you participate in this broader discussion, Google has created a forum where you can submit suggestions on the features a 3D API for the web should have.