Measuring memory

If you're measuring memory in a multi-process application like Google Chrome, don't forget to take shared memory into account. If you add up the size of each process as reported by the Windows XP Task Manager, you'll be counting the shared memory once for every process. With a large number of processes, this double-counting can inflate the apparent memory usage by 30-40%.
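
To see why this happens, here is a tiny illustrative sketch (the process count and all the numbers below are made up, not measurements of any real browser): each per-process figure includes the shared pages, so a naive sum counts those pages once per process, even though they occupy physical memory only once.

    #include <cstdio>

    int main() {
      // Hypothetical per-process numbers, in MB.
      const double private_mb[] = {25.0, 20.0, 18.0, 12.0};  // unique to each process
      const double shared_mb = 10.0;                         // mapped into all four processes

      double naive_sum = 0.0;
      for (double p : private_mb)
        naive_sum += p + shared_mb;   // task-manager style: shared memory counted per process

      double actual = shared_mb;      // shared pages occupy physical memory only once
      for (double p : private_mb)
        actual += p;

      std::printf("naive sum:        %.0f MB\n", naive_sum);  // 115 MB
      std::printf("actual footprint: %.0f MB\n", actual);     // 85 MB
      return 0;
    }

With these hypothetical numbers, the naive sum is 115 MB against an actual footprint of 85 MB, an overstatement of roughly 35%.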

To make it easy to summarize multi-process memory usage, Google Chrome provides the "about:memory" page which includes a detailed breakdown of Google Chrome's memory usage and also provides basic comparisons to other browsers that are running.

Multi-process Model Disadvantages

While the multi-process model provides clear robustness and performance benefits, it also makes it harder to use the absolute smallest amount of memory. Since each tab is its own "sandboxed" process, tabs cannot easily share information. Any data structures needed for general rendering of web pages must be replicated in each tab's process. We've done our best to minimize this, but we have a lot more work to do.

Example: Try opening the browser with 10 different sites in 10 tabs. You will probably notice that Google Chrome uses significantly more memory than single-process browsers do for this case.

Keep in mind that we believe this is a good trade-off. For example, each tab has its own JavaScript engine. Thanks to process separation, an attack that compromises one tab's JavaScript engine is much less likely to gain access to another tab (which may contain banking information). Operating system vendors learned long ago that there are many benefits to not loading all applications into a single process space, even though multiple processes do incur overhead.

Multi-process Model Advantages

Despite this overhead, the multi-process model has advantages too. The primary one is the ability to partition memory by page: when you close a page (tab), its entire partition of memory can be completely cleaned up and returned to the system. This is much more difficult to do in a single-process browser.

To demonstrate, let's expand on the example above. With 10 tabs open in both a single-process browser and Google Chrome, try closing 9 of them and check the memory usage. Hopefully, this will show that Google Chrome can generally reclaim more memory than a single-process browser can. We hope this matches typical user behavior: many sites are visited over the course of a day, and when the user leaves a site, we want to clean up everything associated with it.

You can find even more details in the design doc on our Chromium developer website.


How DNS prefetching works, and how much it helps

First off, DNS Resolution is the translation of a domain name, such as www.google.com, into an IP address, such as 74.125.19.147. A user can't go anywhere on the internet until after the target domain is resolved via DNS.
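
For readers curious about what a resolution actually involves, here is a minimal, Chrome-independent sketch that times one blocking lookup using the POSIX getaddrinfo call and prints the first address returned. The host name is just an example, and on a machine whose resolver has already cached the name the reported time will be close to zero, which is exactly the effect prefetching aims for.

    #include <arpa/inet.h>
    #include <netdb.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    #include <chrono>
    #include <cstdio>

    int main() {
      addrinfo hints{};
      hints.ai_family = AF_UNSPEC;       // IPv4 or IPv6
      hints.ai_socktype = SOCK_STREAM;

      addrinfo* result = nullptr;
      auto start = std::chrono::steady_clock::now();
      int err = getaddrinfo("www.google.com", nullptr, &hints, &result);
      auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                    std::chrono::steady_clock::now() - start).count();

      if (err != 0) {
        std::fprintf(stderr, "resolution failed: %s\n", gai_strerror(err));
        return 1;
      }

      // Print the first address the resolver returned.
      char ip[INET6_ADDRSTRLEN];
      void* addr = (result->ai_family == AF_INET)
          ? static_cast<void*>(&reinterpret_cast<sockaddr_in*>(result->ai_addr)->sin_addr)
          : static_cast<void*>(&reinterpret_cast<sockaddr_in6*>(result->ai_addr)->sin6_addr);
      inet_ntop(result->ai_family, addr, ip, sizeof(ip));
      std::printf("www.google.com -> %s (%lld ms)\n", ip, static_cast<long long>(ms));

      freeaddrinfo(result);
      return 0;
    }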

The histograms at the end of this post show actual resolution times encountered when computers needed to contact their network for DNS resolutions. The data was gathered during our pre-release testing by Google employees who opted-in to contributing their results. As can be seen in that data, the average latency was generally around 250ms, and many resolutions took over 1 second, some even several seconds.

DNS prefetching just resolves domain names before a user tries to navigate, so that there will be no effective user delay due to DNS resolution. The most obvious example where prefetching can help is when a user is looking at a page with many links to various unexplored domains, such as a search results page. Google Chrome automatically scans the content of each rendered page looking for links, extracting the domain name from each link, and resolving each domain to an IP address. All this work is done in parallel with the user's reading of the page, hardly using any CPU power. When a user clicks on any of these pre-resolved names to visit a new domain, they will save an average of over 250ms in their navigation.  
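
As a rough sketch of that scanning-and-resolving idea (deliberately simplified, and not Chromium's actual code, which parses the page as it renders and is careful about how many resolutions it issues at once rather than spawning a thread per host), the following pulls host names out of a page's href attributes and resolves each one in the background purely to warm the resolver cache:

    #include <netdb.h>
    #include <sys/socket.h>

    #include <regex>
    #include <set>
    #include <string>
    #include <thread>
    #include <vector>

    // Extract host names from href="http://host/..." style links.
    std::set<std::string> ExtractHosts(const std::string& html) {
      std::set<std::string> hosts;
      const std::regex href_re(R"(href="https?://([^/"]+))");
      for (std::sregex_iterator it(html.begin(), html.end(), href_re), end; it != end; ++it)
        hosts.insert((*it)[1].str());
      return hosts;
    }

    // Resolve a host purely for the caching side effect; the result is discarded.
    void WarmUp(const std::string& host) {
      addrinfo hints{};
      hints.ai_socktype = SOCK_STREAM;
      addrinfo* result = nullptr;
      if (getaddrinfo(host.c_str(), nullptr, &hints, &result) == 0)
        freeaddrinfo(result);
    }

    // Resolve every linked host in parallel while the user reads the page.
    void PrefetchDns(const std::string& html) {
      std::vector<std::thread> workers;
      for (const std::string& host : ExtractHosts(html))
        workers.emplace_back(WarmUp, host);
      for (std::thread& t : workers)
        t.join();
    }

    int main() {
      PrefetchDns("<a href=\"http://www.example.com/\">one</a>"
                  " <a href=\"https://news.example.org/story\">two</a>");
      return 0;
    }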

If you've been running Google Chrome for a while, be sure to try typing "about:dns" into the address bar to see what savings you've accrued! Humorously, this prefetching feature often goes unnoticed: users simply avoid the pain of waiting and tend to think the network is just fast and smooth. To look at it another way, DNS prefetching removes the variance in surfing latency that DNS resolutions would otherwise introduce. (Note: if about:dns doesn't show any savings, you are probably using a proxy, which resolves DNS on behalf of your browser.)

There are several other benefits that Google Chrome derives from DNS prefetching. During startup, it pre-resolves domain names, such as your home pages, very early in the startup process, which tends to save about 200-500 ms per launch. Google Chrome also pre-resolves the host names in URLs suggested by the omnibox while the user is typing, before they press enter. This feature works independently of the broader omnibox logic and doesn't use any connection to Google. As a result, Google Chrome will generally navigate to a typed URL faster, or reach a user's search provider faster. Depending on the popularity of the target domain, this can save 100-250ms on average, and much more in the worst case.
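
To make the omnibox case concrete, here is a hedged toy version under the simplifying assumption that whatever has been typed so far already resembles a URL (GuessHost and PrefetchWhileTyping are invented names for this sketch, not Chrome functions): strip the scheme and path to get a candidate host name and resolve it on a background thread before the user presses enter.

    #include <netdb.h>

    #include <chrono>
    #include <string>
    #include <thread>

    // Very rough host extraction from partially typed text, e.g.
    // "http://www.example.com/some/page" -> "www.example.com".
    // Real omnibox logic is far smarter than this.
    std::string GuessHost(std::string text) {
      size_t scheme = text.find("://");
      if (scheme != std::string::npos)
        text.erase(0, scheme + 3);
      size_t slash = text.find('/');
      if (slash != std::string::npos)
        text.erase(slash);
      return text;
    }

    // Fire-and-forget resolution: only the warm resolver cache matters.
    void PrefetchWhileTyping(const std::string& typed) {
      std::thread([host = GuessHost(typed)] {
        addrinfo hints{};
        hints.ai_socktype = SOCK_STREAM;
        addrinfo* result = nullptr;
        if (getaddrinfo(host.c_str(), nullptr, &hints, &result) == 0)
          freeaddrinfo(result);
      }).detach();
    }

    int main() {
      PrefetchWhileTyping("http://www.example.com/some/page");
      // Give the background lookup a moment before this demo exits.
      std::this_thread::sleep_for(std::chrono::seconds(2));
      return 0;
    }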

If you are running Google Chrome, try typing "about:histograms/DNS.PrefetchFoundName" into the address bar to see details of the resolution times currently being encountered on your machine.

The bottom line to all this DNS prefetching is that Google Chrome works overtime, anticipating a user's needs and making sure they have a very smooth surfing experience. Google Chrome doesn't just render and run JavaScript at a remarkable speed; it gets users to their destinations quickly and generally sidesteps the pitfalls surrounding DNS resolution time.

Of course, the best way to see this DNS prefetching feature work is to just surf.

Sample of DNS Resolution Times Requiring Network Activity (i.e., resolutions over 15 ms)

The following is a recent histogram of aggregated DNS resolution times observed during tests of Google Chrome by Googlers, prior to the product's public release. The samples listed are only those that required network access (i.e., took more than 15 ms). The left column lists the lower bound of each bucket.  For example, the first bucket lists samples between 14 and 18ms inclusive. The next three columns show the number of samples in that range, the fraction of samples in the range, and the cumulative fraction of samples at or below that range. For example, the first bucket contains 31,761 samples, or about 0.51% of the 6,228,600 samples shown. Looking at the cumulative percentage column (far right), we can see that the median resolution took around 90ms (52.71% took less than 118ms, while 43.63% took less than 87ms). Reading from the top of the chart, the average DNS resolution time was 271ms, and the standard deviation was 1.13 seconds. The "long tail" may have included users that lost network connectivity and eventually reconnected, producing extraordinarily long resolution times.


Count: 6,228,600; Sum of times: 1,689,207,135 ms; Mean: 271 ms ± 1,130.67 ms



1. Why use multiple processes in a browser?

In the days when most current browsers were designed, web pages were simple and had little or no active code in them.  It made sense for the browser to render all the pages you visited in the same process, to keep resource usage low.

Today, however, we've seen a major shift towards active web content, ranging from pages with lots of JavaScript and Flash to full-blown "web apps" like Gmail.  Large parts of these apps run inside the browser, just like normal applications run on an operating system.  Just like an operating system, the browser must keep these apps separate from each other.

On top of this, the parts of the browser that render HTML, JavaScript, and CSS have become extraordinarily complex over time.  These rendering engines frequently have bugs as they continue to evolve, and some of these bugs may cause the rendering engine to occasionally crash.  Also, rendering engines routinely face untrusted and even malicious code from the web, which may try to exploit these bugs to install malware on your computer.

In this world, browsers that put everything in one process face real challenges for robustness, responsiveness, and security.  If one web app causes a crash in the rendering engine, it will take the rest of the browser with it, including any other web apps that are open.  Web apps often have to compete with each other for CPU time on a single thread, sometimes causing the entire browser to become unresponsive.  Security is also a concern, because a web page that exploits a vulnerability in the rendering engine can often take over your entire computer.

It doesn't have to be this way, though.  Web apps are designed to be run independently of each other in your browser, and they could be run in parallel.  They don't need much access to your disk or devices, either.  The security policy used throughout the web ensures this, so that you can visit most web pages without worrying about your data or your computer's safety.  This means that it's possible to more completely isolate web apps from each other in the browser without breaking them.  The same is true of browser plug-ins like Flash, which are loosely coupled with the browser and can be separated from it without much trouble.

Google Chrome takes advantage of these properties and puts web apps and plug-ins in separate processes from the browser itself.  This means that a rendering engine crash in one web app won't affect the browser or other web apps.  It means the OS can run web apps in parallel to increase their responsiveness, and it means the browser itself won't lock up if a particular web app or plug-in stops responding.  It also means we can run the rendering engine processes in a restrictive sandbox that helps limit the damage if an exploit does occur.

Interestingly, using multiple processes means Google Chrome can have its own Task Manager, which you can get to by right-clicking on the browser's title bar.  This Task Manager lets you track resource usage for each web app and plug-in, rather than for the entire browser.  It also lets you kill any web apps or plug-ins that have stopped responding, without having to restart the entire browser.


For all of these reasons, Google Chrome's multi-process architecture can help it be more robust, responsive, and secure than single-process browsers.

2. What goes in each process?

Google Chrome creates three different types of processes: browser, renderers, and plug-ins.

Browser.  There's only one browser process, which manages the tabs, windows, and "chrome" of the browser.  This process also handles all interactions with the disk, network, user input, and display, but it makes no attempt to parse or render any content from the web.

Renderers.  The browser process creates many renderer processes, each responsible for rendering web pages.  The renderer processes contain all the complex logic for handling HTML, JavaScript, CSS, images, and so on.  We achieve this using the open source WebKit rendering engine, which is also used by Apple's Safari web browser.  Each renderer process is run in a sandbox, which means it has almost no direct access to your disk, network, or display.  All interactions with web apps, including user input events and screen painting, must go through the browser process.  This lets the browser process monitor the renderers for suspicious activity, killing them if it suspects an exploit has occurred.

Plug-ins.  The browser process also creates one process for each type of plug-in that is in use, such as Flash, QuickTime, or Adobe Reader.  These processes just contain the plug-ins themselves, along with some glue code that lets them interact with the browser and renderers.
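
As a very rough sketch of how this division of labor can be wired up at the process level (illustrative only and Linux-specific; the "--type=" switch here is an assumption of the sketch, and Chromium's real launcher also sets up the sandbox, the IPC channel, and plenty of per-platform machinery), the browser re-executes its own binary with a switch that tells the child which role to play:

    #include <sys/wait.h>
    #include <unistd.h>

    #include <cstdio>
    #include <cstring>
    #include <string>

    // Launch a child process that re-executes this binary in a given role,
    // e.g. "--type=renderer".  Uses /proc/self/exe for brevity (Linux only).
    pid_t LaunchChild(const std::string& type) {
      pid_t pid = fork();
      if (pid == 0) {
        std::string type_switch = "--type=" + type;
        execl("/proc/self/exe", "minibrowser", type_switch.c_str(),
              static_cast<char*>(nullptr));
        _exit(1);  // only reached if exec fails
      }
      return pid;  // parent keeps the pid so it can monitor or kill the child
    }

    int main(int argc, char** argv) {
      if (argc > 1 && std::strncmp(argv[1], "--type=", 7) == 0) {
        // Child side: in a real browser this is where the renderer or plug-in
        // code would run inside its sandbox.
        std::printf("child running as %s (pid %d)\n", argv[1] + 7, getpid());
        return 0;
      }
      // Browser side: one renderer per site instance, one process per plug-in type.
      pid_t renderer = LaunchChild("renderer");
      pid_t plugin   = LaunchChild("plugin");
      waitpid(renderer, nullptr, 0);
      waitpid(plugin, nullptr, 0);
      return 0;
    }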

3. When should the browser create processes?

Once Google Chrome has created its browser process, it will generally create one renderer process for each instance of a web site you visit.  This approach aims to keep pages from different web sites isolated from each other.

You can think of this as using a different process for each tab in the browser, but allowing two tabs to share a process if they are related to each other and are showing the same site.  For example, if one tab opens another tab using JavaScript, or if you open a link to the same site in a new tab, the tabs will share a renderer process.  This lets the pages in these tabs communicate via JavaScript and share cached objects.  Conversely, if you type the URL of a different site into the location bar of a tab, we will swap in a new renderer process for the tab.

Compatibility with existing web pages is important for us.  For this reason, we define a web site as a registered domain name, like google.com or bbc.co.uk.  This means we'll consider sub-domains like mail.google.com and maps.google.com as part of the same site.  This is necessary because there are cases where tabs from different sub-domains may try to communicate with each other via JavaScript, so we want to keep them in the same renderer process.
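
Here is a toy version of that site definition (the suffix list below is deliberately tiny and hard-coded for illustration; real browsers consult the full public suffix list):

    #include <cstdio>
    #include <set>
    #include <string>

    // Return the registered domain ("site") for a host name, so that
    // mail.google.com and maps.google.com both map to google.com, while
    // news.bbc.co.uk maps to bbc.co.uk.
    std::string GetSite(const std::string& host) {
      static const std::set<std::string> public_suffixes = {"com", "org", "co.uk"};
      std::string candidate = host;
      for (;;) {
        size_t dot = candidate.find('.');
        if (dot == std::string::npos)
          return candidate;
        std::string rest = candidate.substr(dot + 1);
        if (public_suffixes.count(rest))
          return candidate;           // e.g. "google.com", "bbc.co.uk"
        candidate = rest;             // strip one label and try again
      }
    }

    int main() {
      std::printf("%s\n", GetSite("mail.google.com").c_str());   // google.com
      std::printf("%s\n", GetSite("maps.google.com").c_str());   // google.com
      std::printf("%s\n", GetSite("news.bbc.co.uk").c_str());    // bbc.co.uk
      return 0;
    }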

There are a few caveats to this basic approach, however.  Your computer would start to slow down if we created too many processes, so we place a limit on the number of renderer processes that we create (20 in most cases).  Once we hit this limit, we'll start re-using the existing renderer processes for new tabs.  Thus, it's possible that the same renderer process could be used for more than one web site.  We also don't yet put cross-site frames in their own processes, and we don't yet swap a tab's renderer process for all types of cross-site navigations.  So far, we only swap a tab's process for navigations via the browser's "chrome," like the location bar or bookmarks.  Despite these caveats, Google Chrome will generally keep instances of different web sites isolated from each other in common usage.
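
And here is a hedged sketch of the assignment policy described above (the class name and the round-robin reuse rule are inventions of this sketch, not the actual heuristics): renderer processes are keyed by site, pages from the same site share a process, and once the cap is reached new sites start reusing existing processes.

    #include <cstdio>
    #include <map>
    #include <string>
    #include <vector>

    // Illustrative renderer-process bookkeeping: one process per site until a
    // cap is hit, after which new sites share existing processes.
    class RendererRegistry {
     public:
      explicit RendererRegistry(size_t max_processes) : max_(max_processes) {}

      // Returns an opaque process id to host a page from |site|.
      int ProcessForSite(const std::string& site) {
        auto it = by_site_.find(site);
        if (it != by_site_.end())
          return it->second;                         // same site -> same process
        int pid;
        if (all_.size() < max_) {
          pid = next_pid_++;                         // pretend we launched a process
          all_.push_back(pid);
        } else {
          pid = all_[reuse_index_++ % all_.size()];  // over the cap: reuse an existing one
        }
        by_site_[site] = pid;
        return pid;
      }

     private:
      size_t max_;
      int next_pid_ = 1;
      size_t reuse_index_ = 0;
      std::map<std::string, int> by_site_;
      std::vector<int> all_;
    };

    int main() {
      RendererRegistry registry(20);
      std::printf("%d\n", registry.ProcessForSite("google.com"));  // 1
      std::printf("%d\n", registry.ProcessForSite("google.com"));  // 1 (shared)
      std::printf("%d\n", registry.ProcessForSite("bbc.co.uk"));   // 2
      return 0;
    }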

For each type of plug-in, Google Chrome will create a plug-in process when you first visit a page that uses it.  A short time after you close all pages using a particular plug-in, we will destroy its process.
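
A small illustrative sketch of that lifecycle (class and method names are made up for this sketch, and the real browser also waits a grace period before tearing the process down): keep a use count per plug-in type, launch on first use, and shut down when the count returns to zero.

    #include <cstdio>
    #include <map>
    #include <string>

    // Illustrative per-plug-in-type bookkeeping: create a process for a plug-in
    // type on first use, and tear it down once no open page uses it any more.
    class PluginHost {
     public:
      void PageStartedUsing(const std::string& type) {
        if (use_counts_[type]++ == 0)
          std::printf("launching %s process\n", type.c_str());
      }
      void PageStoppedUsing(const std::string& type) {
        if (--use_counts_[type] == 0) {
          use_counts_.erase(type);
          std::printf("shutting down %s process\n", type.c_str());
        }
      }
     private:
      std::map<std::string, int> use_counts_;
    };

    int main() {
      PluginHost host;
      host.PageStartedUsing("flash");   // first Flash page: process launched
      host.PageStartedUsing("flash");   // second page reuses the same process
      host.PageStoppedUsing("flash");
      host.PageStoppedUsing("flash");   // last page closed: process torn down
      return 0;
    }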

We'll post future blog entries as we refine our policies for creating and swapping among renderer processes.  In the meantime, we hope you see some of the benefits of a multi-process architecture when using Google Chrome.


The two main modules of Chromium are the browser process and the rendering engine.  The browser process has the same access to your computer that you do, so we try to reduce its attack surface by keeping it as simple as possible.  For example, the browser process does not attempt to understand HTML, JavaScript, or other complex parts of web pages.  The rendering engine does the heavy lifting: laying out web pages and running JavaScript.

To access your hard drive or the network, the rendering engine must go through the browser process, which checks to make sure the request looks legitimate.  In a sense, the browser process acts like a supervisor that double-checks that the rendering engine is acting appropriately.  The sandbox doesn't prevent every kind of attack (for example, it doesn't stop phishing or cross-site scripting), but it should make it harder for attackers to get to your files.
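
The following sketch illustrates that supervisor idea in miniature (an intentionally toy example; the path, the policy, and the function names are assumptions of this sketch, not Chromium's IPC or sandbox API): the sandboxed renderer never opens files itself; it asks the browser side, which applies a policy check before doing the work on the renderer's behalf.

    #include <cstdio>
    #include <fstream>
    #include <iterator>
    #include <optional>
    #include <string>

    // Toy policy: the "browser process" only allows reads from a whitelisted
    // download area; everything else is refused.
    bool IsAllowed(const std::string& path) {
      return path.rfind("/tmp/downloads/", 0) == 0;
    }

    // Stand-in for the browser side of an IPC call: validate, then act.
    // If the file doesn't exist, the read simply yields an empty string.
    std::optional<std::string> BrowserReadFile(const std::string& path) {
      if (!IsAllowed(path)) {
        std::fprintf(stderr, "browser: refused read of %s\n", path.c_str());
        return std::nullopt;
      }
      std::ifstream file(path);
      return std::string(std::istreambuf_iterator<char>(file), {});
    }

    // Stand-in for the renderer side: it has no direct file access and must ask.
    void RendererWantsFile(const std::string& path) {
      if (auto contents = BrowserReadFile(path))
        std::printf("renderer: got %zu bytes\n", contents->size());
    }

    int main() {
      RendererWantsFile("/tmp/downloads/page.html");  // allowed by the policy
      RendererWantsFile("/etc/passwd");               // blocked by the broker
      return 0;
    }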

To see how well this architecture might mitigate future attacks, we studied recent vulnerabilities in web browsers.  We found that about 70% of the most serious vulnerabilities (those that let an attacker execute arbitrary code) were in the rendering engine.  Although "number of vulnerabilities" is not an ideal metric for evaluating security, these numbers do suggest that sandboxing the rendering engine is likely to help improve security.

To learn more, check out our technical report on Chromium's security architecture.