Password managers improve security in two ways. First, they let users use more complex, harder-to-guess passwords because the password manager does the work of remembering them. Second, they help protect users from phishing pages (spoof pages that pretend to be from another site) by carefully scrutinizing the web page's URL before revealing the password. The key to the security of a password manager is the algorithm for deciding when to reveal passwords to the current web page. An algorithm that isn't strict enough can reveal users' passwords to compromised or malicious pages. On the other hand, an algorithm that's too strict won't function on some legitimate web sites. This may cause users to use more memorable (and less secure) passwords. Worse, users typically assume the browser is "broken," and become more willing to supply passwords to any page (including harmful ones), since they no longer trust the browser to make correct distinctions. The same side effects are possible if the password manager produces spurious warnings on legitimate sites; this simply trains users to ignore the warnings.
The password manager's algorithm is based on the browser's same-origin policy, which we've touched on before. The password manager supplies a password to a page only if the page is from the same origin (same scheme, host, and port) as the original page that saved the password. For example, this algorithm protects passwords from active network attackers by not revealing passwords saved on HTTPS pages to HTTP pages.
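The origin comparison itself is easy to sketch. The following Python snippet (the function names are ours, not Chrome's actual code) shows the scheme/host/port check, normalizing the default ports for HTTP and HTTPS:

```python
from urllib.parse import urlsplit

def same_origin(url_a: str, url_b: str) -> bool:
    """Compare scheme, host, and port -- the browser's definition of an origin."""
    a, b = urlsplit(url_a), urlsplit(url_b)
    # urlsplit reports an unspecified port as None, so fill in the default.
    defaults = {"http": 80, "https": 443}
    port_a = a.port if a.port is not None else defaults.get(a.scheme)
    port_b = b.port if b.port is not None else defaults.get(b.scheme)
    return (a.scheme, a.hostname, port_a) == (b.scheme, b.hostname, port_b)

def may_fill_password(saved_url: str, current_url: str) -> bool:
    # Reveal a saved password only to pages from the origin that saved it.
    return same_origin(saved_url, current_url)

# A password saved on an HTTPS page is never offered to the plain-HTTP site,
# which is what stops the active network attacker described above.
print(may_fill_password("https://example.com/login", "http://example.com/login"))  # False
```

Note that the path plays no role in the comparison, which is exactly the point discussed next.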
Because the same-origin policy does not distinguish between different paths, it's tempting to think that we could further improve security by requiring the paths to match as well; for example, passwords saved at https://example.com/login would not be sent to https://example.com/blog. However, this design works poorly with sites where users can log in from several places (like Facebook), as well as sites which store dynamically-generated state in the path. Furthermore, creating this "finer-grained" origin wouldn't actually improve security against compromised sites because other parts of the browser (like the JavaScript engine) still obey the same-origin policy. Imagine that example.com has a cross-site scripting vulnerability that lets an attacker inject malicious content into https://example.com/blog. An attacker would not need users to log in to this page; instead, the attacker could simply inject an <iframe> pointing to https://example.com/login and use JavaScript to read the password from that frame.
Besides checking the page hosting the password field, we can also check where password data is going to be sent when users submit their information. Consider a scenario that occurred a few years ago on a popular social networking site that let users (or in this case, attackers) customize their profile pages. At the time, an attacker could not include JavaScript on his profile page, but could still use malicious HTML — a password field set to send data back to the attacker's web server. When users viewed the attacker's profile, their password managers would automatically fill in their passwords because the profile page was part of the same origin as the site's login page. Lacking JavaScript, the attacker could not read these passwords immediately, but once the users clicked on the page, their data was sent to the attacker's server. Google Chrome defends against this subtle attack by checking the page to which the password data is submitted, once again using the same-origin policy. If this check fails, the password manager will not automatically fill in passwords when the page is loaded. The downside is that this can trip up legitimate web sites that dynamically generate their login URLs. To help users in both cases, the password manager waits for users to type their user names manually before filling in any passwords. At this point, if a page is really malicious, these users have most likely already fallen for the scam and would have proceeded to type in their passwords manually; continuing to refuse to fill in passwords would merely give the impression that the browser is "broken."
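The two checks can be sketched together: the hosting page and the form's submission target must both match the origin that saved the password before the manager fills it in automatically. This is illustrative Python, not Chrome's implementation; note that a relative form action has to be resolved against the page's URL first:

```python
from urllib.parse import urljoin, urlsplit

def origin(url: str) -> tuple:
    """Reduce a URL to its (scheme, host, port) origin."""
    parts = urlsplit(url)
    defaults = {"http": 80, "https": 443}
    port = parts.port if parts.port is not None else defaults.get(parts.scheme)
    return (parts.scheme, parts.hostname, port)

def should_autofill(saved_url: str, page_url: str, form_action: str) -> bool:
    """Fill on page load only if both the hosting page and the form's
    submission target share the origin that saved the password."""
    # A relative action like "/session" resolves against the page's URL.
    action_url = urljoin(page_url, form_action)
    return origin(page_url) == origin(saved_url) == origin(action_url)

# The profile-page attack: the page is same-origin, but the form posts off-site.
print(should_autofill(
    "https://social.example/login",
    "https://social.example/profile/attacker",
    "https://evil.example/steal",
))  # False
```

When this second check fails, the manager falls back to waiting for the user to type a user name, as described above.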
A number of other proposals to improve password manager security seem reasonable but don't actually make users more secure. For example, the password manager could refuse to supply passwords to invisible login fields, on the theory that legitimate sites have no need to do this and invisible fields are used only by attackers. Unfortunately, attackers trying to hide password fields from users can make the fields visible but only one pixel tall, or 99% transparent, hidden behind another part of the page, or simply scrolled to a position where users don't normally look. It is impossible for browsers to detect all the various ways password fields can be made difficult to notice, so blocking just one doesn't protect users. Plus, a legitimate site might hide the password field initially (similar to Washington Mutual), and if it does, the password manager wouldn't be able to fill in passwords for this site.
We've put a lot of thought into the password manager's design and carefully considered how to defend against a number of threats including phishing, cross-site scripting, and HTTPS certificate errors. By using the password manager, you can choose stronger, more complex passwords that are more difficult to remember. When the password manager refuses to automatically fill in your password, you should pause and consider whether you're viewing a spoof web site. We're also keen to improve the compatibility of the password manager. If you're having trouble using the password manager with your favorite site, consider filing a bug.
Posted by Adam Barth and Tim Steele, Software Engineers
Paweł is a computer science student at the University of Warsaw, and in his free time he's managed to write a ton of high-quality code towards making Chromium work on non-Windows platforms. Those of us who have worked on committing his near-daily patches are relieved to see that he'll now be able to commit them himself!
Google Chrome wouldn't be where it is today without your help. Of course, there's still a ton more to do, so keep the feedback, patches, bug reports, and moral support coming!
1. You receive an email message from an attacker containing a web page as an attachment, which you download.
2. You open the now-local web page in your browser.
3. The local web page creates an <iframe> whose source is https://mail.google.com/mail/.
4. Because you are logged in to Gmail, the frame loads the messages in your inbox.
5. The local web page reads the contents of the frame by using JavaScript to access frames[0].document.documentElement.innerHTML. (An Internet web page would not be able to perform this step because it would come from a non-Gmail origin; the same-origin policy would cause the read to fail.)
6. The local web page places the contents of your inbox into a <textarea> and submits the data via a form POST to the attacker's web server. Now the attacker has your inbox, which may be useful for spamming or identity theft.
There is nothing Gmail can do to defend itself from this attack. Accordingly, browsers prevent it by making various steps in the above scenario difficult or impossible. To design the best security policy for Google Chrome, we examined the security policies of a number of popular web browsers.
Safari 3.2. Local web pages in Safari 3.2 are powerful because they can read the contents of any web site (step 5 above succeeds). Safari protects its users by making it difficult for a web page from the Internet to navigate the browser to a local file (step 2 becomes harder). For example, if you click a hyperlink to a local file, Safari won't render the local web page. You have to manually type the file's URL into the location bar or otherwise open the file.
Internet Explorer 7. Like Safari 3.2, Internet Explorer 7 lets local web pages read arbitrary web sites, and stops web sites from providing hyperlinks to local files. Internet Explorer further mitigates local-file-based attacks by stopping local web pages from running JavaScript by default (causing step 5 to fail). Internet Explorer lets users override this restriction via a yellow "infobar" that re-enables JavaScript.
Opera 9.6. Instead of letting local web pages read every web site, Opera 9.6 limits local web pages to reading pages in the local file system (step 5 fails because the <iframe>'s source is non-local). This policy mitigates the most serious attacks, but letting local web pages read local data can still be dangerous if your file system itself contains sensitive information. For example, if you prepare your tax return using your computer, your file system might contain last year's tax return. An attacker could use an attack like the one above to obtain this data.
Firefox 3. Like Opera, Firefox 3 blocks local web pages from reading Internet pages. Firefox further restricts a local web page to reading only files in the same directory, or a subdirectory. If you view a local web page stored in, say, a "Downloaded Files" directory, it won't be able to read files in "My Documents." Unfortunately, if the local web page is itself located in "My Documents", the page will be able to read your (possibly sensitive) documents.
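Firefox's directory rule amounts to a containment check: the target file must live in the page's own directory or below it. A simplified model (not Firefox's actual code, which would also need to canonicalize paths and resolve symlinks before comparing):

```python
from pathlib import PurePosixPath

def may_read(page_path: str, target_path: str) -> bool:
    """Firefox-3-style rule: a local page may read only files in its own
    directory or a subdirectory of it."""
    page_dir = PurePosixPath(page_path).parent
    target = PurePosixPath(target_path)
    return page_dir == target.parent or page_dir in target.parents

# A page in "Downloads" can read its own subdirectories...
print(may_read("/home/u/Downloads/page.html", "/home/u/Downloads/data/x.txt"))  # True
# ...but not a sibling directory like "Documents".
print(may_read("/home/u/Downloads/page.html", "/home/u/Documents/taxes.pdf"))   # False
```

The weakness noted above falls out directly: if the page itself sits in "My Documents", then everything under "My Documents" passes the check.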
When we designed Google Chrome's security policy for local web pages, we chose a design similar to that used by Opera. Google Chrome prevents users from clicking Internet-based hyperlinks to local web pages and blocks local web pages from reading the contents of arbitrary web sites. We chose not to disable JavaScript with an "infobar" override (like Internet Explorer) because most users do not understand the security implications of re-enabling JavaScript and simply re-enable it to make pages work correctly; even users who do understand the risk frequently want to override the warning, for example to develop web pages locally.
There is more to the security of local web pages than simply picking an access policy. A sizable number of users have more than one browser installed on their machine. In protecting our users, we also consider "blended" threats that involve more than one browser. For example, you might download a web page in Google Chrome and later open that page in Internet Explorer. To help secure this case, we attach the "mark of the web" to downloaded web pages. Internet Explorer then treats these pages as if they were on an "unknown" Internet site, which means they can run JavaScript but cannot access the local file system or pages on other Internet sites.
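The mark of the web is just an HTML comment recording the page's source URL, with a four-digit length prefix before the URL. A sketch of generating one (the helper name is ours):

```python
def mark_of_the_web(source_url: str) -> str:
    """Build an Internet Explorer 'mark of the web' comment; the four-digit
    number is the character count of the URL that follows it."""
    return f"<!-- saved from url=({len(source_url):04d}){source_url} -->"

print(mark_of_the_web("http://www.example.com"))
# <!-- saved from url=(0022)http://www.example.com -->
```

Browsers that honor the mark treat the saved page as if it were still on that Internet site rather than as a trusted local file.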
Other blended threats are also possible. Consider a user who uses Google Chrome but has Safari set as his or her default browser. When the user downloads a web page with Google Chrome, the page appears in the download tray at the bottom of the browser window. If the user clicks on the downloaded file, Google Chrome will launch the user's default browser (in this case Safari), and Safari will let the page read any web page, bypassing Safari's protections against step 2 in our hypothetical attack. Although this scenario is also possible in other browsers, downloading a file in those browsers requires more steps, making the vector less appealing to attackers. To mitigate this threat, we recently changed Google Chrome to require the user's confirmation when downloading web pages, just as we do for executable files.
In the future, we hope to further restrict the privileges of local web pages. We are considering several proposals, including implementing directory-based restrictions (similar to Firefox 3), or preventing local web pages from sending sensitive information back to Internet sites (blocking step 6 above), as proposed by Maciej Stachowiak of the WebKit project. Ultimately, we'd like to see all the browser vendors converge on a uniform, secure policy for local web pages.