Measuring web page load time is a notoriously tricky but important endeavor. One of the most common challenges is simply getting a true start time. Historically, the earliest a web page could reliably begin measuring was when the browser began to parse the HTML document (by marking a start time in a <script> block at the top of the document).
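For illustration, here is a minimal sketch of that legacy in-page timer; the variable names and the beacon URL are hypothetical, not from any particular analytics library:

```javascript
// Legacy approach: record a timestamp in a <script> block at the very top
// of the document, then compute the delta once the page has loaded.
var legacyStart = new Date().getTime(); // earliest moment the page can see

window.onload = function() {
  var loadTime = new Date().getTime() - legacyStart;
  // Report the measurement, e.g. with an image beacon to an analytics server.
  var beacon = new Image();
  beacon.src = '/beacon?load_time=' + loadTime; // hypothetical endpoint
};
```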
Unfortunately, that is too late to capture a significant portion of the time web surfers spend waiting for the page: much of it is spent fetching the page from the web server. To address this shortcoming, some clever web developers work around the problem by storing the navigation start time in a cookie during the previous page’s onbeforeunload handler. However, this doesn’t work for the critical first page load, which likely has a cold cache.
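A hedged sketch of that cookie workaround, assuming a hypothetical nav_start cookie name:

```javascript
// On the previous page: stash the navigation start time in a cookie as the
// user leaves, so the next page can recover a pre-request start time.
window.onbeforeunload = function() {
  document.cookie = 'nav_start=' + new Date().getTime() + '; path=/';
};

// On the next page, near the top of the document: read the cookie back.
// Note the failure mode described above: on the very first page load there
// is no cookie, so there is nothing to measure against.
var match = document.cookie.match(/(?:^|; )nav_start=(\d+)/);
if (match) {
  var navStart = parseInt(match[1], 10);
  window.onload = function() {
    var loadTime = new Date().getTime() - navStart;
    // loadTime now includes the fetch time the in-page timer misses.
  };
}
```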
Web Timing now gives developers the ability to measure the true page load time by including the time to request, generate, and receive the HTML document. The timeline below illustrates the metrics it provides. The vertical line labeled "Legacy navigation started" is the earliest time a web page can reliably measure without Web Timing. In this case, instead of a misleading 80ms load time, it is now possible to see that the user actually experienced a 274ms load time. Including this missing phase will make your measurements appear to increase. It’s not because pages are getting slower; we’re just getting a better view of where the time is actually being spent.
Across browsers, Web Timing metrics are available under window.msPerformance in the third platform preview of Internet Explorer 9, and work is underway to add window.mozPerformance to Firefox. The specification is still being finalized, so expect slight changes before the browser prefixes are dropped. If you’re running a supported browser, please try the Web Timing demonstration and send us feedback.
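Putting it together, a feature-detecting sketch might look like the following. It assumes the prefixed objects named above, plus a Chrome-style window.webkitPerformance and the timing attribute from the draft specification, any of which may change before the spec is final:

```javascript
// Feature-detect Web Timing across the vendor-prefixed implementations.
var perf = window.performance || window.webkitPerformance ||
           window.msPerformance || window.mozPerformance;

window.onload = function() {
  if (!perf || !perf.timing) {
    return; // Web Timing not supported; fall back to a legacy timer.
  }
  // navigationStart marks when the user actually began the navigation,
  // so this total includes the request/response phases that an in-page
  // <script> timer cannot see.
  var trueLoadTime = new Date().getTime() - perf.timing.navigationStart;
};
```

The delta is computed inside the load handler rather than read from an attribute like loadEventEnd, since that value is only populated after the load event has finished.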
So why the change? We have three fundamental goals in reducing the cycle time:
Shorten the release cycle and still get great features in front of users when they are ready
Make the schedule more predictable and easier to scope
Reduce the pressure on engineering to “make” a release
The first goal is fairly straightforward, given our pace of development. We have new features coming out all the time and do not want users to have to wait months before they can use them. While pace is important to us, we are all committed to maintaining high-quality releases: if a feature is not ready, it will not ship in a stable release.
The second goal is about implementing good project management practice. Predictable, fixed-duration development periods allow us to determine how much work we can do in a fixed amount of time, and they make schedule communication simple. We basically wanted to operate more like trains leaving Grand Central Station (regularly scheduled and always on time), and less like taxis leaving the Bronx (ad hoc and unpredictable).
The third goal is about taking the pressure off software engineers to finish features in a single release cycle. Under the old model, when we faced a deadline with an incomplete feature, we had three options, all undesirable: (1) engineers had to rush or work overtime to complete the feature by the deadline, (2) we delayed the release to complete that feature (which affected other unrelated features), or (3) the feature was disabled and had to wait approximately three months for the next release. With the new schedule, if a given feature is not complete, it will simply ride the next release train when it’s ready. Since those trains come quickly and regularly (every six weeks), there is less stress.
So, get ready to see us pick up the pace and for new features to reach the stable channel more quickly. Since we are going to continue to increment our major versions with every new release (i.e., 6.0, 7.0, 8.0, 9.0), those numbers will start to move a little faster than before. Please don’t read too much into the pace of version number changes: they just mean we are moving through release cycles, and we are geared up to get fresher releases into your hands!
We maintain a list of issued rewards on the Chromium security page. As the list indicates, a range of researchers have sent us some great bugs and the rewards are flowing! This list should also help answer questions about which sorts of bugs might qualify for rewards.
Today, we are modifying the program in two ways:
The maximum reward for a single bug has been increased to $3,133.7. We will most likely use this amount for SecSeverity-Critical bugs in Chromium. The increased reward reflects the fact that the sandbox makes it harder to find bugs of this severity.
Whilst the base reward for less serious bugs remains at $500, the panel will consider rewarding more for high-quality bug reports. Factors indicating a high-quality bug report might include a careful test case reduction, an accurate analysis of root cause, or productive discussion towards resolution.
Thanks to everyone who contributes to Chromium security, and here’s looking forward to our first elite entrant!
UPDATE: We've had a few questions about whether we pay rewards in cases where the bug comes to us via a vulnerability broker. Bugs reported in this way are not likely to generate Chromium rewards. We encourage researchers to file bugs directly with us, as doing so helps us get started sooner on fixes and removes questions about who else may have access to the bug details. We'd also remind researchers that we don't necessarily require a working exploit in order to issue a reward. For example, evidence of memory corruption would typically be sufficient.