Is the beta channel considered stable enough to use every day?
Generally yes, but there may be hiccups along the way. We encourage beta channel users to regularly back up important personal data such as bookmarks and passwords.


What’s the best way to provide feedback, report problems, etc.?
There are three ways to provide feedback and report issues you may spot in Chrome.
  1. Reporting feedback within Chrome. You can use the instructions found here to submit a feedback report from within Chrome. We use these reports in aggregate to identify spikes and trends, and we dig into individual reports when investigating issues, checking on new feature launches, and reviewing common user journeys.
  2. Troubleshooting an issue. Our official community forums are the best place to get help troubleshooting issues. We have a group of top contributors who can also escalate threads, which often helps surface issues we aren’t seeing elsewhere.
  3. Reporting confirmed bugs on this project tracker. Once an issue is confirmed as a bug, you can file a report on our public development tracker - straight to our team! You may find that your bug has already been reported (things move fast!), but this makes it easy to follow a bug’s life cycle.


Is there anything more beta than beta?
Yes! For some platforms, we also offer developer versions and even canary builds that are fresh off the test servers. These are generally advised only for folks who work on projects that need testing lead time. You can learn more about other release options here.


Today we have released version 6 of the V8 benchmark suite. The main changes are in the RegExp and Splay components of the benchmark suite. For reference, we describe each of the existing benchmarks in the suite below, along with any changes made in version 6.

RegExp: Regular expression benchmark generated by extracting regular expression operations from 50 of the most popular web pages. The regular expressions are exercised a number of times to reflect their popularity on those top 50 web pages. Changed in version 6: each regular expression is now exercised on a number of different input strings instead of just one.
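
To make that concrete, here is a minimal sketch of the exercise pattern; the expressions, weights, and input strings below are hypothetical stand-ins, not the operations actually extracted from the top 50 pages:

  // Illustrative only: hypothetical patterns, weights, and inputs.
  const cases = [
    { re: /https?:\/\/(\w+\.)+\w+/g, weight: 10 },  // weight ~ popularity
    { re: /^\s+|\s+$/g, weight: 25 },
  ];

  // The version 6 change: each expression runs over several different
  // inputs rather than a single fixed string.
  const inputs = [
    "  visit http://example.com today  ",
    "no links here",
    "see https://www.example.org and http://example.net",
  ];

  for (const { re, weight } of cases) {
    for (let i = 0; i < weight; i++) {
      for (const s of inputs) {
        s.match(re);  // exercise the engine; the result is intentionally unused
      }
    }
  }
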
Splay: Data manipulation benchmark that modifies a large splay tree to exercise the automatic memory management subsystem. The benchmark builds a large splay tree in a setup phase and then measures how fast nodes can be added and removed. Changed in version 6: no longer converts the same numeric key to a string repeatedly, and updates the splay tree in a way that increases the pressure on the memory management subsystem.
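
A minimal sketch of that driver shape follows, with a plain Map standing in for the splay tree; the key range, tree size, and payload layout are assumptions made for illustration:

  // Illustrative driver shape only: a Map stands in for the splay tree.
  const tree = new Map();

  // Each insertion allocates a fresh payload object, so adding and
  // removing nodes keeps pressure on the garbage collector.
  function insertNewNode() {
    let key;
    do {
      key = Math.floor(Math.random() * 10000000);
    } while (tree.has(key));
    // Convert the numeric key to a string once, not on every access.
    tree.set(key, { key: key, tag: String(key), data: new Array(8).fill(key) });
    return key;
  }

  // Setup phase: build a large tree before measurement starts.
  for (let i = 0; i < 1000; i++) insertNewNode();

  // Measured phase: each round adds one node and evicts another, churning memory.
  for (let i = 0; i < 10000; i++) {
    const inserted = insertNewNode();
    const victim = tree.keys().next().value;  // cheap stand-in for a tree lookup
    if (victim !== inserted) tree.delete(victim);
  }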

Richards: Operating system kernel simulation benchmark originally written in BCPL by Martin Richards. The Richards benchmark effectively measures how fast the JavaScript engine is at accessing object properties, calling functions, and dealing with polymorphism. It is a standard benchmark that has been successfully used to measure the performance of many modern programming language implementations.
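
As a toy illustration of the kind of call site such a workload stresses (this is not code from Richards itself):

  // A single polymorphic call site: the task names here are hypothetical.
  class DeviceTask  { run() { return 1; } }
  class WorkerTask  { run() { return 2; } }
  class HandlerTask { run() { return 3; } }

  // The same call site sees several receiver shapes, exercising property
  // access, function calls, and the engine’s dispatch machinery.
  const tasks = [new DeviceTask(), new WorkerTask(), new HandlerTask()];
  let sum = 0;
  for (let i = 0; i < 100000; i++) {
    sum += tasks[i % tasks.length].run();
  }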

DeltaBlue: One-way constraint solver, originally written in Smalltalk by John Maloney and Mario Wolczko. The DeltaBlue benchmark is written in an object-oriented style with a multi-level class hierarchy. As such it measures how fast the JavaScript engine is at running well-structured applications with many objects and small functions. Changed in version 6: fixed a couple of typos that do not have any impact on the behavior of the benchmark.
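
For flavor, here is a hypothetical one-way constraint in the same spirit, far simpler than DeltaBlue’s strength-based planner:

  // One-way: the constraint computes its output from its inputs,
  // never the reverse. Names and structure are illustrative only.
  class Variable {
    constructor(name, value) { this.name = name; this.value = value; }
  }

  class ScaleConstraint {
    constructor(src, factor, dst) {
      this.src = src; this.factor = factor; this.dst = dst;
    }
    // Changes flow from src to dst only.
    execute() { this.dst.value = this.src.value * this.factor.value; }
  }

  const width = new Variable("width", 10);
  const scale = new Variable("scale", 5);
  const scaledWidth = new Variable("scaledWidth", 0);

  const constraint = new ScaleConstraint(width, scale, scaledWidth);
  constraint.execute();  // scaledWidth.value === 50
  width.value = 20;
  constraint.execute();  // the change flows forward: scaledWidth.value === 100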

Crypto: Encryption and decryption benchmark based on code by Tom Wu. The benchmark encrypts an input string, decrypts the result and verifies that encryption followed by decryption yields the original input. The encryption/decryption algorithm is RSA and the benchmark measures the performance of arithmetic operations on integers and array access.
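
The encrypt-decrypt-verify shape is easy to sketch with a textbook-sized key; the real benchmark performs arbitrary-precision arithmetic on much larger integers:

  // Toy RSA round trip with a tiny hardcoded key (p = 61, q = 53).
  const n = 3233, e = 17, d = 2753;

  // Square-and-multiply modular exponentiation; at this key size all
  // intermediates stay well below 2^53, so plain numbers are safe.
  function modPow(base, exp, mod) {
    let result = 1;
    base = base % mod;
    while (exp > 0) {
      if (exp % 2 === 1) result = (result * base) % mod;
      base = (base * base) % mod;
      exp = Math.floor(exp / 2);
    }
    return result;
  }

  const input = "Hello, V8!";
  const cipher = Array.from(input, (ch) => modPow(ch.charCodeAt(0), e, n));       // encrypt
  const output = cipher.map((c) => String.fromCharCode(modPow(c, d, n))).join(""); // decrypt

  // The verification step: decryption must restore the original input.
  if (output !== input) throw new Error("round trip failed");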

RayTrace: Ray tracer benchmark based on code by Adam Burmister. The benchmark measures floating-point computations where the object structure is constructed using the Prototype JavaScript library. Changed in version 6: removed dead code that has no impact on the behavior of the benchmark.

EarleyBoyer: Classic Scheme benchmarks, translated to JavaScript by Florian Loitsch's Scheme2Js compiler. The benchmarks exercise important areas of the JavaScript engine such as object allocation, data structure manipulation, and garbage collection. The translated nature of the benchmarks makes them appear foreign, but their runtime characteristics are highly representative of many real world web applications.

Curious to know how your browser performs? Give it a spin on the new version of the V8 benchmark suite.


So why the change? We have three fundamental goals in reducing the cycle time:
  • Shorten the release cycle and still get great features in front of users when they are ready
  • Make the schedule more predictable and easier to scope
  • Reduce the pressure on engineering to “make” a release
The first goal is fairly straightforward, given our pace of development. We have new features coming out all the time and do not want users to have to wait months before they can use them. While pace is important to us, we are all committed to maintaining high-quality releases: if a feature is not ready, it will not ship in a stable release.

The second goal is about implementing good project management practice. Predictable, fixed-duration development periods allow us to determine how much work we can do in a fixed amount of time, and make schedule communication simple. We basically wanted to operate more like trains leaving Grand Central Station (regularly scheduled and always on time), and less like taxis leaving the Bronx (ad hoc and unpredictable).

The third goal is about taking the pressure off software engineers to finish features in a single release cycle. Under the old model, when we faced a deadline with an incomplete feature, we had three options, all undesirable: (1) engineers had to rush or work overtime to complete the feature by the deadline, (2) we delayed the release to complete that feature (which affected other, unrelated features), or (3) the feature was disabled and had to wait approximately three months for the next release. With the new schedule, if a given feature is not complete, it will simply ride the next release train when it’s ready. Since those trains come quickly and regularly (every six weeks), there is less stress.

So, get ready to see us pick up the pace and for new features to reach the stable channel more quickly. Since we are going to continue to increment our major versions with every new release (i.e. 6.0, 7.0, 8.0, 9.0), those numbers will start to move a little faster than before. Please don’t read too much into the pace of version number changes - they just mean we are moving through release cycles, and we are geared up to get fresher releases into your hands!