Google Munich

The office space is far from rigid, and teams often rearrange desks to suit their preferences. The facilities team recently finished renovating a new floor in the New York City office, and after a day of engineering debates on optimal arrangements and whiteboard diagrams, the floor was completely transformed.

Besides the main office areas, there are lounge areas to which Googlers go for a change of scenery or a little peace and quiet. If you are trying to avoid becoming a casualty of The Great Foam Dart War, lounges are a great place to hide.

Google Dublin

Working with remote teams

Google’s worldwide headquarters is in Mountain View, CA, but it’s a very global company, and our project teams are often distributed across multiple sites. To help keep teams well connected, most of our conference rooms have video conferencing equipment. We make frequent use of this equipment for team meetings, presentations, and quick chats.

Google Boston

What’s at your desk?

All engineers get high-end machines and have easy access to data center machines for running large tasks. A new member on my team recently mentioned that his Google machine has 16 times the memory of the machine at his previous company.

Most Google code runs on Linux, so the majority of development is done on Linux workstations. However, those who work on client code for Windows, OS X, or mobile develop on the relevant OS. For displays, each engineer has a choice of either two 24-inch monitors or one 30-inch monitor. We also get our choice of laptop, picking from various Chromebook, MacBook, or Linux models. These come in handy when going to meetings, heading to the lounges, or working remotely.

Google Zurich

Thoughts?

We are interested to hear your thoughts on this topic. Do you prefer an open-office layout, cubicles, or private offices? Should test teams be embedded with development teams, or should they operate separately? Do the benefits of offering engineers high-end equipment outweigh the costs?




WebRTC calls transmit voice, video and data peer-to-peer, over the Internet.

But since this is an automated test, of course we could not have someone speak into a microphone every time the test runs; we had to feed in a known input file so we had something to compare the recorded output audio against.

Could we duct-tape a small stereo to the mic and play our audio file on the stereo? That's not very maintainable or reliable, not to mention annoying for anyone in the vicinity. What about some kind of fake device driver that makes a microphone-like device appear at the device level? The problem with that is that it's hard to control a driver from the userspace test program. Also, the test would be more complex and flaky, and the driver interaction would not be portable.[1]

Instead, we chose to sidestep this problem. We used a solution where we load an audio file with WebAudio and play it straight into the peer connection through the WebAudio-PeerConnection integration. That way we start playing the file from the same renderer process as the call itself, which makes it a lot easier to time the start and end of playback. We still needed to be careful not to play the file too early or too late, so we wouldn't clip the audio at the start or end - that would destroy our PESQ scores! - but it turned out to be a workable approach.[2]
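
As a rough sketch of what this looks like on the browser side (this is not the actual Chrome test code; the file name, helper name, and single-peer setup are placeholders), the WebAudio-PeerConnection wiring is roughly:

```typescript
// Hypothetical sketch: decode a known input file with WebAudio and feed it
// into an RTCPeerConnection via a MediaStream destination node, so no
// microphone or fake device driver is involved.
async function playFileIntoCall(pc: RTCPeerConnection): Promise<void> {
  const ctx = new AudioContext();
  const encoded = await (await fetch('input.wav')).arrayBuffer();  // placeholder file name
  const buffer = await ctx.decodeAudioData(encoded);

  const source = ctx.createBufferSource();
  source.buffer = buffer;

  // Route the decoded audio into a MediaStream instead of the speakers.
  const sink = ctx.createMediaStreamDestination();
  source.connect(sink);

  // Hand the WebAudio-generated track to the peer connection.
  for (const track of sink.stream.getAudioTracks()) {
    pc.addTrack(track, sink.stream);
  }

  // Starting playback here, in the same renderer as the call, is what makes
  // the timing easy to control.
  source.start();
}
```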

Recording the Output
Alright, so now we could get a WebRTC call set up with a known audio file and decent control of when the file starts playing. Next we had to record the output. There are a number of possible solutions. The most end-to-end way is to simply record what the system sends to the default audio output (like speakers or headphones). Alternatively, we could write a hook in our application to dump the audio as late as possible, like when we're just about to send it to the sound card.

We went with the former. Our colleagues on the Chrome video stack team in Kirkland had already found that it's possible to configure a Windows or Linux machine to send the system's audio output (i.e. what plays on the speakers) to a virtual recording device. If we make that virtual recording device the default one, simply invoking SoundRecorder.exe on Windows or arecord on Linux will record what the system is playing out.
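
On Linux, the recording step then boils down to a single arecord invocation. The sketch below drives it from a test script; the exact format flags are our assumptions, and the important thing is that they match the input file's format (see "Analyzing Audio" below).

```typescript
import { execFileSync } from 'child_process';

// Record the system's playout for a fixed duration, assuming the machine has
// already been configured so that the default capture device is a virtual
// device mirroring the audio output. The format flags are illustrative.
function recordPlayout(outWav: string, seconds: number): void {
  execFileSync('arecord', [
    '-d', String(seconds),  // stop after this many seconds
    '-f', 'S16_LE',         // 16-bit little-endian samples
    '-r', '48000',          // 48 kHz sample rate
    '-c', '2',              // stereo
    outWav,
  ]);
}
```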

They found this works well if one also uses the sox utility to eliminate silence around the actual audio content (recall we had some safety margins at both ends to ensure we record the whole input file as it plays through the WebRTC call). We adopted the same approach, since it records what the user would hear, yet uses only standard tools. This means we don't have to install additional software on the myriad machines that will run this test.[3]
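
The silence trimming itself is a standard sox recipe: the silence effect trims from the front, so applying it, reversing the file, applying it again, and reversing back trims both ends. The threshold and minimum-duration values below are tuning knobs, not the exact values from our setup.

```typescript
import { execFileSync } from 'child_process';

// Strip leading and trailing silence from a WAV recording using sox.
function trimSilence(inWav: string, outWav: string): void {
  execFileSync('sox', [
    inWav, outWav,
    'silence', '1', '0.1', '1%',  // trim silence from the front
    'reverse',
    'silence', '1', '0.1', '1%',  // ...and, after reversing, from the back
    'reverse',
  ]);
}
```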

Analyzing Audio
The only remaining step was to compare the silence-eliminated recording with the input file. When we first did this, we got a really bad score (like 2.0 out of 5.0, which means PESQ thinks the speech is barely intelligible). This didn't seem to make sense, since both the input and the recording sounded very similar. It turned out we hadn't thought about the following:

  • We were comparing a full-band (24 kHz) input file to a wide-band (8 kHz) result (although both files were sampled at 48 kHz). This essentially amounted to low-pass filtering of the result file.
  • Both files were in stereo, but PESQ is only mono-aware.
  • The files were 32-bit, but the PESQ implementation is designed for 16 bits.

As you can see, it’s important to pay attention to what format arecord and SoundRecorder.exe record in, and to make sure the input file is recorded in the same way. After correcting the input file and “rebasing”, we got the score up to about 4.0.[4]
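
For example, assuming the ITU-T P.862 reference pesq binary (which only accepts 8 kHz or 16 kHz input; adjust for whichever PESQ build you actually use), one way to normalize both files before scoring is to downmix to mono, force 16-bit samples, and resample with sox:

```typescript
import { execFileSync } from 'child_process';

// Convert both files to 16-bit mono 16 kHz and run them through a PESQ
// binary. The binary name and "+16000" argument follow the ITU-T reference
// implementation and are assumptions here.
function pesqScore(refWav: string, degradedWav: string): string {
  const normalize = (inWav: string, outWav: string) =>
    execFileSync('sox', [
      inWav,
      '-b', '16',        // 16-bit output samples
      outWav,
      'remix', '-',      // mix all channels down to mono
      'rate', '16000',   // resample to 16 kHz for wide-band PESQ
    ]);

  normalize(refWav, 'ref_norm.wav');
  normalize(degradedWav, 'deg_norm.wav');

  // Prints something like "P.862 Prediction (Raw MOS, MOS-LQO): ..." on stdout.
  return execFileSync('pesq', ['+16000', 'ref_norm.wav', 'deg_norm.wav']).toString();
}
```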

Thus, we ended up with an automated test that runs continuously on the torrent of Chrome changelists and protects WebRTC's ability to transmit sound. You can see the finished code here. With automated tests and cleverly chosen metrics you can protect against most regressions a user would notice. If your product includes video and audio handling, such a test is a great addition to your testing mix.


How the components of the test fit together.

Future work

  • It might be possible to write a Chrome extension which dumps the audio from Chrome to a file. That way we get a simpler-to-maintain and portable solution. It would be less end-to-end but more than worth it due to the simplified maintenance and setup. Also, the recording tools we use are not perfect and add some distortion, which makes the score less accurate.
  • There are other algorithms than PESQ to consider - for instance, POLQA is the successor to PESQ and is better at analyzing high-bandwidth audio signals.
  • We are working on a solution which will run this test under simulated network conditions. Simulated networks combined with this test is a really powerful way to test our behavior under various packet loss and delay scenarios and ensure we deliver a good experience to all our users, not just those with great broadband connections. Stay tuned for future articles on that topic!
  • We want to investigate the feasibility of running this setup on mobile devices.




1 It would be tolerable if the driver was just looping the input file, eliminating the need for the test to control the driver (i.e. the test doesn't have to tell the driver to start playing the file). This is actually what we do in the video quality test. It's a much better fit to take this approach on the video side since each recorded video frame is independent of the others. We can easily embed barcodes into each frame and evaluate them independently.

This seems much harder for audio. We could possibly do audio watermarking, or we could embed a kind of start marker (for instance, using DTMF tones) in the first two seconds of the input file and play the real content after that, and then do some fancy audio processing on the receiving end to figure out the start and end of the input audio. We chose not to pursue this approach due to its complexity.

2 Unfortunately, this also means we do not test the capturer path (which handles microphones, etc., in WebRTC). This is an example of the frequent tradeoffs one has to make when designing an end-to-end test. Often we have to trade end-to-endness (how close the test is to the user experience) against the robustness and simplicity of the test. It's not worth covering 5% more of the code if the test becomes unreliable or radically more expensive to maintain. Another example: a WebRTC call will generally involve two peers on different devices separated by the real-world internet. Writing such a test and making it reliable would be extremely difficult, so we make the test single-machine and hope we catch most of the bugs anyway.

3 It's important to keep the continuous build setup simple and the build machines easy to configure - otherwise you will inevitably pay a heavy price in maintenance when you try to scale your testing up.

4 When sending audio over the internet, we have to compress it, since lossless audio consumes way too much bandwidth. WebRTC audio generally sounds great, but there are still compression artifacts if you listen closely (and, in fact, the recording tools are not perfect and add some distortion as well). Given that this test is more about detecting regressions than measuring some absolute notion of quality, we'd like to downplay those artifacts. As our Kirkland colleagues found, one way to do that is to "rebase" the input file. That means we start with a pristine recording, feed it through the WebRTC call, and record what comes out on the other end. After manually verifying the quality, we use that as our input file for the actual test. In our case, it pushed our PESQ score up from 3 to about 4 (out of 5), which gives us a bit more sensitivity to regressions.





Monkey Tests
Monkey tests look for crashes and ANRs by stressing your Android application. They inject pseudo-random events such as clicks and gestures into the app under test. Monkey test results are reproducible to a certain extent; timing and latency are not completely under your control and can cause a test failure. Re-running the same monkey test against the same configuration will often reproduce these failures, though. If you run them daily against different SDKs, they are very effective at catching bugs earlier in the development cycle of a new release.
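
Under the hood a monkey run is just an adb invocation; a sketch like the following (the package name is a hypothetical example) shows the knobs that matter for reproducibility, namely a fixed seed and a throttle between events.

```typescript
import { execFileSync } from 'child_process';

// Run the Android UI/Application Exerciser Monkey against a package,
// injecting `eventCount` pseudo-random events. A fixed seed makes reruns
// reproduce the same event stream; the throttle reduces timing flakiness.
function runMonkey(packageName: string, eventCount: number, seed: number): string {
  return execFileSync('adb', [
    'shell', 'monkey',
    '-p', packageName,
    '-s', String(seed),
    '--throttle', '100',   // milliseconds between injected events
    '-v',                  // verbose output so failures are easier to triage
    String(eventCount),
  ]).toString();
}

// Example: runMonkey('com.example.myapp', 10000, 42);
```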

iOS Testing

Unit Tests
Unit test frameworks like OCUnit, which comes bundled with Xcode, and GTMSenTestCase are both good choices.

Hermetic UI Tests
KIF has proven to be a powerful solution for writing Objective-C UI tests. It runs in-process, which allows tests to be more tightly coupled with the app under test and makes them inherently more stable. KIF also lets iOS developers write tests in the same language as their application.

Following the same paradigm as Android UI tests, you want Objective-C tests to be hermetic. A good approach is to mock the server with pre-canned responses. Since KIF tests run in-process, responses can be built programmatically, making tests easier to maintain and more stable.

Monkey Tests
iOS has no native monkey testing tool equivalent to Android's; however, this type of test still adds value on iOS (e.g. we found 16 crashes in one of our recent Google+ releases). The Google+ team developed their own custom monkey testing framework, but there are also many third-party options available.

Backend Testing

A mobile testing strategy is not complete without testing the integration between server backends and mobile clients. This is especially true when the release cycles of the mobile clients and backends are very different. A replay test strategy can be very effective at preventing backends from breaking mobile clients. The idea behind this strategy is to simulate mobile clients with a set of golden request and response files that are known to be correct. The replay test suite then sends the golden requests to the backend server and asserts that the responses returned by the server match the expected golden responses. Since client/server responses are often not completely deterministic, you will need a diffing tool that can ignore expected differences.
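
A minimal sketch of a replay test, assuming a JSON-over-HTTP backend and hypothetical golden files, could look like this; fields that are legitimately non-deterministic (timestamps, request IDs) are stripped and keys are sorted before comparing:

```typescript
import { promises as fs } from 'fs';

// Field names treated as volatile are assumptions for this sketch.
const VOLATILE_FIELDS = new Set(['timestamp', 'requestId']);

// Drop volatile fields and sort object keys so the diff only reflects
// meaningful differences.
function canonicalize(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(canonicalize);
  if (value && typeof value === 'object') {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>)
        .filter(([key]) => !VOLATILE_FIELDS.has(key))
        .sort(([a], [b]) => a.localeCompare(b))
        .map(([key, v]) => [key, canonicalize(v)]),
    );
  }
  return value;
}

// Send a golden request to the backend and assert the response still matches
// the golden response. Requires Node 18+ for the global fetch.
async function replayTest(backendUrl: string): Promise<void> {
  const request = JSON.parse(await fs.readFile('golden_request.json', 'utf8'));
  const golden = JSON.parse(await fs.readFile('golden_response.json', 'utf8'));

  const response = await fetch(backendUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(request),
  });
  const actual = await response.json();

  const expected = JSON.stringify(canonicalize(golden), null, 2);
  const got = JSON.stringify(canonicalize(actual), null, 2);
  if (expected !== got) {
    throw new Error(`Backend response diverged from golden response:\n${got}`);
  }
}
```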

To make this strategy successful you need a way to seed a repeatable data set on the backend and to make all dependencies that are not relevant to your backend hermetic. Using in-memory servers with fake data, or replaying RPCs to external dependencies, are good ways of achieving repeatable data sets and hermetic environments. The Google+ mobile backend uses Guice for dependency injection, which allows us to easily swap out dependencies with fake implementations during testing and seed data fixtures.
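
Guice itself is a Java library; the following sketch just illustrates the same dependency-swapping idea with plain constructor injection, where tests bind a fake store seeded with fixture data instead of the real one. The names are made up for illustration.

```typescript
// The backend handler only depends on this interface.
interface UserStore {
  getUser(id: string): Promise<{ id: string; name: string } | undefined>;
}

// Fake implementation seeded with fixture data, used only in tests.
class InMemoryUserStore implements UserStore {
  constructor(private readonly fixtures: Map<string, { id: string; name: string }>) {}
  async getUser(id: string) {
    return this.fixtures.get(id);
  }
}

// Production code binds a database-backed store; tests bind the in-memory fake.
class ProfileHandler {
  constructor(private readonly store: UserStore) {}
  async handle(userId: string): Promise<string> {
    const user = await this.store.getUser(userId);
    return user ? `Hello, ${user.name}` : 'Not found';
  }
}

// In a test:
const handler = new ProfileHandler(
  new InMemoryUserStore(new Map([['1', { id: '1', name: 'Ada' }]])),
);
```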

Diagram: Normal flow vs Replay Tests flow


Conclusion

Mobile app testing can be very challenging, but building a comprehensive test strategy that understands the nature of different platforms and tools is the key to success. Providing a reliable and hermetic test environment is as important as the tests you write.

Finally, make sure you prioritize your automation efforts according to your team's needs. This is how we prioritize on the Google+ team:
  1. Unit tests: These should be your first priority on either Android or iOS. They run fast and are less flaky than any other type of test.
  2. Backend tests: Make sure your backend doesn’t break your mobile clients. Breakages are very likely to happen when the release cycle of mobile clients and backends are different.
  3. UI tests: These are slower and flakier by nature. They also take more time to write and maintain. Make sure you provide coverage for at least the critical paths of your app.
  4. Monkey tests: This is the final step to complete your mobile automation strategy.


Happy mobile testing from the Google+ team.


The Google Test Automation Conference (GTAC) was held last week in NYC on April 23rd & 24th. The theme for this year's conference was Mobile and Media. We were fortunate to have a cross section of attendees and presenters from industry and academia. This year’s talks focused on trends we are seeing in industry, combined with compelling talks on tools and infrastructure that can have a direct impact on our products. We believe we achieved a conference that was for engineers, by engineers. GTAC 2013 demonstrated that there is a strong trend toward the emergence of test engineering as a computer science discipline across companies and academia alike.

All of the slides, video recordings, and photos are now available on the GTAC site. Thank you to all the speakers and attendees who made this event spectacular. We are already looking forward to the next GTAC. If you have suggestions for next year’s location or theme, please comment on this post. To receive GTAC updates, subscribe to the Google Testing Blog.

Here are some responses to GTAC 2013:

“My first GTAC, and one of the best conferences of any kind I've ever been to. The talks were consistently great and the chance to interact with so many experts from all over the map was priceless.” - Gareth Bowles, Netflix

“Adding my own thanks as a speaker (and consumer of the material, I learned a lot from the other speakers) -- this was amazingly well run, and had facilities that I've seen many larger conferences not provide. I got everything I wanted from attending and more!” - James Waldrop, Twitter

“This was a wonderful conference. I learned so much in two days and met some great people. Can't wait to get back to Denver and use all this newly acquired knowledge!” - Crystal Preston-Watson, Ping Identity

“GTAC is hands down the smoothest conference/event I've attended. Well done to Google and all involved.” - Alister Scott, ThoughtWorks

“Thanks and compliments for an amazingly brain activity spurring event. I returned very inspired. First day back at work and the first thing I am doing is looking into improving our build automation and speed (1 min is too long. We are not building that much, groovy is dynamic).” - Irina Muchnik, Zynx Health




The Google Students team hosted a Hangouts On Air event with several Google SETs (Diego Salas, Karin Lundberg, Jonathan Velasquez, Chaitali Narla, and Dave Chen) discussing the SET role.

Software Engineers in Test at Google - Covering your (Code)Bases



Interested in joining the ranks of TEs or SETs at Google? Search for Google test jobs.


What a Test Engineer does day-to-day generally depends on three aspects:

Individual’s Strengths & Interests:
Everyone is different and every TE has different passions, strengths and areas of expertise. Thankfully, Google’s a big enough company that many different areas of testing are available, and we gravitate to testing products we like. All TEs start with core competencies in testing, coding, and algorithms. How a TE applies this knowledge varies.

The Type of Product:
Desktop, web app or mobile? Frontend or backend? The technologies that our products use and run on create a lot of variation in what and how we test.

The Product’s History / Lifecycle:
Early-concept products don’t resemble those that exist in production, and the amount of testing a product already has will determine what testing the TE focuses on. We create a test roadmap that parallels the product’s development cycle and addresses any testing gaps.

If you still want to know what a day in the life of a Test Engineer entails, we’ll never be able to give you a general answer. Instead, I suggest you check out what Alan Faulkner is doing or ask the next Google Test Engineer you meet.

Interested in joining the ranks of Test Engineers (or Software Engineers in Test)? Check out http://goo.gl/2RDKj

About the Contributors:
Albert Drona has been at Google for 5 years and is currently working on Google Maps for Mobile.
Jatin Shah has been at Google for 9 months on Google+.
Mohammad Khan has been at Google for 7 years and is currently working on Google+ releases.
About the Author:
Yvette Nameth has been at Google for 5 years and is currently working on Google Maps rendering.

