On behalf of the Committee and Google I want to thank all the speakers, attendees and volunteers who made this event a great professional engagement. Some moments of the conference are captured in photos.

Looking forward to next year’s GTAC in the city of Google’s headquarters.

Happy Holidays.
Sujay Sahni for the GTAC 2010 Committee



Day 1

Welcome and Opening Remarks
Sujay Sahni, Google Inc. & GTAC Committee Chair
video slides

Day 1 Opening Keynote
What Testability Tells us About the Software Performance Envelope
Robert Victor Binder, Founder and CEO, mVerify
video slides abstract

Twist, a next generation functional testing tool for building and evolving test suites
Vivek Prahlad, ThoughtWorks
video slides abstract

The Future of Front-End Testing
Greg Dennis & Simon Stewart, Google Inc.
video slides abstract

Lightning Talks/Interactive Session
GTAC Attendees
video slides

Testivus on Testability
Alberto Savoia, Google Inc.
video slides

Lessons Learned from Testability Failures
Esteban Manchado Velázquez, Opera Software ASA
video slides abstract

Git Bisect and Testing
Christian Couder
video slides abstract

Flexible Design? Testable Design? You Don’t Have To Choose!
Russ Rufer & Tracy Bialik, Google Inc.
video slides abstract



Day 2

Day 2 Opening Keynote
Automatically Generating Test Data for Web Applications
Jeff Offutt, Professor of Software Engineering, Volgenau School of Information and Technology, George Mason University
video slides abstract

Early Test Feedback by Test Prioritisation
Shin Yoo, University College London & Robert Nilsson, Google Inc.
video slides abstract

Measuring and Monitoring Experience in Interactive Streaming Applications
Shreeshankar Chatterjee, Adobe Systems India
video slides abstract

Crowd Source Testing, Mozilla Community Style
Matt Evans, Mozilla
video slides abstract

Lightning Talks/Interactive Session
GTAC Attendees
video slides

Closing Keynote - Turning Quality on its Head
James Whittaker, Engineering Director, Google Inc.
video slides abstract

Closing Panel Discussion
GTAC Attendees
video

Closing Remarks
Sujay Sahni, Google Inc. & GTAC Committee Chair
video slides


What do you call a test that tests your application through its UI? An end-to-end test? A functional test? A system test? A Selenium test? I’ve heard them all, and more. I reckon you have too. Tests running against less of the stack? The same equally frustrating inconsistency. Just what, exactly, is an integration test? A unit test? How do we name these things?

Gah!

It can be hard to persuade your own team to settle on a shared understanding of what each name actually means. The challenge increases when you encounter people from another team or project who are using different terms than you. More (less?) amusingly, you and that other team may be using the same term for different test types. “Oh! That kind of integration test?” Two teams separated by a common jargon.

Double gah!

The problem with naming test types is that the names tend to rely on a shared understanding of what a particular phrase means. That leaves plenty of room for fuzzy definitions and confusion. There has to be a better way. Personally, I like what we do here at Google and I thought I’d share that with you.

Googlers like to make decisions based on data, rather than just relying on gut instinct or something that can’t be measured and assessed. Over time we’ve come to agree on a set of data-driven naming conventions for our tests. We call them “Small”, “Medium” and “Large” tests. They differ like so:
Feature               | Small | Medium         | Large
Network access        | No    | localhost only | Yes
Database              | No    | Yes            | Yes
File system access    | No    | Yes            | Yes
Use external systems  | No    | Discouraged    | Yes
Multiple threads      | No    | Yes            | Yes
Sleep statements      | No    | Yes            | Yes
System properties     | No    | Yes            | Yes
Time limit (seconds)  | 60    | 300            | 900+

Going into the pros and cons of each type of test is a whole other blog entry, but it should be obvious that each type of test fulfills a specific role. It should also be obvious that this doesn’t cover every possible type of test that might be run, but it certainly covers most of the major types that a project will run.

A Small test equates neatly to a unit test, a Large test to an end-to-end or system test and a Medium test to tests that ensure that two tiers in an application can communicate properly (often called an integration test).

The major advantage that these test definitions have is that it’s possible to get the tests to police these limits. For example, in Java it’s easy to install a security manager for use with a test suite (perhaps using @BeforeClass) that is configured for a particular test size and disallows certain activities. Because we use a simple Java annotation to indicate the size of the test (with no annotation meaning it’s a Small test as that’s the common case), it’s a breeze to collect all the tests of a particular size into a test suite.
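The annotation-and-suite idea can be sketched like so. This is a minimal illustration, not Google's actual internal API: the @TestSize annotation, the Size enum, and the example test names are all hypothetical, and a real setup would hand the collected methods to a test runner (and a security manager would enforce the limits) rather than just listing them.

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class SizedSuites {
  enum Size { SMALL, MEDIUM, LARGE }

  // Hypothetical size marker for test methods.
  @Retention(RetentionPolicy.RUNTIME)
  @Target(ElementType.METHOD)
  @interface TestSize {
    Size value();
  }

  // Example test class with a mix of sizes.
  static class ExampleTests {
    @TestSize(Size.MEDIUM)
    public void testDatabaseRoundTrip() {}

    public void testParserHandlesEmptyInput() {}  // no annotation => Small

    @TestSize(Size.LARGE)
    public void testFullCheckoutFlow() {}
  }

  // Collect the names of all test methods of the requested size.
  // An unannotated test method defaults to Small, the common case.
  static List<String> testsOfSize(Class<?> testClass, Size size) {
    List<String> names = new ArrayList<>();
    for (Method m : testClass.getDeclaredMethods()) {
      if (!m.getName().startsWith("test")) continue;
      TestSize ann = m.getAnnotation(TestSize.class);
      Size actual = (ann == null) ? Size.SMALL : ann.value();
      if (actual == size) names.add(m.getName());
    }
    return names;
  }

  public static void main(String[] args) {
    System.out.println("Small:  " + testsOfSize(ExampleTests.class, Size.SMALL));
    System.out.println("Medium: " + testsOfSize(ExampleTests.class, Size.MEDIUM));
    System.out.println("Large:  " + testsOfSize(ExampleTests.class, Size.LARGE));
  }
}
```

Because the size lives in metadata rather than in naming conventions, building "run all Medium tests" tooling is a simple reflection pass like the one above.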

We place other constraints, which are harder to define, around the tests. These include a requirement that tests can be run in any order (they frequently are!), which in turn means that tests need high isolation: you can’t rely on some other test leaving data behind. That’s sometimes inconvenient, but it makes it significantly easier to run our tests in parallel. The end result: we can build test suites easily, and run them consistently and as fast as possible.

Not “gah!” at all.

Can you spot the error in the following webpage?


Unless you are one of the 56 million Internet users who read Arabic, the answer is probably no. But BidiChecker, a tool for checking webpages for errors in handling of bidirectional text, can find it:


Oops! The Arabic movie title causes the line to be laid out in the wrong order, with half of the phrase "57 reviews" on one side of it and half on the other.

As this example demonstrates, text transposition errors can occur even if your web application is entirely in a left-to-right language. If the application accepts user input or displays multilingual content, this data may be in one of the right-to-left languages, such as Arabic, Hebrew, Farsi or Urdu. Displaying right-to-left text in a left-to-right environment, or vice versa, is likely to cause text garbling if not done correctly. So most user interfaces, whether left-to-right or right-to-left, need to be able to deal with bidirectional (BiDi) text.

Handling BiDi text can be tricky and requires special processing at every appearance of potentially BiDi data in the UI. As a result, BiDi text support often regresses when a developer adds a new feature and fails to include BiDi support in the updated code.

Called from your automated test suite, BidiChecker can catch regressions before they go live. It features a pure JavaScript API which can easily be integrated into a test suite based on common JavaScript test frameworks such as JSUnit. Here's a sample test for the above scenario:


// Check for BiDi errors with Arabic data in an English UI.
function testArabicDataEnglishUi() {
  // User reviews data to display; includes Arabic data.
  var reviewsData = [
    {'title': 'The Princess Bride', 'reviews': '23'},
    {'title': '20,000 Leagues Under the Sea', 'reviews': '17'},
    {'title': 'ستار تريك', 'reviews': '57'}  // "Star Trek"
  ];

  // Render the reviews in an English UI.
  var app = new ReviewsApp(reviewsData, testDiv);
  app.setLanguage('English');
  app.render();

  // Run BidiChecker.
  var errors = bidichecker.checkPage(/* shouldBeRtl= */ false, testDiv);

  // This assertion will fail due to BiDi errors!
  assertArrayEquals([], errors);
}

We’ve just released BidiChecker as an open source project on Google Code, so web developers everywhere can take advantage of it. We hope it makes the web a friendlier place for users of right-to-left languages and the developers who support them.

By Jason Elbaum, Internationalization Team



Category: Testing
  • Early Test Feedback by Test Prioritisation (Shin Yoo, University College London & Robert Nilsson, Google Inc.)
  • Crowd-source testing, Mozilla community style (Matt Evans, Mozilla)
  • Measuring and Monitoring Experience in Interactive Streaming Multimedia Web Applications (Shreeshankar Chatterjee, Adobe Systems India)
Category: Testability
  • Flexible Design? Testable Design? You Don’t Have To Choose! (Russ Rufer and Tracy Bialik, Google Inc.)
  • Git Bisect and Testing (Christian Couder)
  • Lessons Learned from Testability Failures (Esteban Manchado Velazquez, Opera Software ASA)
Category: Test Automation
  • The Future of Front-End Testing (Greg Dennis and Simon Stewart, Google Inc.)
  • Twist, a next generation functional testing tool for building and evolving test suites (Vivek Prahlad, ThoughtWorks)
For further information on the conference please visit its webpage at http://www.gtac.biz.

Sujay Sahni for the GTAC 2010 Committee