There are two communities who regularly find bugs: the testers, who are paid to find them, and the users, who stumble upon them quite by accident. Clearly the users aren't doing so on purpose, but through the normal course of using the software to get work (or entertainment, socializing and so forth) done, failures do occur. Often it is the magic combination of an application interacting with real user data in a real user's computing environment that causes software to fail. Isn't it obvious, then, that testers should endeavor to create such data and environmental conditions in the test lab in order to find these bugs before the software ships?
Actually, the test community has been diligently attempting to do just that for decades. I call this bringing the user into the test lab, either in body or in spirit. My own PhD dissertation was on the topic of statistical usage testing, and I was nowhere near the first person to think of the idea, as my multi-page bibliography will attest. But there is a natural limit to the success of such efforts. Testers simply cannot be users or simulate their actions in a realistic enough way to find all the important bugs. Unless you actually live in the software, you will miss important issues.
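The core idea of statistical usage testing can be sketched in a few lines: estimate an operational profile (the relative frequencies with which real users perform each action) and draw test actions at random according to that profile, so testing effort mirrors real usage. Here is a minimal, hypothetical sketch; the action names and weights are invented for illustration, not taken from any real study.

```java
import java.util.*;

// Hypothetical sketch of statistical usage testing: test actions are
// drawn at random according to an operational profile -- the estimated
// frequency with which real users perform each action.
public class UsageProfileSampler {
    private final NavigableMap<Double, String> cumulative = new TreeMap<>();
    private final Random random;
    private double total = 0;

    public UsageProfileSampler(Map<String, Double> profile, long seed) {
        // Build a cumulative distribution over the actions.
        for (Map.Entry<String, Double> e : profile.entrySet()) {
            total += e.getValue();
            cumulative.put(total, e.getKey());
        }
        random = new Random(seed);
    }

    // Draw the next action to exercise, weighted by usage frequency.
    public String nextAction() {
        return cumulative.higherEntry(random.nextDouble() * total).getValue();
    }

    public static void main(String[] args) {
        // Invented profile: half of real usage is opening documents, so
        // half of the generated test actions exercise that path.
        Map<String, Double> profile = new LinkedHashMap<>();
        profile.put("openDocument", 0.50);
        profile.put("saveDocument", 0.35);
        profile.put("printDocument", 0.15); // rare actions still get some coverage
        UsageProfileSampler sampler = new UsageProfileSampler(profile, 42);
        for (int i = 0; i < 5; i++) {
            System.out.println(sampler.nextAction());
        }
    }
}
```

The point of the weighting is that reliability as the user experiences it is dominated by the common paths, so those paths get proportionally more test executions, while rare actions are still sampled occasionally.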
It’s like homeownership. It doesn’t matter how well the house is built. It doesn’t matter how diligent the builder and the subcontractors are during the construction process. The house can be thoroughly inspected during every phase of construction by the contractor, the homeowner and the state building inspector. There are just some problems that will only be found once the house is occupied for some period of time. It needs to be used, dined in, slept in, showered in, cooked in, partied in, relaxed in and all the other things homeowners do in their houses. It’s not until the teenager takes an hour-long shower while the sprinklers are running that the septic system is found deficient. It’s not until a car is parked in the garage overnight that we find out the rebar was left out of the concrete slab. The builder won't and often can't do these things.
And time matters as well. It takes a few months of blowing light bulbs at the rate of one every other week to discover the glitch in the wiring, and a year has to pass before the nail heads begin protruding from the drywall. These are issues for the homeowner, not the builder. These are the homeowner's equivalents of memory leaks and data corruption; time is a necessary element in finding such problems.
There are some number of bugs that simply cannot be found until the house is lived in, and software is no different. It needs to be in the hands of real users doing real work with real data in real environments. Those bugs are as inaccessible to testers as nail pops and missing rebar are to home builders.
Testers are homeless. We can do what we can do and nothing more. It’s good to understand our limitations and plan for the inevitable “punch lists” from our users. Pretending that once an application is released the project is over is simply wrongheaded. There is a warranty period that we are overlooking, and that period is still part of the testing phase.
Memory is supposed to be the first thing to go as one ages but in the grand scheme of engineering, software development can hardly be called elderly. Indeed, we’re downright young compared to civil, mechanical, electrical and other engineering disciplines. We cannot use age as an excuse for our amnesia.
There are two types of amnesia that plague software testers. The first is team amnesia, which causes us to forget our prior projects, our prior bugs, tests, failures and so forth. It’s time we developed a collective memory that will help us stop repeating our mistakes. Every project is not a fresh start; it's only a new target for what is now a more experienced team. The starship Enterprise keeps a ship's log, a diary that documents its crew's adventures and that can be consulted for details that might help them out of some current jam. I'm not advocating a diary for test teams, but I do want a mechanism for knowledge retention. The point is that as a team we build on our collective knowledge and success. The longer the Enterprise crew's memory, the better their execution.
Quick, tell me about the last major failure of your team’s product. Does your team have a collective memory of common bugs? Do you share good tests? If one person writes a test that verifies some functionality, does everyone else know this so they can spend their time testing elsewhere? Do problems that break automation get documented so the painstaking analysis to fix them doesn’t have to be repeated? Does everyone know what the others are doing so that their testing overlaps as little as possible? Is this accomplished organically with dashboards and ongoing communication, or are the only sync points work-stopping, time-wasting meetings? Answer honestly. The first step to recovery is to admit you have a problem.
The second type of memory problem is industry amnesia. When I mentioned Boris Beizer and the pesticide paradox in my last post, how many of you had to look it up? And those who did know about it, how are your AJAX skills? Be honest... there are folks with both a historical perspective and modern skills, but they are far too rare. Our knowledge, it seems, isn’t collective; it’s situational. Those who remember Beizer's insights worked in a world where AJAX didn’t exist; those who speak the language of the web without trouble missed a lot of foundational thinking and wisdom. Remembering only what is in front of us is not really memory.
Industry amnesia is a real problem. Think about it this way: that testing problem in front of you right now (insert a problem that you are working on here) has been solved before. Are you testing an operating system? Someone has, many of them. Web app? Yep, that’s been done. AJAX? Client-server? Cloud service? Yes, yes and yes. Chances are that what you are currently doing has been done before. There are some new testing problems, but chances are the one in front of you now isn’t one of them. Too bad the collective memory of the industry is so poor, or it would be easy to reach out for help.
Let me end this column by pointing my finger inward: How will we (Google) test the newly announced Chrome operating system? How much collective knowledge have we developed from Chrome and Android? How much of what we learned testing Android will help? How much of it will be reusable? How easily will the Chrome and Android test teams adapt to this new challenge? Certainly many test problems are ones we've faced before.
The organization of the 4th Google Conference on Test Automation (GTAC 2009), to be held in Zurich on October 21-22, is well underway and we are looking forward to an exciting event. We are pleased to announce the GTAC website and that presenters will be able to apply for sponsorship.
Sponsorship available: One presenter per accepted proposal will be able to apply for sponsorship from Google. For sponsored presenters, Google will arrange and pay for travel and lodging (within reason). Further details will be announced to speakers of accepted proposals.
Website on-line: The conference website (http://www.gtac.biz/) is now on-line with lots of information about the event, how to submit proposals, the venue, and much more.
Old habits die hard, particularly when it comes to testing: once you get used to working on a project blanketed with lots of fast-running unit tests, it's hard to go back. So when some of us at Google learned that we weren't able to use our favorite mocking libraries when writing for our favorite mobile platform (Android, naturally), we decided to do something about it.
As many of you already know (since you read this blog), mocking combined with dependency injection is a valuable technique for helping you write small, focused unit tests. And those tests in turn can help you structure your code so that it's loosely coupled, making it more readable and maintainable. With a bit of setup, this can be done in Android, too. We've put together a tutorial describing our approach. It's the first installment in what we hope will be a series of articles on Android app development and testing, from the perspective of Googlers who are not actually on the Android team. We'd love to hear whether you find it useful.
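For readers who haven't tried the technique, its shape is simple enough to show without any library at all. The sketch below uses invented names (NewsFetcher and HeadlineViewer are illustrations, not part of the tutorial): the class under test receives its dependency through its constructor, and the test injects a hand-rolled stand-in, so no real network or device is involved.

```java
// Hypothetical sketch of mocking plus dependency injection.
// The dependency is expressed as an interface...
interface NewsFetcher {
    String latestHeadline();
}

// ...and the class under test receives it through its constructor
// instead of constructing it internally, so tests can substitute a fake.
class HeadlineViewer {
    private final NewsFetcher fetcher;

    HeadlineViewer(NewsFetcher fetcher) {
        this.fetcher = fetcher;
    }

    String display() {
        String headline = fetcher.latestHeadline();
        return headline == null ? "(no news)" : headline.toUpperCase();
    }
}

public class HeadlineViewerTest {
    public static void main(String[] args) {
        // Hand-rolled mock: a canned response, no network, no device.
        NewsFetcher fake = () -> "chrome os announced";
        HeadlineViewer viewer = new HeadlineViewer(fake);
        if (!viewer.display().equals("CHROME OS ANNOUNCED")) {
            throw new AssertionError("unexpected headline");
        }
        System.out.println("test passed");
    }
}
```

A mocking library mainly automates writing stand-ins like `fake` and verifying how they were called; the loose coupling comes from the injection itself, which is why the resulting production code stays more readable and maintainable.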