by Alberto Savoia

I first posted this article a few years ago on the Artima Developer website, but the question of what counts as adequate code coverage keeps coming up, so I thought it was time to repost some Testivus wisdom on the subject.
Testivus on Test Coverage

Early one morning, a young programmer asked the great master:
“I am ready to write some unit tests. What code coverage should I aim for?”
The great master replied:
“Don’t worry about coverage, just write some good tests.”
The young programmer smiled, bowed, and left.
Later that day, a second programmer asked the same question. The great master pointed at a pot of boiling water and said:
“How many grains of rice should I put in that pot?”
The programmer, looking puzzled, replied:
“How can I possibly tell you? It depends on how many people you need to feed, how hungry they are, what other food you are serving, how much rice you have available, and so on.”
“Exactly,” said the great master. The second programmer smiled, bowed, and left.
Toward the end of the day, a third programmer came and asked the same question about code coverage.
“Eighty percent and no less!” replied the master in a stern voice, pounding his fist on the table.
The third programmer smiled, bowed, and left.
After this last reply, a young apprentice approached the great master:
“Great master, today I overheard you answer the same question about code coverage with three different answers. Why?”
The great master stood up from his chair:
“Come get some fresh tea with me and let’s talk about it.”
After they filled their cups with steaming hot green tea, the great master began:
“The first programmer is new and just getting started with testing. Right now he has a lot of code and no tests. He has a long way to go; focusing on code coverage at this time would be depressing and quite useless. He’s better off just getting used to writing and running some tests. He can worry about coverage later. The second programmer, on the other hand, is quite experienced both at programming and testing. When I replied by asking her how many grains of rice I should put in a pot, I helped her realize that the amount of testing necessary depends on a number of factors, and she knows those factors better than I do – it’s her code after all. There is no single, simple answer, and she’s smart enough to handle the truth and work with that.”
“I see,” said the young apprentice, “but if there is no single simple answer, then why did you tell the third programmer ‘Eighty percent and no less’?” The great master laughed so hard and loud that his belly, evidence that he drank more than just green tea, flopped up and down.
“The third programmer wants only simple answers – even when there are no simple answers … and then does not follow them anyway.”
The young apprentice and the grizzled great master finished drinking their tea in contemplative silence.
Note: Apparently, there were lots of downloads of the Testivus booklet and I hit some kind of quota on my personal account. If you have problems reaching the original link below, please try this new download link or this one.
A major topic at this year's GTAC conference is going to be testability: "We also want to highlight methodologies and tools that can be used to build testability into our products." That's great!
Testability is one of the most important, yet overlooked, attributes of code – and one that is not discussed enough. That's unfortunate, because by the time the issue of testability comes up in a project, it's usually too late. As preparation and seeding for GTAC, I thought it would be fun and useful to get some discussions on testability going. So here we go, feel free to chime in with your thoughts.
A few years ago, after watching one too many episodes of Kung Fu, I was inspired to write a pretentious and cryptic little booklet about testing called "The Way of Testivus" (PDF).
Testivus addresses the issue of testability in a few places, but I would like to start the discussion with this maxim:
To me, "Think of code and tests as one" is the very foundation of testability. If you don't think about testing as you design and implement your code, you are very likely to make choices that will impair testability when the time comes. This position seemed obvious and non-controversial to me at the time I wrote it, and I still stand by it. Most people seem to agree with it as well, and more than one person told me that it's their favorite and most applicable maxim in all of Testivus. There are, however, three groups of people who took issue with it.
Some people, mostly from the TDD camp, think that my choice of words leaves too much wiggle room: "Thinking about the tests is not enough; they should be writing and running those tests at the same time."
Others think that code and tests should not be thought of as one at all, but should be treated independently – ideally as adversaries: "I don't want code and tests to be too 'friendly'. Production code should not be changed or compromised to make the testing easier, and tests should not trust the hooks put in the code to make it more testable." Most of the people in this camp are not big fans of unit/developer testing in the first place, but not all.

One person, a believer in developer testing, told me that he gets the best results with a Dr. Jekyll and Mr. Hyde approach. He assumes two different roles and personalities depending on whether he's coding or testing his own code. When coding, he's the constructive Dr. Jekyll, who focuses on elegant and efficient design and algorithms – and does not worry about testability. When testing, he turns into the destructive Mr. Hyde; he tries to forget that it's his code or how he implemented it, and puts all his energy and anger into trying to break it. Sounds like it could work quite well – though I don't think I'd want this person as an office mate during the Mr. Hyde phase.
A third group thought that the maxim was fine for unit tests, but not applicable to other types of tests, which were best served by an adversarial, black-box approach.
What are your thoughts? Is it enough to think about testability when designing or writing the code, or must you actually write and run some tests in parallel with the code? Does anyone agree with the position that code and tests should be designed and developed in isolation? Are there other Dr. Jekylls and Mr. Hydes out there?