(resuming our testing on the toilet posts...)

In a previous episode, we extracted methods to simplify testing in Python. But if these extracted methods make the most sense as private class members, how can you write your production code so it doesn't depend on your test code? In Python this is easy; but in C++, testing private members requires more friend contortions than a game of Twister®.


// my_package/dashboard.h
class Dashboard {
 private:
  scoped_ptr<Database> database_;  // instantiated in constructor

  // Declaration of functions GetResults(), GetResultsFromCache(),
  // GetResultsFromDatabase(), CountPassFail()

  friend class DashboardTest;  // one friend declaration per test fixture
};

You can apply the Extract Class and Extract Interface refactorings to create a new helper class containing the implementation. Forward declare the new interface in the .h of the original class, and have the original class hold a pointer to the interface. (This is similar to the Pimpl idiom.) You can distinguish between the public API and the implementation details by separating the headers into different subdirectories (/my_package/public/ and /my_package/ in this example):


// my_package/public/dashboard.h
class ResultsLog;  // extracted helper interface

class Dashboard {
 public:
  explicit Dashboard(ResultsLog* results) : results_(results) { }
 private:
  scoped_ptr<ResultsLog> results_;
};

// my_package/results_log.h
class ResultsLog {
 public:
  // Declaration of virtual functions GetResults(),
  // GetResultsFromCache(), GetResultsFromDatabase(),
  // CountPassFail()
};

// my_package/live_results_log.h
class LiveResultsLog : public ResultsLog {
 public:
  explicit LiveResultsLog(Database* database)
      : database_(database) { }
 private:
  scoped_ptr<Database> database_;
};

Now you can test LiveResultsLog without resorting to friend declarations. This also enables you to inject a MockResultsLog instance when testing the Dashboard class. The functionality is still private to the original class, and the use of a helper class results in smaller classes with better-defined responsibilities.


We thought you might be interested in another article from our internal monthly testing newsletter called CODE GREEN... Originally titled: "Opinion: But it works on my machine!"

We hear so much about "make your tests small and run fast." While this is important for quick CL verification, system level testing is important too, and it doesn't get enough air time.

You write cool features. You write lots and lots of unit tests to make sure your features work. You make sure the unit tests run as part of your project's continuous build. Yet when the QA engineer tries out a few user scenarios, she finds many defects. She logs them as bugs. You try to reproduce them, but ... you can't!

Sound familiar? It might to a lot of you who deal with complex systems that touch many other dependent systems. Want to test a simple service that just talks to a database? Simple, write a few unit tests with a mocked-out database. Want to test a service that connects to authentication to manage user accounts, talks to a risk engine, a biller, and a database? Now that's a different story!

So what are system level tests again?
System level tests to the rescue. They're also referred to as integration tests, scenario tests, and end-to-end tests. No matter what they're called, these tests are a vital part of any test strategy. They wait for screen responses, they punch in HTML form fields, they click buttons and links, and they verify text on the UI (sometimes in different languages and locales). Heck, sometimes they even poke open inboxes and verify email content!

But I have a gazillion unit tests and I don't need system level tests!
Sure you do. Unit tests help you quickly verify that your latest code changes haven't caused your existing code to regress. They are an invaluable part of the agile developer's tool kit. But when code is finally packaged and deployed, it can look and behave very differently. And no amount of unit tests can tell you whether that awesome UI feature you designed works the way it was intended, or whether one of the services your feature depends on is broken or misbehaving. If you think of a "testing diet," system level tests are like carbohydrates -- a crucial part of your diet, but only in the right amount!

System level tests provide that sense of comfort that everything works the way it should, when it lands in the customer's hands. In short, they're the closest thing to simulating your customers. And that makes them pretty darn valuable.

Wait a minute -- how stable are these tests?
Very good question. It should be pretty obvious that if you test a full-blown deployment of any large, complex system, you're going to run into some stability issues, especially since large, complex systems consist of components that talk to many other components, sometimes asynchronously. And real-world systems aren't perfect. Sometimes the database doesn't respond at all, sometimes the web server responds a few seconds late, and sometimes a simple confirmation message takes forever to reach an email inbox!

Automated system level tests are sensitive to such issues, and sometimes report false failures. The key is utilizing them effectively, quickly identifying and fixing false failures, and pairing them up with the right set of small, fast tests.


  • Al Snow – Form Letter Generator Technique

  • Chris McMahon – Emulating User Actions in Random and Deterministic Modes

  • Dave Liebreich – Test Mozilla

  • David Martinez – Tk-Acceptance

  • Dave W. Smith – System Effects of Slow Tests

  • Harry Robinson – Exploratory Automation

  • Jason Reid – Not Trusting Your Developers

  • Jeff Brown – MBUnit

  • Jeff Fry – Generating Methods on the Fly

  • Keith Ray – ckr_spec

  • Kurman Karabukaev – Whitebox testing using Watir

  • Mark Striebeck – How to Get Developers and Tester to Work Closer Together

  • Sergio Pinon – UI testing + Cruise Control

There were also brainstorming exercises and discussions on the benefits that DT/TDs can bring to organizations and the challenges they face. Several participants have blogged about the Summit. The discussions continue at http://groups.google.com/group/td-dt-discuss.

If you spend your days coding and testing, try this opening exercise from the Summit. Imagine that:

T – – – – – – – – – – D

is a spectrum that has "Tester" at one end and "Developer" at the other. Where would you put yourself, and why?