By Patrick Copeland
1 comment


Understand your org's release process and priorities
Late-cycle pre-release testing is the most nerve-racking part of the entire development cycle. Test managers have to strike a balance between doing the right testing and ensuring a harmonious release. I suggest attending all the dev meetings, but as release approaches you certainly shouldn't miss a single one. Pay close attention to their worries and concerns. Nightmare scenarios have a tendency to surface late in the process. Add test cases to your verification suite to ensure those scenarios won't happen.

The key here is to get late-cycle pre-release testing right without any surprises. Developers can get skittish, so make sure they understand your test plan going into the final push. The trick isn't to defer to development on how to perform release testing but to make sure they are on board with your plan. I find that at Google, increasing the team's focus on manual testing is wholeheartedly welcomed by the dev team. Find your dev team's comfort zone and strike a balance between doing the right testing and making the final hours and days as wrinkle-free as possible.

Question your testing process
Start by reading every test case and reviewing all automation. Can you map these test cases back to the test plan? How many tests do you have per component? Per feature? If a bug is found outside the testing process did you create a test case for it? Do you have a process to fix or deprecate broken or outdated test cases?

As a test manager, the completeness and thoroughness of the set of tests is your responsibility. You may not be writing or running a lot of the tests yourself, but you should have them all in your head and be the first to spot gaps. This is something a new manager should tackle early and stay on top of at all times.

Look for ways to innovate
The easiest way to look good in the eyes of developers is to maintain the status quo. Many development managers appreciate a docile and subservient test team. Many of them like a predictable and easily understood testing practice. It's one less thing to worry about (even in the face of obvious inefficiencies the familiar path is often the most well worn).

As a new manager, it is your job not to let them off so easily! Make a list of the parts of the process that concern you and the parts that seem overly hard or inefficient. These are the places to apply innovation. Prepare for nervousness from the developer ranks, but do yourself and the industry a favor and place some bets for the long term.

There is no advice I have found universally applicable concerning how best to foster innovation. What works for me is to find the stars on your team and make sure they are working on something they can be passionate about. As a manager, this is the single most important thing you can do to increase productivity and foster innovation.

I got this question in email this morning from a reader:

"I am a test supervisor at --- and was promoted to a QA management position yesterday. I'm excited and terrified, so I have been thinking about how to organize the thought in my mind. After attending StarWest and following your blog for a while now, I am very interested in your opinion.

If you were a brand new QA Manager, and you knew what you know now, what are the top 5-10 things you would focus on?"

I am flattered by the confidence, but in the event it is misplaced, I wanted to answer this question publicly and invite readers to chime in with their own experiences. Besides, I am curious about other opinions, because I live with this same excitement and terror every day and could use a little advice myself. Here are my first couple, and I'll add some more in future posts (unless of course you guys beat me to it).

Start living with your product, get passionate about it
Drink your product's Kool-Aid, memorize the sales pitch, and understand its competitive advantages, but retain your skepticism. Test/QA managers should be as passionate about the product as dev managers, but we need to temper our passion with proof. Make sure the test team never stops testing the functionality represented by the sales pitch.

Furthermore, part of living with your product is being a user yourself. I now live without a laptop and exclusively use my Chrome OS Netbook for my day to day work. As people see me with it in the hallways, I get to recite its sales pitch many times every day. Great practice. I also get to live with its inadequacies and take note of the things it has yet to master. This is great discussion fodder with devs and other stakeholders and also forces me to consider competitive products. When I can't do something important on my Chrome OS Netbook, I have to use a competing product and this spawns healthy discussions about how users will perceive our product's downside and how we can truthfully communicate the pros and cons of our product to customers. Every day becomes a deep dive into my product as an actual user.

This is a great way to start off on a new product.

Really focus on the test plan, make it an early priority
If you are taking over as test manager for an existing product, chances are that a test plan already exists, and chances are that the test plan is inadequate. I'm not being unkind to your predecessor here; I am just being truthful. Most test plans are transitory docs.

Now let me explain what I mean by that. Testers are quick to complain about inadequate design docs: devs throw together a quick design doc or diagram, but once they start coding, that design stagnates as the code takes on a life of its own. Soon the code does not match the design and the documentation is unreliable. If this is not your experience, congratulations, but I find it far more the norm than design docs that are continually updated.

Testers love to complain about this: "How can I test a product without a full description of what the product does?" But don't we often do the same thing with our test plans? We throw together a quick test plan, but as we start writing test cases (automated or manual), they take on a life of their own. Soon the test cases diverge from the test plan as we chase new development and our experience develops new testing insight. The test plan has become just like the design docs: a has-been document.

You're a new test manager now; make fixing these documents one of your first priorities. You'll get to know your product's functionality, and you'll see holes in the current test infrastructure that will need plugging. Plus, you'll have a basis for communicating with dev managers and showing them you are taking quality seriously. Dev managers at Google love a good test plan; it gives them confidence that you know what you are doing.

Coming up next:

Understand your org's release process and priorities
Question your testing process
Look for ways to innovate




The open-source launch of Chrome OS was announced today, and the source is available to download and build at http://www.chromium.org/chromium-os. The entire project, including testing, is being open-sourced and made available for scrutiny, to help others both contribute and learn from our experiences.

The test engineering team haven't been idle - we're a small, international team, and as a result we're having to be innovative in our testing so we can maximize our contribution to the project. We had two goals: to take care of short-term release quality and to plan an automation infrastructure that will serve the operating system for many years to come.

Currently we're combining manual and automated testing to achieve these goals. The manual testing provides fast feedback while we extend our use of test automation to optimize future testing. For the automation we're drawing on a collection of open-source tools.

There are some interesting plans and ideas afoot on how to significantly increase the testability and accessibility of Chrome OS - watch for future blog posts on these topics in the coming months!

We have used various approaches to design our tests, including 'tours' (mentioned in various posts on this blog). We are also applying the concept of 'attack surface' from security testing more generally, to determine what to test from both technical and functional perspectives.

For the launch we devised the 'early-adopters tour', in which we validated the open-source build and installation instructions on a collection of netbooks purchased from local stores (we expect many of you will want to build and run Chrome OS on similar machines).

If you're one of the early adopters, have fun building, installing, and running Chrome OS, and post your comments and ideas here. We hope you enjoy using Chrome OS as much as we're enjoying testing it!



I appreciate James' offer to let me talk about how I have used the FedEx tour in Mobile Ads. Good timing, too, as I just found two more priority-0 bugs with the automation the FedEx tour inspired! It was fun presenting this at STAR, and I am pleased so many people attended.

Mobile has been a hard problem space for testing: a humongous browser/phone/capability combination that is changing fast as the underlying technology evolves. Add to this poor tool support for the mobile platform and the rapid evolution of the devices, and you'll understand why I am so interested in advice on how to do better test design. We've literally tried everything, from checking screenshots of Google's properties on mobile phones to treating the phone like a collection of client apps and automating them in the traditional UI button-clicking way.

Soon after James joined Google in May 2009, he started introducing the concept of tours, essentially a way of making a point of "structured" exploratory testing. Tours gave me a way to look at the testing problem in a radically new way. Traditionally, the strategy is simple: focus on the end-user interaction and verify the expected outputs from the system under test. Tours (at least for me) change this formula. They force the tester to focus on what the software does, isolating the different moving parts of the software in execution, and isolating the different parts of the software at the component (and composition) level. Tours tell me to focus on testing the parts that drive the car, rather than on whether or not the car drives. This is somewhat counterintuitive, I admit; that's why it is so important. The real added value of the tours comes from the fact that they guide me in testing those different parts and help me analyze how different capabilities interoperate. Cars will always drive you off the lot; which part will break first is the real question.

I think testing a car is a good analogy. As a system it's devilishly complicated, hard to automate, and hard to find the right combination of factors to make it fail. However, testing the dashboard can be automated; so can testing the flow of gasoline from the fuel tank to the engine and from there to the exhaust, and so can lots of other capabilities. These automated point solutions can also be combined to test a bigger piece of the whole system. It's exactly what a mechanic does when trying to diagnose a problem: he employs different strategies for testing and checking each mechanical subsystem.

At STAR West, I spoke about evolving a good test strategy with the help of tours, specifically the FedEx tour. Briefly, the FedEx tour is about tracking the movement of data and how it gets consumed and transformed by the system. It focuses on a very specific moving part, and as it turns out, a crucial one for mobile.

James' FedEx tour tells me to identify and track data through my system. Identifying it is the easy part: the data comes from the Ads database and is basically the information a user sees when the ad is rendered. When I followed it through the system, I noted three (and only three) places where the data is used (either manipulated or rendered for display). I found this to be true for all 10 localized versions of the Mobile Ads application. The eureka moment for me was realizing that if I validated the data at those three points, I had little else to do in order to verify any specific localized version of an ad. Add all the languages you want, I'll be ready!

I was able to hook verification modules into each of these three data inflection points. This basically meant validating the data for the new Click-to-Call ad parameters and the locale-specific phone number format. I was tracking how the code affects the data at each stage, which also helps localize a bug better than other conventional means... I knew exactly where the failure was! To overcome the location dependency, I mocked the GPS location parameters of the phone. As soon as I finished the automation, I ran each ad in our database through each of the language versions, verifying the integrity of the data. The only thing left was to visually verify the rendering of the ads on the three platforms, reducing the manual tests to three (one each for Android, iPhone and Palm Pre).
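
To make the idea concrete, here is a rough sketch of what such a verification hook might look like. None of this is the actual Mobile Ads code; every name below (AdData, InflectionPoint, and so on) is a hypothetical stand-in. The point is simply that each place the pipeline touches the ad data gets its own check.

import java.util.List;

// Hypothetical stand-in for the ad data described above.
interface AdData {
  boolean hasValidClickToCallParams();
  boolean phoneNumberMatchesLocale(String locale);
}

// Hypothetical stand-in for one of the three places the data is manipulated or rendered.
interface InflectionPoint {
  AdData process(AdData input);
}

final class AdDataVerifier {
  private final List<InflectionPoint> pipeline;

  AdDataVerifier(List<InflectionPoint> pipeline) {
    this.pipeline = pipeline;
  }

  /** Pushes one ad through every stage and checks the data after each one. */
  void verify(AdData ad, String locale) {
    AdData current = ad;
    for (InflectionPoint stage : pipeline) {
      current = stage.process(current);
      if (!current.hasValidClickToCallParams()
          || !current.phoneNumberMatchesLocale(locale)) {
        throw new AssertionError("Ad data corrupted at " + stage + " for locale " + locale);
      }
    }
  }
}

Run against every ad in the database for every locale, a checker along these lines leaves only the visual rendering to verify by hand.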

The FedEx tour guided me to build a succinct piece of automation, turning what could have been a huge and error-prone manual test into a reusable piece of automation that will find and localize bugs quickly. We're now looking at applying the FedEx tour across Ads and in other client and cloud areas of the company. Hopefully there will be more experience reports from others who have found it useful.

Exploratory Testing ... it's not just for manual testers anymore!

4 comments



EXPECT_THAT(value, matcher) succeeds if value matches matcher. For example,
#include <gmock/gmock.h>
using ::testing::Contains;
...
EXPECT_THAT(GetUserList(), Contains(admin_id));

verifies that the result of GetUserList() contains the administrator.

Now, pretend the punctuation isn't there in the last C++ statement and read it aloud. See what I mean?

Better yet, when an EXPECT_THAT assertion fails, it will print an informative message that includes the expression being validated, its value, and the property we expect it to have – thanks to a matcher's ability to describe itself in human-friendly language. Therefore, not only is the test code readable, the test output it generates is readable too. For instance, the above example might produce:
Value of: GetUserList()
Expected: contains "yoko"
  Actual: { "john", "paul", "george", "ringo" }

This message contains relevant information for diagnosing the problem, often without having to use a debugger.
To get the same effect without using a matcher, you'd have to write something like:
std::vector<std::string> users = GetUserList();
EXPECT_TRUE(VectorContains(users, admin_id))
    << " GetUserList() returns " << users
    << " and admin_id is " << admin_id;

which is harder to write and less clear than the one-liner we saw earlier.

Google C++ Mocking Framework (http://code.google.com/p/googlemock/) provides dozens of matchers for validating many kinds of values: numbers, strings, STL containers, structs, etc. They all produce friendly and informative messages. See http://code.google.com/p/googlemock/wiki/CheatSheet to learn more. If you cannot find one that matches (pun intended) your need, you can either combine existing matchers or define your own from scratch. Both are quite easy to do. We'll show you how in another episode. Stay tuned!

Share on Twitter Share on Facebook
2 comments


Once upon a time, Java created an experiment called checked exceptions: you know, you have to declare exceptions or catch them. Since that time, no other language (that I know of) has decided to copy this idea, but somehow Java developers are in love with checked exceptions. Here, I am going to "try" to convince you that checked exceptions, even though they look like a good idea at first glance, are actually not a good idea at all:

Empirical Evidence

Let's start with an observation of your code base. Look through your code and tell me what percentage of catch blocks rethrow or print the error. My guess is that it is in the high 90s. I would go as far as to say 98% of catch blocks are meaningless, since they just print an error or rethrow the exception, which will later be printed as an error. The reason for this is very simple. Most exceptions, such as FileNotFoundException, IOException, and so on, are a sign that we as developers have missed a corner case. The exceptions are used as a way of informing us that we, as developers, have messed up. So if we did not have checked exceptions, the exception would be thrown, the main method would print it, and we would be done with it (optionally we would catch all exceptions in main and log them if we are a server).
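
For example, a perfectly typical catch block (hypothetical code, but you have surely seen its twin) looks like this:

import java.io.FileInputStream;
import java.io.IOException;

class ConfigReader {
  int readFirstByte() {
    try {
      FileInputStream in = new FileInputStream("config.properties");
      try {
        return in.read();
      } finally {
        in.close();
      }
    } catch (IOException e) {
      // Nothing sensible to do here: print it (or rethrow it) -- the 98% case.
      e.printStackTrace();
      return -1;
    }
  }
}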

Checked exceptions force me to write catch blocks which are meaningless: more code, harder to read, and a higher chance that I will mess up the rethrow logic and eat the exception.

Lost in Noise

Now let's look at the 2-5% of catch blocks which are not rethrows, where the real, interesting logic happens. Those interesting bits of useful and important information are lost in the noise, since my eye has been trained to skim over catch blocks. I would much rather have code where a catch block indicates "pay attention! something interesting is happening here!" rather than "it is just a rethrow." Now, if we did not have checked exceptions, you would write your code without catch blocks, test your code (you do test, right?), realize that under some circumstances an exception is thrown, and deal with it by writing the catch block. In such a case, forgetting to write a catch block is no different than forgetting to write the else block of an if statement. We don't have checked ifs, and yet no one misses them, so why do we need to tell developers that FileNotFoundException can happen? What if the developer knows for a fact that it cannot happen, since he has just placed the file there, and so such an exception would mean that the filesystem has just disappeared (and your application is in no position to handle that)?

Checked exceptions make me skim the catch blocks, as most are just rethrows, making it likely that I will miss something important.

Unreachable Code

I love to write tests first and implement as a consequence of the tests. In such a situation you should always have 100% coverage, since you are only writing what the tests ask for. But you don't! It is less than 100% because checked exceptions force you to write catch blocks which are impossible to execute. Check this code out:
String bytesToString(byte[] bytes) {
  ByteArrayOutputStream out = new ByteArrayOutputStream();
  try {
    out.write(bytes);
    out.close();
    return out.toString();
  } catch (IOException e) {
    // This can never happen!
    // Should I rethrow? Eat it? Print an error?
    throw new RuntimeException(e); // written only to keep the compiler happy
  }
}

ByteArrayOutputStream will never throw IOException! You can look through its implementation and see that this is true! So why are you making me catch a phantom exception which can never happen and which I cannot write a test for? As a result, I cannot claim 100% coverage, because of things outside my control.

Checked exceptions create dead code which will never execute.

Closures Don't Like You

Java does not have closures, but it has the visitor pattern. Let me explain with a concrete example. I was creating a custom class loader and needed to override the load() method on MyClassLoader, which throws ClassNotFoundException under some circumstances. I use the ASM library, which allows me to inspect Java bytecodes. ASM works as a visitor pattern: I write visitors, and as ASM parses the bytecodes it calls specific methods on my visitor implementation. One of my visitors, as it is examining bytecodes, decides that things are not right and needs to throw a ClassNotFoundException, which the class loader contract says it should throw. But now we have a problem. What we have on the stack is MyClassLoader -> ASMLibrary -> MyVisitor. MyVisitor wants to throw an exception which MyClassLoader expects, but it cannot, since ClassNotFoundException is checked and the ASM library does not declare it (nor should it). So I have to throw a RuntimeClassNotFoundException from MyVisitor, which can pass through the ASM library and which MyClassLoader can then catch and rethrow as ClassNotFoundException.
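
A minimal sketch of that workaround, with the bodies stubbed out (the real ASM visitor interfaces are more involved, and I've used findClass() as the override point for illustration):

// An unchecked wrapper used only to smuggle the error out through ASM.
class RuntimeClassNotFoundException extends RuntimeException {
  RuntimeClassNotFoundException(String className) {
    super(className);
  }
}

class MyVisitor /* implements one of ASM's visitor interfaces */ {
  void visitSomething(String className) {
    if (somethingIsWrong(className)) {
      // Cannot throw the checked ClassNotFoundException here: ASM's methods don't declare it.
      throw new RuntimeClassNotFoundException(className);
    }
  }

  private boolean somethingIsWrong(String className) {
    return false;  // stub
  }
}

class MyClassLoader extends ClassLoader {
  @Override
  protected Class<?> findClass(String name) throws ClassNotFoundException {
    try {
      return parseWithAsm(name);  // drives ASM, which calls back into MyVisitor
    } catch (RuntimeClassNotFoundException e) {
      // Unwrap back into the checked exception the class loader contract expects.
      throw new ClassNotFoundException(e.getMessage());
    }
  }

  private Class<?> parseWithAsm(String name) throws ClassNotFoundException {
    throw new ClassNotFoundException(name);  // stub: real code would read and define the class
  }
}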

Checked exceptions get in the way of functional programming.

Lost Fidelity

Suppose the java.sql package were implemented with useful exceptions such as SqlDuplicateKeyException and SqlForeignKeyViolationException and so on (we can wish), and suppose these exceptions were checked (which they are). We would say that the SQL package has high exception fidelity, since each exception maps to a very specific problem. Now let's say we have the same setup as before, where there is some other layer between us and the SQL package; that layer can either redeclare all of the exceptions, or, more likely, throw its own. Let's look at an example. Hibernate is an object-relational mapper, which means it converts your SQL rows into Java objects. So on the stack you have MyApplication -> Hibernate -> SQL. Here Hibernate is trying hard to hide the fact that you are talking to SQL, so it throws HibernateExceptions instead of SQLExceptions. And here lies the problem. Your code knows that there is SQL under Hibernate, so it could have handled SqlDuplicateKeyException in some useful way, such as showing an error to the user, but Hibernate was forced to catch the exception and rethrow it as a generic HibernateException. We have gone from a high-fidelity SqlDuplicateKeyException to a low-fidelity HibernateException, and so MyApplication cannot do anything useful. Now, Hibernate could have thrown HibernateDuplicateKeyException, but that would mean Hibernate now has the same exception hierarchy as SQL, and we are duplicating effort and repeating ourselves.

Rethrowing checked exceptions causes you to lose fidelity, and hence makes it less likely that you can do something useful with the exception later on.

You can't do Anything Anyway

In most cases, when an exception is thrown, there is no recovery. We show a generic error to the user and log the exception so that we can file a bug and make sure that the exception will not happen again. Since 90+% of exceptions are bugs in our code, and all we do is log them, why are we forced to rethrow them over and over again?

It is rare that anything useful can be done when a checked exception happens; in most cases we die with an error! Therefore, I want that to be the default behavior of my code, with no additional typing.

How I deal with the code

Here is my strategy for dealing with checked exceptions in Java:
Share on Twitter Share on Facebook
34 comments


The more uncertain events you have to consider, the higher measured entropy climbs. People often think of entropy as a measure of randomness: the more (uncertain) events one must consider, the more random the outcome becomes.
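
To put a rough formula behind that (assuming the Shannon sense of entropy, which is what the analogy seems to lean on): for outcomes with probabilities $p_1, \dots, p_n$,

$$H = -\sum_{i=1}^{n} p_i \log_2 p_i,$$

and for $n$ equally likely outstanding tasks this reduces to $H = \log_2 n$, so every additional task a developer must juggle in parallel pushes the entropy up.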

Testers introduce entropy into development by adding to the number of things a developer has to do. When developers are writing code, entropy is low. When we submit bugs, we increase entropy. Bugs divert their attention from coding; they must now make progress in parallel on creating features and fixing them. More bugs mean more parallel tasks and higher entropy. This entropy is one reason that bugs foster more bugs... the entropic principle ensures it. Entropy creates more entropy! Finally there is math to show what is intuitively appealing: that prevention beats a cure.

However, there is nothing we can do to completely prevent the plague of entropy, short of creating developers who never err. Since that is unlikely any time soon, we must recognize how and when we are introducing entropy and do what we can to manage it. The more we can do during development, the better. Helping out in code reviews, and educating our developers about test plans, user scenarios, and execution environments so they can code against them, will reduce the number of bugs we have to report. Smoking out bugs early, submitting them in batches, and making sure we submit only high-quality bugs by triaging them ourselves will keep their minds on development. Writing good bug reports and quickly regressing fixes will keep their attention where it needs to be. In effect, this maximizes the certainty of the 'development event' and minimizes the number and impact of bugs. Entropy thus tends toward its minimum.

We can't banish this plague but if we can recognize the introduction of entropy into development and understand its inevitable effect on code quality, we can keep it at bay.

Share on Twitter Share on Facebook
4 comments


I would like to make an analogy between building software and building a car. I know it is an imperfect one, as one is about design and the other is about manufacturing, but indulge me; the lessons are very similar.

A piece of software is like a car. Let's say you would like to test a car that you are in the process of designing. Would you test it by driving it around and making modifications to it, or would you prove your design by testing each component separately? I think that testing all of the corner cases by driving the car around is very difficult. Yes, if the car drives you know that a lot of things must work (engine, transmission, electronics, etc.), but if it does not work, you have no idea where to look. And there are some things you will have a very hard time reproducing in such an end-to-end test. For example, it will be very hard to see whether the car will be able to start in the extreme cold of the North Pole, or whether the engine will overheat going full throttle up a sand dune in the Sahara. I propose we take the engine out and simulate the load on it in a laboratory.

We call driving the car around an end-to-end test, and testing the engine in isolation a unit test. With unit tests it is much easier to simulate failures and corner cases in a much more controlled environment. We need both kinds of tests, but I feel that most developers can only imagine the end-to-end tests.

But let's see how we could use tests to design a transmission. First, a little terminology change: let's not call them tests, but stories. They are stories because that is what they tell you about your design. My first story is that:

Given such a story, I could easily create a test which would prove that the above story holds for any design submitted to me. What I would most likely get back is a transmission with only a single gear in each direction. So let's write another story.

Again, I can write a test for such a transmission, but I have not specified how the forward gear should be chosen, so the resulting transmission would most likely be permanently stuck in first gear, limiting my speed; it would also over-rev the engine.

This is better, but my transmission would most likely rev the engine to the maximum before it shifted, and once it shifted to a higher gear and I slowed down, it would not downshift.

OK, now it is starting to drive like a car, but the shifting limits are still 1,000-6,000 RPM, which is not a very fuel-efficient way to drive your car.

So now our engine will not over-rev any more, but it will be a lazy car, since once the transmission is in the fuel-efficient mode it will not want to downshift.

I am not a transmission designer, but I think this is a decent start.
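
As a purely illustrative example (the stories above aren't reproduced here, so the class names and RPM thresholds below are hypothetical), a story such as "shift up before the engine over-revs, and shift back down when the revs drop" might be captured like this:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class TransmissionStoryTest {
  // A tiny hand-rolled fake engine and a minimal transmission, just enough for the stories.
  static class FakeEngine {
    private int rpm;
    void setRpm(int rpm) { this.rpm = rpm; }
    int rpm() { return rpm; }
  }

  static class AutomaticTransmission {
    private final FakeEngine engine;
    private int gear = 1;
    AutomaticTransmission(FakeEngine engine) { this.engine = engine; }
    void update() {
      if (engine.rpm() > 6000 && gear < 6) gear++;   // shift up before over-revving
      if (engine.rpm() < 1500 && gear > 1) gear--;   // shift down when the revs drop
    }
    int currentGear() { return gear; }
  }

  private final FakeEngine engine = new FakeEngine();
  private final AutomaticTransmission transmission = new AutomaticTransmission(engine);

  @Test
  public void shiftsUpBeforeTheEngineOverRevs() {
    engine.setRpm(6500);
    transmission.update();
    assertEquals(2, transmission.currentGear());
  }

  @Test
  public void shiftsBackDownWhenTheRevsDrop() {
    engine.setRpm(6500);
    transmission.update();            // now in 2nd gear
    engine.setRpm(1200);
    transmission.update();
    assertEquals(1, transmission.currentGear());
  }
}

Note that the tests only pin down the outcome (which gear we end up in), not how the transmission decides; that is exactly the point made next.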

Notice how I focused on the end result of the transmission rather than on testing its specific internals. The transmission designer would have a lot of leeway in choosing how it works internally. Once we had something and tested it in the real world, we could augment this list of stories with additional stories as we discovered additional properties we would like the transmission to possess.

If we decided to change the internal design of the transmission for whatever reason, we would have these stories as guides to make sure we did not forget anything. The stories represent assumptions which need to be true at all times. Over the lifetime of the component we can collect hundreds of stories, representing an equal number of assumptions built into the system.

Now imagine that a new designer comes on board and makes a design change which he believes will improve the responsiveness of the transmission. He can do so because the existing stories are not restrictive about how, only about what the outcome should be. The stories save the designer from breaking an existing assumption which was already designed into the transmission.

Now let's contrast this with how we would test the transmission if it were already built.

It is hard now to think about what other tests to write, since we are not using the tests to drive the design. Let's say that someone insists we get 100% coverage. We open the transmission up and we see all kinds of logic and rules, and we don't know why they are there, since we were not part of the design, so we write a test.
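
Again purely illustrative (reusing the hypothetical transmission from the earlier sketch), the kind of test that merely pins down the current internals, rather than telling a story, tends to look like this:

  @Test
  public void gearAtExactly3472Rpm() {
    engine.setRpm(3472);
    transmission.update();
    // Why must it be 1st gear at 3,472 RPM? Nobody remembers; it just is what the code does today.
    assertEquals(1, transmission.currentGear());
  }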

Tests like that are not very useful when you want to change the design: you are likely to break the test, and without fully understanding why the test was checking those specific conditions, it is hard to know whether anything is really broken when the test goes red. That is because the test does not tell a story any more; it only asserts the current design. It is likely that such a test will be in the way when you try to make design changes. The point I am trying to make is that there is a huge difference between writing tests before and writing them after.

For this reason there is a huge difference in quality between writing assumptions as stories beforehand, which force the design to emerge, and writing tests after the fact, which merely take a snapshot of a given design.
Share on Twitter Share on Facebook
8 comments

Sorry I haven't followed up on this; let the excuse parade begin: A) my new book just came out and I have spent a lot of time corresponding with readers; B) I have taken on leadership of some new projects, including the testing of Chrome and Chrome OS (yes, you will hear more about these projects right here in the future); C) I've received just short of 100 emails suggesting the 7th plague, and that takes time to sort through.

This is clearly one plague-ridden industry (and, no, I am not talking about my book!)

I've thrown out many of the suggestions that deal with a specific organization or person who just doesn't take testing seriously enough. Things like the Plague of Apathy (suggested exactly 17 times!) just don't fit. This isn't an industry plague, it's a personal/group plague. If you don't care about quality, please do us all a favor and get out of the software business. Go screw up someone else's industry; we have enough organic problems to deal with. I also didn't include the Plague of the Deluded Developer (suggested under various names 22 times) because it deals with developers that, as a Googler, I no longer have to deal with ... those who think they never write bugs. Our developers know better, and if I find out exactly where they purchased that clue I will forward the link.

Here are some of the best. Since many of them had multiple suggesters, I have credited the people who were either first or gave the most thoughtful analysis. If you are one of these people, feel free to give further details or clarifications in the comments of this post, as I am sure these summaries do not do them justice.

The Plague of Metrics (Nicole Klein, Curtis Pettit plus 18 others): Metrics change behavior, and once a tester knows how the measurement works, they test to make themselves look good, or to make the metric say what they want it to say, ignoring other more important factors. The metric becomes the goal instead of a measure of progress. The distaste for metrics in many of these emails was palpable!

The Plague of Semantics (Chris LeMesurier plus 3 others): We misuse and overuse terms, and people like to assign their own meaning to certain terms. As a result, designs and specs are often misunderstood or misinterpreted. This was also called the plague of assumptions by other contributors.

The Plague of Infinity (Jarod Salamond, Radislav Vasilev and 14 others): The testing problem is so huge it's overwhelming. We spend so much time trying to justify our coverage and explain what we are and are not testing that it takes away from our focus on testing. Every time we take a look at the testing problem we see new risks and new things that need our attention. It randomizes us and stalls our progress. This was also called the plague of endlessness and exhaustion.

The Plague of Miscommunication (Scott White and 2 others): The language of creation (development) and the language of destruction (testing) are different. Testers write a bug report, the devs don't understand it, and cycles have to be spent explaining and re-explaining. A related plague is the lack of communication that causes testers to redo work and tread the same paths as unit tests, integration tests, and even the tests that other testers on the team are performing. This was also called the plague of language (meaning the lack of a common one).

The Plague of Rigidness (Roussi Roussev, Steven Woody, Michele Smith and 5 others): Sticking to the plan/process/procedure no matter what. Test strategy cannot be bottled in such a manner, yet process-heavy teams often ignore creativity for the sake of process. We stick with the same stale testing ideas product after product, release after release. This was also called the plague of complacency. Roussi suggested a novel twist, calling this the success plague, where complacency is brought about by the success of the product: how can we be wrong when our software was so successful in the market?

And I have my own 7th Plague that I'll save for the next post. Unless anyone would like to write it for me? It's called the Plague of Entropy. A free book to the person who nails it.






Share on Twitter Share on Facebook
8 comments

1 comment


The plugin allows you to start the JS Test Driver server, capture some browsers, and run your tests, all from within Eclipse. You get pretty icons telling you which browsers were captured, the state of the server, and the state of the tests. It allows you to filter and show only failures, rerun your last launch configuration, and even set up the paths to your browsers so you can launch it all from the safety of Eclipse. And as you can see, it's super fast: some 100-odd tests in less than 10 ms. If that's not fast, I don't know what is.

For more details on JS Test Driver, visit its Google Code website and see how you can use it in your next project and even integrate it into a continuous integration system. Misko talks a little bit more about the motivation behind writing it in his Yet Another JS Testing Framework post. To try out the plugin for yourselves, add the following update site to Eclipse:

http://js-test-driver.googlecode.com/svn/update/

For all you IntelliJ fanatics, there is something similar in the works.
Share on Twitter Share on Facebook
1 comment

Yes, I only posted 6 plagues. Congratulations for catching this purposeful omission! You wouldn't trust a developer who argues "this doesn't need to be tested" or "that function works like so," and you shouldn't trust me when I say there are 7 plagues. In the world of testing, all assumptions must be scrutinized, and it doesn't work until someone, namely a tester, verifies that it does!

Clearly this is an alert and educated readership. But why assume even this statement is true? How about another test? Anyone feel like contributing the 7th plague?

I've actually received a few via email already, and I have an idea of my own for the 7th. So email them to me at [email protected] and I'll post a few of the best, with attribution, on this blog. Maybe I can even scare up some Google SWAG or a copy of my latest book for the best one.

First come, first published.


Share on Twitter Share on Facebook
4 comments

  • Model – the data model. May be shared between the client and server side, or if appropriate you might have a different model for the client side. It has no dependency on GWT.
  • View – the display. Classes in the view wrap GWT widgets, hiding them from the rest of your code. They contain no logic, no state, and are easy to mock.
  • Presenter – all the client side logic and state; it talks to the server and tells the view what to do. It uses RPC mechanisms from GWT but no widgets.

The Presenter, which contains all the interesting client-side code, is fully testable in Java!
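
For context, here is a minimal sketch of the kinds of classes the test below exercises. Only the names that appear in the test (Server, View, Person, presenter.refreshPersonListButtonClicked()) come from the original; everything else, including the retry policy, is an assumption for illustration.

import com.google.gwt.user.client.rpc.AsyncCallback;
import java.util.List;

class Person {
  private final String firstName;
  private final String lastName;
  Person(String firstName, String lastName) {
    this.firstName = firstName;
    this.lastName = lastName;
  }
  String getFirstName() { return firstName; }
  String getLastName() { return lastName; }
}

interface Server {
  void getPersonList(AsyncCallback<List<Person>> callback);
}

interface View {
  void clearPersonList();
  void addPerson(String firstName, String lastName);
  void showErrorMessage(String message);
}

class Presenter {
  private static final int MAX_ATTEMPTS = 2;  // assumed retry policy, matching the failure test below
  private final Server server;
  private final View view;
  private int attempts;

  Presenter(Server server, View view) {
    this.server = server;
    this.view = view;
  }

  void refreshPersonListButtonClicked() {
    attempts = 0;
    fetchPersonList();
  }

  private void fetchPersonList() {
    attempts++;
    server.getPersonList(new AsyncCallback<List<Person>>() {
      public void onSuccess(List<Person> people) {
        view.clearPersonList();
        for (Person p : people) {
          view.addPerson(p.getFirstName(), p.getLastName());
        }
      }

      public void onFailure(Throwable caught) {
        if (attempts < MAX_ATTEMPTS) {
          fetchPersonList();  // try again
        } else {
          view.showErrorMessage("Sorry, please try again later");
        }
      }
    });
  }
}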

public void testRefreshPersonListButtonWasClicked() {
  IMocksControl easyMockContext = EasyMock.createControl();
  mockServer = easyMockContext.createMock(Server.class);
  mockView = easyMockContext.createMock(View.class);
  List<Person> franz = Lists.newArrayList(new Person("Franz", "Mayer"));
  mockServer.getPersonList(AsyncCallbackSuccessMatcher.<List<Person>>reportSuccess(franz));
  mockView.clearPersonList();
  mockView.addPerson("Franz", "Mayer");

  easyMockContext.replay();
  presenter.refreshPersonListButtonClicked();
  easyMockContext.verify();
}

Testing failure cases is now as easy as changing expectations. By swapping in the following expectations, the above test goes from testing success to testing that after two server failures, we show an error message.

mockServer.getPersonList(AsyncCallbackFailureMatcher.<List<Person>>reportFailure(failedExpn));
expectLastCall().times(2); // Ensure the presenter tries twice

mockView.showErrorMessage("Sorry, please try again later");

You'll still need an end-to-end test. But all your logic can be tested in small and fast tests.

The Source Code for the Matchers is open-sourced and can be downloaded here: AsyncCallbackSuccessMatcher.java - AsyncCallbackFailureMatcher.java.

Consider using Test Driven Development (TDD) to develop the presenter. It tends to result in higher test coverage, faster and more relevant tests, as well as a better code structure.


This week's episode is by David Morgan, Christopher Semturs and Nicolas Wettstein, based in Zürich, Switzerland – where they have a real Mountain View.


Share on Twitter Share on Facebook
2 comments

19 comments

by Juergen Allgayer, Conference Chair

Testing for the Web is the theme of the 4th Google Test Automation Conference (GTAC), to be held in Zurich, October 21-22.

We are happy to announce that we are now accepting applications for attendance. The success of the conference depends on the active participation of the attendees. Because space at the conference is limited, we ask each person to apply for attendance. Since we aim for a balanced audience of seasoned practitioners, students, and academics, we ask applicants to provide a brief background statement.


How to apply
Please visit http://www.gtac.biz/call-for-attendance to apply for attendance.


Deadline
Please submit your application by August 28, 2009 at the latest.


Registration Fees
There are no registration fees. We will send out detailed registration instructions to each invited applicant. We will provide breakfast and lunch. There will be a reception on the evening of October 21.


Cancellation
If you applied but can no longer attend the conference, please notify us immediately by sending an email to gtac-2009-cfa@google.com so someone from the waiting list can have the opportunity instead.


Further information
General website:
http://www.gtac.biz/
Call for proposals:
http://www.gtac.biz/call-for-proposals
Call for attendance:
http://www.gtac.biz/call-for-attendance
Accommodations:
http://www.gtac.biz/accomodations


1 comment