time to read 3 min | 464 words

Daniel has posted a reply to my post, titling it: Are you smart enough to do without TDD. I more or less expected to get responses like that, which was why I was hesitant to post it. Contrary to popular opinion, I don’t really enjoy being controversial.

There are two main points that I object to in his post:

You see, Ayende appears to say that if you're smart enough, you'll just know what code to write, just like that. Ergo, if you don't know, maybe you're not that smart and hence you would need this technique for losers called Test Driven Design/Development.

That is not what I said, so please don’t put words in my mouth. What I said was: “The idea behind TDD is to use the tests to drive the design. Well, in this case, I don’t have any design to drive.” Combine this with my concepts & features architecture, whose main tenet is: “A feature creation may not involve any design activity,” and it should be clear why TDD simply doesn’t work for my scenario.

And his attack on Rhino Mocks:

Moq vs Rhino Mocks: he [Ayende, it seems] read the (useless IMO) literature on mocks vs stubs vs fakes, had apparently a clear idea of what to do, and came up with Rhino's awkward, user unfriendly and hard to learn API with a myriad of concepts and options, and a record-replay-driven API (ok, I'm sure it was not his original idea, but certainly it's his impl.) which two years ago seemed to him to stand at the core of mocking. Nowadays not only he learned what I've been saying all along, that "dynamic, strict, partial and stub... No one cares", but also is planning to remove the record / playback API too.

This is just full of misinformation. Let me show you how:

  • Rhino Mocks is 5 years old.
  • Rhino Mocks came out for .NET 1.0.
  • Rhino Mocks actually predates most of the mocks vs. stubs debate.

I keep Rhino Mocks updated as new concepts and syntax options come along. Yes, AAA is easier, but AAA relies on having the syntax options that we have in C# 3.0. Rhino Mocks didn’t start from there, it started a lot earlier, and it is a testament to its flexibility that I was able to adapt it to every change along the way.

Oh, and Rhino Mocks was developed with TDD, fully. Still is, for that matter. So I find it annoying that someone attacks it on these grounds without really understanding how it was developed.

time to read 3 min | 479 words

Sometimes I read something and I just know that responding off the cuff would be a big mistake. Joel’s latest essay, duct tape programmers, is one such case.

In many ways, I feel that this and this say it all.


Sometimes I feel like Joel is on a quest to eradicate good design practices.

Let us start from where I do agree with him. Yes, some people have a tendency to overcomplicate things and code themselves into a corner. Yes, you should keep an eye on your deadlines and deliver.

But to go from there to disparage good practices? To actually encourage brute force hacking?

I think that Joel’s dream developer is the guy who keeps copy/pasting stuff he finds on the web until it looks like it is working. At the very least, it will make sure that his bug tracking system is used.

And the examples that he gives?

Here’s what Zawinski says about Netscape: “It was decisions like not using C++ and not using threads that made us ship the product on time.”

Oh, wait, let me see. Netscape is the company that:

  • Routinely shipped a browser that kept crashing
  • Wasn’t able to compete with IE
  • Got their source code into a bad enough shape that they had to rewrite it from scratch and lose 5 – 6 YEARS doing so
  • Collapsed

Yep, sounds like this duct tape notion really worked out for them, no?

Here is the deal, every tool and approach can be overused.

But that is part of being a professional: you have to know how to balance things. I am not sure what bee got into Joel’s bonnet, but it sure seems to cause him to have a knee-jerk reaction whenever good design principles are discussed.

Shipping software is easy; you can cobble together something that sort of works and you have a shipping product. People will even buy it from you. All you have to do is look around and see it.

The hard part is to keep releasing software, and with duct tape, your software will be taken away from you by software protective services.


Don’t, just don’t.

time to read 6 min | 1001 words

I originally titled this blog post: Separate the scenario under test from the asserts. I intentionally use the terminology scenario under test, instead of calling it class or method under test.

One of the main problems with unit testing is that we are torn between competing forces. One is the usual drive for abstraction and eradication of duplication; the second is clarity of the test itself. Karl Seguin does a good job covering that conflict.

I am dealing with the issue by the simple expedient of forbidding anything but asserts in the test method. And no, I don’t mean something like BDD, where the code under test is set up in the constructor or the context initialization method.

I tend to divide my tests code into four distinct parts:

  • Scenario under test
  • Scenario executor
  • Test model, representing the state of the application
  • Test code itself, asserting the result of a specific scenario on the test model

The problem is that a single scenario in the application may very well have multiple things that we want to test. Let us take the example of authenticating a user: there are several things that happen during the process of authentication, such as the actual authentication, updating the last login date, resetting bad login attempts, updating usage statistics, etc.

I am going to write the code to test all of those scenarios first, and then discuss the roles of each item in the list. I think it will be clearer to discuss it when you have the code in front of you.

We will start with the scenarios:

public class LoginSuccessfully : IScenario
{
    public void Execute(ScenarioContext context)
    {
        context.Login("my-user", "swordfish is a bad password");
    }
}

public class TryLoginWithBadPasswordTwice : IScenario
{
    public void Execute(ScenarioContext context)
    {
        context.Login("my-user", "bad pass");
        context.Login("my-user", "bad pass");
    }
}

public class TryLoginWithBadPasswordTwiceThenTryWithRealPassword : IScenario
{
    public void Execute(ScenarioContext context)
    {
        context.Login("my-user", "bad pass");
        context.Login("my-user", "bad pass");
        context.Login("my-user", "swordfish is a bad password");
    }
}
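The IScenario contract and the ScenarioContext are not shown here, so below is a minimal sketch of what they might look like. The IAuthenticationService is an assumed stand-in for whatever application services the scenarios actually need:

public interface IScenario
{
    void Execute(ScenarioContext context);
}

// Assumed service, for illustration only.
public interface IAuthenticationService
{
    void Authenticate(string user, string password);
}

public class ScenarioContext
{
    // The real context would expose (or resolve from a container)
    // whatever services the scenarios require.
    private readonly IAuthenticationService authenticationService;

    public ScenarioContext(IAuthenticationService authenticationService)
    {
        this.authenticationService = authenticationService;
    }

    public void Login(string user, string password)
    {
        authenticationService.Authenticate(user, password);
    }
}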

And a few tests that would show the common usage:

public class AuthenticationTests : ScenarioTests
{
    [Fact]
    public void WillUpdateLoginDateOnSuccessfulLogin()
    {
        ExecuteScenario<LoginSuccessfully>();

        Assert.Equal(CurrentTime, model.CurrentUser.LastLogin);
    }

    [Fact]
    public void WillNotUpdateLoginDateOnFailedLogin()
    {
        ExecuteScenario<TryLoginWithBadPasswordTwice>();

        Assert.NotEqual(CurrentTime, model.CurrentUser.LastLogin);
    }

    [Fact]
    public void WillUpdateBadLoginCountOnFailedLogin()
    {
        ExecuteScenario<TryLoginWithBadPasswordTwice>();

        Assert.Equal(2, model.CurrentUser.BadLoginCount);
    }

    [Fact]
    public void CanSuccessfullyLoginAfterTwoFailedAttempts()
    {
        ExecuteScenario<TryLoginWithBadPasswordTwiceThenTryWithRealPassword>();

        Assert.True(model.CurrentUser.IsAuthenticated);
    }
}

As you can see, each of the tests is pretty short and to the point, and there is a clear distinction between the scenario being executed and what we are asserting about it.

Each scenario represents some action in the system whose behavior we want to verify. Those are usually written with the help of a scenario context (or something like it), which gives the scenario access to the application services required to perform its work. An alternative to the scenario context is to use a container in the tests and supply the application service implementations from there.

The executor (the ExecuteScenario<TScenario>() method) is responsible for setting up the environment for the scenario, executing the scenario, and cleaning up afterward. It is also responsible for any updates necessary to bring the test model up to date.
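A minimal sketch of what such a base class might look like; only ExecuteScenario<TScenario>() and the model field appear above, so everything else here is an assumption:

public abstract class ScenarioTests
{
    // The state of the application after the scenario ran.
    protected TestModel model;

    protected void ExecuteScenario<TScenario>()
        where TScenario : IScenario, new()
    {
        var context = CreateContext();    // set up the environment
        new TScenario().Execute(context); // execute the scenario
        model = BuildModel(context);      // bring the test model up to date
    }

    // Hypothetical hooks: how the environment is created and how the model
    // is refreshed will vary from application to application.
    protected abstract ScenarioContext CreateContext();
    protected abstract TestModel BuildModel(ScenarioContext context);
}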

The test model represents the state of the application after the scenario was executed. It is there for the tests to assert against. In many cases, you can use the actual model from the application, but there are cases where you would want to augment that with test specific items, to allow easier testing.

And the tests, well, the tests simply execute a scenario and assert on the result.

By abstracting the execution of a scenario into the executor (which rarely changes) and providing an easy way of building scenarios, you can get very rapid feedback in the test cycle while maintaining testing at a high level.

Also, relating to my previous post, note that what we are testing here isn’t a single class. We are testing the system behavior in a given scenario. Note also that we usually want to assert on various aspects of a single scenario as well (such as in the WillNotUpdateLoginDateOnFailedLogin and WillUpdateBadLoginCountOnFailedLogin tests).

time to read 6 min | 1129 words

Let us get a few things out of the way:

  • I am not using TDD.
  • I am not using BDD.
  • I am not using Test After.
  • I am not ignoring testing.

I considered not posting this post, because of the likely response, but it is something that I think is worth at least discussing. The event that made me decide to post it is the following bug:

public bool IsValid
{
    get { return string.IsNullOrEmpty(Url); }
}

As you can probably guess, I have an inverted conditional here. The real logic is that the filter is valid if the Url is not empty, not the other way around.
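For reference, the fix is simply to negate the condition:

public bool IsValid
{
    // the filter is valid only when a Url is actually present
    get { return !string.IsNullOrEmpty(Url); }
}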

When I found the bug, I briefly considered writing a test for it, but it struck me as a bad decision. This is code that I don’t see any value in testing. It is too stupid to test, because I won’t get any ROI from the tests. And yes, I am saying that after seeing that the first time I wrote the code, it had a bug.

The idea behind TDD is to use the tests to drive the design. Well, in this case, I don’t have any design to drive. In recent years, I have moved away from the tenets of TDD toward a more system-oriented approach to testing.

I don’t care about testing a specific class; I want to test the entire system as a whole. I may switch out some parts of the infrastructure (for example, swapping the DB for an in-memory one) for perf’s sake, but I usually try to test an entire component at a time.

My components may be as small as a single class or as big as the entire NH Prof, sans the actual UI pieces. I have posted in the past showing how I implement features for NH Prof, including the full source code for the relevant sections. Please visit the link; it will probably make more sense to you afterward. It is usually faster, easier and more convenient to write a system test than to try to figure out how to write a unit test for the code.

Now, let us look at why people are writing tests:

  • Higher quality code
  • Safe from regressions
  • Drive design

Well, as I said, I really like tests, but my method of designing software is no longer tied to a particular class. I have the design of the class handed to me by a higher authority (the concept), so that is out. Regressions are handled quite nicely using the tests that I do write.

What about the parts when I am doing design, when I am working on a new concept?

Well, there are two problems here:

  • I usually try several things before I settle on a final design. During this bit of churn, it is going to take longer to do things with tests.
  • After I have finalized a design, it is still easier to write a system level test than to write unit tests for the particular implementation.

As a matter of fact, in many cases I don’t really care about the implementation details of a feature; I just want to know that the feature works. As a good example, let us take a look at this test:

public class CanGetDurationOfQueries : IntegrationTestBase
{
    [Fact]
    public void QueriesSpecifyTheirDuration()
    {
        ExecuteScenarioInDifferentAppDomain<SelectBlogByIdUsingCriteria>();

        var first = model.RecentStatements
            .ExcludeTransactions()
            .First();

        Assert.NotNull(first.DurationViewModel.Inner.Value);
    }
}

NH Prof went through three different ways of measuring the duration of a query; the test didn’t need to change. I have a lot of tests that work in the same manner, specifying the final intent rather than each individual step.

There are some parts for which I would use Test First, usually parts that I have a high degree of uncertainty about. The "show rows from query" feature in NH Prof was developed using Test First, because I had absolutely no idea how to approach it.

But most of the time, I have a pretty good idea of where I am and where I am going, and writing unit tests for every minuscule change is (for lack of a better phrase) hurting my style.

Just about any feature in NH Prof is covered in tests, and we are confident enough in our test coverage to release on every single commit.

But I think that even a test has got to justify its existence, and in many cases, I see people writing tests that have no real meaning. They duplicate the logic in a single class or method. But that isn’t what I usually care about. I don’t care about what a method or a class does.

I care about what the overall behavior is, and I shaped my tests to allow me to assert just that. I’ll admit that NH Prof is somewhat of a special case, since you have a more or less central location from which you can navigate to everything else. In most systems, you don’t have something like that.

But the same principle remains: if you set up your test environment so you are testing the system, it is going to be much easier to test the system. It isn’t a circular argument. Let us take a simple example of an online shop and wanting to test the "email on order confirmed" feature.

One way of doing this would be to write a test saying that when the OrderConfirmed message arrives, a SendEmail message is sent, and another to verify that the SendEmail message actually sends an email.

I would rather write something like this, however:

[Fact]
public void WillSendEmailOnOrderConfirmation()
{
    // setup the system using an in memory bus
    // load all endpoints and activate them
    // execute the given scenario
    ExecuteScenario<BuyProductUsingValidCreditCard>();

    var confirmation = model.EmailSender.EmailsToSend
        .FirstOrDefault(x => x.Subject.Contains("Order Confirmation"));
    Assert.NotNull(confirmation);
}

I don’t care about implementation, I just care about what I want to assert.

But I think that I am getting sidetracked onto another subject, so I’ll stop here and post about separating asserts from their scenarios at another time.

time to read 2 min | 311 words

I had a short discussion with Steve Bohlen about distributed source control, and how it differs from centralized source control. After using Git for a while, I can tell you that there are several things that I am not really willing to give up.

  • Fast commits
  • Local history
  • Easy merging

To be sure, a centralized SCM will have commits, history and merging. But something like Git takes it to a whole new level. Looking at how it changed my workflow is startling. There is no delay in committing, so I can commit every minute or so. I could do it with SVN, but it would take 30 seconds to a minute to execute, blocking my work, so I used bigger commits with SVN.

Having local history means that I can deal with a lot of small commits, because diffing a file from two commits ago is as fast as diffing the local copy. I tend to browse around in the history quite a lot, especially when I am doing stuff like code reviews, or trying to look at how I did something three weeks ago.

Merging is another thing that a DVCS excels at. Not so much because of better merge algorithms (although that is part of it), but simply because having all the information locally makes the merge process so much faster.

All in all, it ends up being a much easier process to work with. It takes time to get used to it, though.

And given those requirements (fast commits, local history, easy merging), you pretty much end up with a distributed solution. Even with a DVCS, you still have the master repository, but just the fact that you have the full local history frees you from having to manage all of that.

time to read 1 min | 113 words

It isn’t just pair programming that is really useful. I had a problem that I found horrendously complicated to resolve, so I got on the phone with the rest of the team, trying to explain to them what I wanted and how I wanted to achieve it.

Luckily for me, they were too polite to tell me that I was being stupid and that I should stop whining. Instead, they were able to guide me toward an elegant solution in about fifteen minutes, until at some point I had to say: “I really don’t understand why I thought this was hard.”

Getting feedback is important, be it on code or design.

time to read 2 min | 276 words

I needed to handle some task recently, so I sat down and did just that. It took me most of a day to get things working.

The problem was that it was a horrible implementation, and it was what you might call fragile.

I don’t usually have the patience to tolerate horrible code, so after I was done, I just reverted everything, without bothering to stash my work somewhere I could retrieve it later. That sort of code is best kept lost.

Time lost: ~12 hours.

I talked with other team members about how to resolve the problem, and they made me realize that there isn’t a great deal of difficulty in implementing it and that I was just being an idiot, as usual. With that insight, I spent maybe two hours rebuilding the same functionality in a much more robust manner.

I could also reuse all my understanding of how things should behave, now that I knew all the places that needed to be touched.

Overall, it took me about 14 hours (spread over three days) to implement the feature. Scrapping everything and starting from scratch really paid off: I invested about 15% of the original development time, and I got a robust, working solution.

Trying to fix the previous implementation would have taken me significantly longer, and would have resulted in a fragile bit of code that would likely need to be touched often.

time to read 1 min | 144 words

A few weeks ago I had to touch a part of NH Prof that is awkward to handle. It isn’t bad, it just isn’t as smooth to work with as the rest of NH Prof.

I had the choice of spending more time there, making this easier to work with, or just dealing with the pain and making my change. Before touching anything, I looked at the commit log. The last commit that touched this piece of NH Prof was a long time ago.

That validated my decision to just deal with the pain of changing it, because it wouldn’t be worthwhile to spend more time on this part of the application. Noticing areas of pain and fixing them is important, but I am willing to accept areas of pain in places that I only need to touch twice a year.
