Oren Eini

CEO of RavenDB

a NoSQL Open Source Document Database

time to read 2 min | 306 words

Continuing my review of http://microsoftnlayerapp.codeplex.com/, official guidance (it shows up at http://msdn.microsoft.com/es-es/architecture/en) that people are expected to read and follow.

So the first thing that I saw when I opened the solution was this:

[image]

Yes, I know that this is a sample application, and still, this bit scares me. I have never really used the modeling support inside VS, so I took a look there to see what was going on.

Take a look at one of those:

[image]

Do you see the problem here?

As usual with this sort of question, it is an issue of what is not here. There is not a single mention of the actual application. To be frank, at this point, all I know is that it is a sample application, but I don’t know what the sample is about.

This is pretty scary, because I am fairly sure that if this is the case at this level, we are going to see much the same as we go deeper. I decided that it might behoove me to figure out what this application is about, so I checked the database project, and found:

[image]

Um… so it looks like it is about a bank, but also about books, orders, and software. I don’t really see how they connect right now, but we will see, I hope.

time to read 1 min | 123 words

I was forwarded this link http://microsoftnlayerapp.codeplex.com/ with a request to review it. I thought about skipping it until I saw:

[image]

The main problem here?

If it is a simple business scenario, it doesn’t need DDD.

Now, just to be clear about this sample app: it is not something someone built in their free time. This is official guidance (it shows up at http://msdn.microsoft.com/es-es/architecture/en), and people are expected to read and follow it.

I haven’t reviewed the code yet, but I am scared just by looking at the architecture diagram.

time to read 2 min | 283 words

One of the things that was bugging me was that it seemed that not all of the queries in RaccoonBlog were hitting the cache. Oh, it was more than fast enough, but I couldn’t really figure out what was going on there. Then again, it was never important enough for me to dig in.

I was busy doing the profiling stuff for RavenDB, using RaccoonBlog as my testing ground, when I realized what the problem was:

[image]

Everything was working just fine; the problem was here:

[image]

Do you get the light bulb moment that I had? I was using Now in a query, and since Now changes by definition, we kept generating new queries, which couldn’t be cached, etc.

I changed all of the queries that contained Now to:

[image]
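The screenshot is gone, but the gist of the change was to stop passing the raw Now value and round it down to the minute instead. A minimal sketch of the idea (assuming a RaccoonBlog-style query over posts; the actual names differ):

// Before: DateTimeOffset.Now differs on every request, so every request
// produces a query text the HTTP cache has never seen before.
var posts = session.Query<Post>()
    .Where(post => post.PublishAt <= DateTimeOffset.Now)
    .ToList();

// After: truncate Now to the minute before using it in the query.
var now = DateTimeOffset.Now;
var roundedNow = new DateTimeOffset(now.Year, now.Month, now.Day,
                                    now.Hour, now.Minute, 0, now.Offset);
var cachablePosts = session.Query<Post>()
    .Where(post => post.PublishAt <= roundedNow)
    .ToList();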

Which means that it would only use a different value once a minute. Once I fixed that, I still saw that there was a caching problem, which led me to discover that there was an error in how we calculated etags for dynamic indexes after they had been promoted. Even a very basic profiling tool helped us fix two separate bugs (in RaccoonBlog and in RavenDB).

time to read 3 min | 439 words

RavenDB supports HTTP caching out of the box. What this means is that by default, without you having to take any action, RavenDB will cache as much as possible for you. It can get away with doing this because it utilizes the 304 Not Modified response. The second time that we load an entity or execute a query, we can simply ask the server whether the data has been modified since the last time we saw it, and if it wasn’t, we can skip executing the query and get the data directly from the cache.
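To make the mechanism concrete, here is roughly what such a conditional request looks like at the HTTP level. This is a hand-rolled sketch using plain HttpWebRequest against a hypothetical document endpoint; it is not RavenDB’s actual client code, and cachedEtag / cachedBody stand in for whatever local cache storage you have:

string cachedEtag = "...";  // the ETag header from the previous response
string cachedBody = "...";  // the response body we cached alongside it

var request = (HttpWebRequest)WebRequest.Create("http://localhost:8080/docs/users/1");
request.Headers["If-None-Match"] = cachedEtag;
try
{
    using (var response = (HttpWebResponse)request.GetResponse())
    using (var reader = new StreamReader(response.GetResponseStream()))
    {
        // 200 OK: the document changed, so remember the new body and etag.
        cachedEtag = response.Headers["ETag"];
        cachedBody = reader.ReadToEnd();
    }
}
catch (WebException e)
{
    var response = e.Response as HttpWebResponse;
    if (response == null || response.StatusCode != HttpStatusCode.NotModified)
        throw;
    // 304 Not Modified: our cached copy is still valid, keep using cachedBody.
}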

This saves a lot in terms of bandwidth and processing power. It also means that we don’t have to worry about RavenDB’s caching returning any stale results, because we checked for that. It does mean, however, that we have to send a request to the server. There are situations where we want to squeeze even better performance from the system, and we can move to RavenDB’s aggressive caching mode.

Aggressive caching means that RavenDB won’t even ask the server whether anything has changed; it will simply return the reply directly from the local cache if it is there. This means that you might get stale data, but it also means that you’ll get it fast.

You can activate this mode using:

using (session.Advanced.DocumentStore.AggressivelyCacheFor(TimeSpan.FromMinutes(5)))
{
    session.Load<User>("users/1");
}

Now, if there is a value in the cache for users/1 that is at most 5 minutes old, we can use it directly.

It works on queries too:

using (session.Advanced.DocumentStore.AggressivelyCacheFor(TimeSpan.FromMinutes(5)))
{
    session.Query<User>().ToList();
}

This is an explicit step beyond the normal caching, and it does mean that you might get out of date information, but if you really want to reduce the number of remote calls, it is a really nice feature.

time to read 1 min | 156 words

One of the fun parts about RavenDB is that it will optimize itself for you depending on how you are using your data.

With this blog, I decided when going live with RavenDB that I would not follow the best practice of creating static indexes for everything, but would let RavenDB figure things out on its own.

Today, I got curious and decided to check up on that:

[image]

What you see is pretty interesting.

  • The first three indexes were automatically created by RavenDB in response to queries made on the database.
  • The Raven/* indexes are created by RavenDB itself, for the Raven Studio.
  • The MapReduce indexes are for statistics on the blog, and are the only two that were actually created by the application explicitly.
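For illustration, the kind of query that triggers one of those automatically created indexes looks something like this (a sketch; the actual queries this blog issues differ):

// RavenDB sees a query with no static index that can answer it, creates a
// temporary dynamic index on the fly, and promotes it to a permanent
// automatic index if the same query pattern keeps showing up.
var posts = session.Query<Post>()
    .Where(post => post.AuthorName == "Oren Eini")
    .ToList();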
time to read 3 min | 509 words

One of the most important things that I learned about working with software is that you have to design, from the get go, for the debuggability of the system. This is important because, by default, your system reports nothing about its state.

To make things worse, in many cases we are talking about things that are really hard to figure out. With Rhino Service Bus, I started, from day one, to add a lot of debug logs: we had the idea of the logging endpoint, error messages went to the error queue along with the error that generated them, etc.

With NHibernate, it took me three days after starting to build NH Prof to figure out that NH Prof was a great tool to use when working on NHibernate itself. With NH Prof, the actual file format that we used is an event stream, which is logged locally, which means that the process of opening a saved file and listening to a running application is the same. That means that you can send me your File > Save output, and I can simulate locally the exact conditions that led to whatever problems you had.

As software becomes more and more complex, threading is involved, complex scenarios are introduced, multiple actors are involved, and more and more data is streaming in. The old patterns of debugging through a problem become less and less relevant. To put it simply, it is less and less possible to easily reconstruct a problem in most scenarios; at the least, it is very hard to do from the information given.

Last week I started to add support for profiling RavenDB. The intent was mostly to allow us to build better integration with web applications, but during the process of investigating a very nasty bug, I realized that I could use this profiling information to figure out what was going on. I did, and seeing the information, it was immediately obvious what was wrong.

I decided that I might as well go the whole hog and introduce this:

[image]

No, it isn’t RavenDB Profiler (not yet), but it is a debug visualizer that you can use to look at the raw requests. It is a bit more than just a Fiddler clone, too, since it has awareness of things like RavenDB sessions, caching, aggressive caching, etc.

In the short time that this API has been available, it has already helped resolve at least two separate (and very hard to figure out) bugs.

I learned two things from the experience:

  • Having the debug hooks is critically important, because it means that you can access that information when you need it.
  • Just having the debug hooks is not enough, you have to provide a good way for them to be used.

In this case, it is a debugger visualizer, but we will have additional ways to expose this information to the users.

time to read 4 min | 737 words

From the get go, we have built RavenDB to be as easy to use as possible. Quite explicitly, wherever possible, we aimed to make it as simple as possible for someone familiar with .NET. The fun part is looking at people’s expectations; sometimes answering them is a pleasure, such as this one:

[image]

The reason this exchange is amusing is that RavenDB doesn’t have the concept of expensive queries. For the most part, queries are merely scanning through an already computed index. RavenDB does no computation and very little work to answer your queries, so the notion of an expensive query is meaningless; there aren’t any.

That said, this behavior does come at a cost: we have to do the work of computing the indexes at some stage, and in our case, we have chosen to do so in the background. What that means, in turn, is that queries can potentially return stale results. As it turns out, there are actually surprisingly few cases where you can’t tolerate some staleness in your application, and that makes RavenDB highly suited for a large number of scenarios.

There is, however, one common scenario that is annoying: the Create → List flow. As a user, I want to complete the Create screen:

[image]

And then I want to see the List screen:

[image]

And naturally I expect to see the just created item in the list. That is the one scenario where people usually run into RavenDB’s consistency model, and it has brought a few complaints every now and then. If the timing is just wrong, you may be able to issue the next request before the new document has had the chance to be indexed, resulting in a missing item in the list.

Usually, the advice was to add a WaitForNonStaleResultsAsOfNow(), which resolved this issue, so we didn’t really consider this any further. It was just one of those things that you had to understand about the way RavenDB worked.
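For reference, that advice amounts to a query customization along these lines (Item is a stand-in for whatever entity is being listed):

// Block the query until the index has caught up with the documents that
// existed as of "now", which includes the item we just created.
var items = session.Query<Item>()
    .Customize(x => x.WaitForNonStaleResultsAsOfNow())
    .ToList();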

Recently, however, we got a bug report for this exact scenario, and the user couldn’t just use WaitForNonStaleResultsAsOfNow(); or rather, to be more accurate, he was already using it, but it wasn’t working. We eventually figured out that the problem was clock synchronization between the server and client machines. That forced us to reconsider how to approach this. After looking at several alternatives, we ended up creating a new consistency model for RavenDB.

In this consistency model, we are basically ensuring one very simple guarantee: “you can query anything that you have written”. Instead of relying on the time being the same everywhere, we have decided to use the etags that are reported back to the server. Using this approach, you can basically ignore the difference between RavenDB and a traditional relational database, since it will behave (externally) in much the same fashion.

You can enable this mode on a per query basis:

[image]

Or you can enable this for all queries:

[image]
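The screenshots are missing, but in the client API of that era the two forms looked roughly like this (treat the exact member names as my reconstruction rather than gospel):

// Per query: wait only until the indexes have caught up with our own last write.
var users = session.Query<User>()
    .Customize(x => x.WaitForNonStaleResultsAsOfLastWrite())
    .ToList();

// For all queries, via the document store conventions:
documentStore.Conventions.DefaultQueryingConsistency =
    ConsistencyOptions.QueryYourWrites;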

I am pretty proud of this feature, since it simplifies things a lot for most users, and it provides a very effective (and simple) way to approach consistency. This is especially true if we are talking about multiple clients working on the same database (which is the case in the issue that was raised).

Whenever a client writes, it will have to wait for its own changes to be indexed, but it won’t have to wait for changes from any other client. That matches the user’s expectations, and also allows RavenDB to answer most queries without doing any waiting whatsoever.

time to read 6 min | 1153 words

“We tried using NoSQL, but we are moving to Relational Databases because they are easier…”

That was the gist of a conversation that I had with a client. I wasn’t quite sure what was going on there, so I invited myself to their offices and took a peek at the code. Their actual scenario is classified, so we will use the standard blog model to show a similar example. In this case, we have three entities: the BlogPost, the User, and the Comment. What they wanted to ensure is that when a user comments on a blog post, we update the comments count on the blog post, update the posted comments count on the user, and insert the new comment.

The catch was that they wanted the entire thing to be atomic, to either happen completely or not at all. The other catch was that they were using MongoDB. The code looked something like this:

public ActionResult AddComment(string postId, string userId, Comment comment)
{
    int state = 0;
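    // state records how far the writes got, so the catch block below can try
    // to compensate by hand for whatever has already reached the server.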
    var blogPost = database.GetCollection<BlogPost>("BlogPosts").FindOneById(postId);
    var user = database.GetCollection<User>("Users").FindOneById(userId);
    try
    {
        database.GetCollection<Comment>("Comments").Save(comment);
        state = 1;

        blogPost.CommentsCount++;
        database.GetCollection<BlogPost>("BlogPosts").Save(blogPost);
        state = 2;

        user.PostedCommentsCount++;
        database.GetCollection<User>("Users").Save(user);
        state = 3;


        return Json(new {CommentAdded = true});
    }
    catch (Exception)
    {
         // if (state == 0) //nothing happened yet, don't need to do anything

        if (state >= 1)
        {
            database.GetCollection<Comment>("Comments").Remove(Query.EQ("_id", comment.Id), RemoveFlags.Single);
        }
        if (state >= 2)
        {
            blogPost.CommentsCount--;
            database.GetCollection<BlogPost>("BlogPosts").Save(blogPost);
        }
        if (state >= 3)
        {
            user.PostedCommentsCount--;
            database.GetCollection<User>("Users").Save(user);
        }

        throw;
    }
}

Take a moment or two to go over the code and figure out what was going on in there. It took me a while to really figure that one out.

Important: before I continue with this post, I feel that I need to explain what the problem is and why it exists. Put simply, MongoDB doesn’t support multi document transactions. The reason it doesn’t support them is that with the way MongoDB auto sharding works, different documents may live on different shards, therefore requiring synchronization between different machines, which no one has managed to make scalable and efficient. MongoDB chose, for reasons of scalability and performance, not to implement this feature. This is a documented and well known part of the product.

It makes absolute sense, except that it leads to code like the one above when the user really does want atomic multi document writes. Just to be certain that the point has been hammered home: the code above still does not ensure atomic multi document writes. For example, if the server shuts down immediately after the code sets state to 2, there is nothing that the code can do to revert the previous writes (after all, it can’t contact the server to tell it to revert them).

And there are other problems with this approach: the code is ugly, and it is extremely brittle. It is very easy to update one part and not the other… but at this point I think that I am busy explaining why horse excrement isn’t suitable for gourmet food.

The major problem with this code is that it is trying to do something that the underlying database doesn’t support. I sat down with the customer and talked about the advantages and disadvantages of staying with a document database vs. moving to a relational database. A relational database would handle atomic multi row writes easily, but would require many reads and many joins to show a single page.

That was the point where I put the disclaimer “I am speaking about my own product, and I am likely biased, be aware of that”.

The same code in RavenDB would be:

public ActionResult AddComment(string postId, string userId, Comment comment)
{
    using(var session = documentStore.OpenSession())
    {
        session.Store(comment);
        session.Load<BlogPost>(postId).CommentsCount++;
        session.Load<User>(userId).PostedCommentsCount++;

        session.SaveChanges(); // Atomic, either all are saved or none are
    }
    
    return Json(new { CommentAdded = true });

}

There are a couple of things to note here:

  • RavenDB supports atomic multi document writes without requiring anything special from you.
  • This isn’t the best RavenDB code; ideally I would create the session in the infrastructure, not here, but you get the point.

We also support change tracking for loaded entities, so we didn’t even need to tell it to save the loaded instances. All in all, I also think that the code is prettier, easier to follow, and would produce correct results in the case of an error.

time to read 2 min | 273 words

One of the most frustrating things in working with RavenDB is that the client API is compatible with the 3.5 framework. That means that for a lot of things we either have to use conditional compilation or we have to forgo using the new stuff in 4.0.

Case in point, we have the following issue:

[image]

The code in question currently looks like this:

[image]

This is the sort of code that simply begs for ConcurrentDictionary. Unfortunately, we can’t use that here, because of the 3.5 limitation. Instead, I went with the usual, non-thread-safe dictionary approach. I wanted to avoid locking, so I ended up with:

[image]

Pretty neat, even if I say so myself. The fun part is that without any locking, this is completely thread safe. The field itself is initialized to an empty dictionary in the constructor, of course, but that is the only thing that happens outside this method. For that matter, I didn’t even bother to make the field volatile. The only thing that this relies on is that pointer writes are atomic.
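Since the screenshots are gone, here is a sketch of the pattern I am describing; the member names are illustrative, not the actual RavenDB code:

private Dictionary<string, Regex> cache = new Dictionary<string, Regex>();

public Regex GetRegex(string pattern)
{
    Regex value;
    // Readers never lock; they always see a fully constructed dictionary.
    if (cache.TryGetValue(pattern, out value))
        return value;

    value = new Regex(pattern, RegexOptions.Compiled);

    // Copy-on-write: mutate a private copy, then publish it with a single
    // reference assignment. Reference writes are atomic in .NET, so no reader
    // can observe the dictionary mid-mutation. Two concurrent writers may both
    // compute the value and one addition may be lost, which is harmless here
    // because it will simply be recomputed on the next miss.
    var copy = new Dictionary<string, Regex>(cache);
    copy[pattern] = value;
    cache = copy;
    return value;
}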

How come this works, and what assumptions am I making that make this possible?

time to read 4 min | 645 words

We ran a contest to figure out what the next logo would be, and people came up with some really nice stuff! So nice that we are having trouble figuring out which to choose.

So we thought about asking you :)

Here are the final options; you can click on each image to see it in full:

[logo candidate thumbnails: options 34, 37, 36, 40, 19, 49, 52, 43, 29, 42, 41, 28, 45, and 46]

We intend to use the new logo as the basis for a new website and probably some changes to the RavenDB Management Studio, so it is pretty important to us.

And now I can drop the “we” stuff and tell you that I am very excited by all the choices that we have. I am partial to the orange ones, I’ll admit, but that is a long standing weakness :)
