Oren Eini

CEO of RavenDB

a NoSQL Open Source Document Database

time to read 2 min | 231 words

I have taught NHibernate both before and after NH Prof was available. I have to say, there is absolutely no way that I can compare the two.

NH Prof, I know that I am not supposed to say that, but I love you. I really love you!

I am saying this after spending four days doing intensive NHibernate work: a full three-day course and a day of NHibernate consulting. And NH Prof made it easier than I can really describe.

Participants in my course can testify that at several points I just stared at the profiler in shock, not believing the breadth of information it gave me. Error detection, in particular, can be a true godsend in many cases. Being able to flip between the code and what is actually happening in the database is invaluable for explaining what is going on. And today I had the chance to use it as a detective tool, trying to figure out exactly what was causing a page to issue hundreds of requests. The Stack Trace feature was invaluable in tracking that down.

Just for that alone, it was worth all the time, effort and money that I put into it.

time to read 1 min | 182 words

One of the challenges that DDD advocates face when using an OR/M is the use of Coarse Grained Locks on Aggregate Roots. I’ll leave the discussion of why you would want to do that to Fowler and Evans, but it seems that a lot of people run into problems with this.

It is actually not complicated at all; all you have to do is call:

session.Lock(person, LockMode.Force);

This will make NHibernate issue the following statement:

[screenshot of the generated SQL]
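
In sketch form, the statement is a bare version-bump update, something like this (the table and column names are my assumptions from a typical mapping, not the captured SQL):

UPDATE Person
SET    Version = @newVersion          -- incremented version
WHERE  Id = @id
  AND  Version = @currentVersion      -- optimistic concurrency check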

This allows us to update the version of the aggregate even if it wasn’t actually changed.

Of course, if you want a physical lock, you can do that as well:

session.Lock(person, LockMode.Upgrade);

Which would result in:

[screenshot of the generated SQL]
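
Again as a sketch, and with the exact SQL depending on your dialect: on SQL Server it looks roughly like this (names assumed, as before):

SELECT Id
FROM   Person WITH (updlock, rowlock)
WHERE  Id = @id

On most other databases it would be a select ... for update.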

Pretty easy, even if I say so myself.

time to read 2 min | 321 words

One of the more common problems that I see over and over again in many applications is that test databases are too fast. As I have shown, this can easily lead to really severe cases of chattiness with the database, all while the developer is totally oblivious.

This is especially true when you develop against a local database with almost no data and deploy to a network database with lots of data. SELECT N+1 is bad enough, but when N is in the hundreds or more, it gets really bad. The main problem is that developers aren’t really aware of it: they don’t see the problem or feel it. And while they could fix it if they caught it in time, coming back to an existing application and fixing all the many places where they assumed database access was free is a daunting task.

Therefore, I have set out to solve the problem. Obviously it is a problem with the developers not paying attention, but how can we deal with that?

Well, you could buy the NHibernate Profiler, which is my official recommendation. Or, if you don’t feel like spending money on this, you can use the following interceptor, which will make developers sit up and take notice whenever they talk to the database.

public class SlowDownDudeInterceptor : EmptyInterceptor
{
    public override SqlString OnPrepareStatement(SqlString sql)
    {
        // Prepend a SQL Server WAITFOR DELAY to every statement, so each
        // round trip to the database takes at least half a second.
        return sql.Insert(0, "waitfor delay '0:0:0.5'" + Environment.NewLine);
    }
}
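
To put it to work, register the interceptor at configuration time; a minimal sketch:

var cfg = new Configuration();
cfg.SetInterceptor(new SlowDownDudeInterceptor());
var sessionFactory = cfg.BuildSessionFactory();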

Don’t go to production with this!

time to read 5 min | 958 words

When I finished reading this post I let out a heavy sigh. It is not going to work. Basically, the EF is going down the same path that NHibernate took in NHibernate 1.0 (circa 2005!).

Let me show you how. In the post, the example given is:

public class Category
{
    public int CategoryID { get; set; }
    public string CategoryName { get; set; }
    public virtual List<Product> Products { get; set; }
    ...
}

This looks like it would work, right? The way the EF is doing this is by creating a proxy of your class (similar to the way that NHibernate does it) and intercepting the call to get_Products.

It has a few problems, however. Let us start with the most obvious one: program to an interface, not to an implementation. Exposing List<T> means that you have no control whatsoever over what is going on with the collection. It also means that they have to intercept the access to the property, not the usage of the collection. That is bad; it means that they are going to have to do eager loading in all too many cases where NHibernate can just ignore it.
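
For contrast, here is a minimal sketch of the interface-based shape (the usual NHibernate-friendly form, not code from the EF post):

public class Category
{
    public virtual int CategoryID { get; set; }
    public virtual string CategoryName { get; set; }

    // An interface-typed collection lets the persistence layer swap in
    // its own lazy-loading implementation behind the scenes.
    public virtual IList<Product> Products { get; set; }
}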

Let us move on a bit and talk about the inability to support any interesting scenario. If we look at collections with NHibernate, we can take this:

Console.WriteLine(category.Products.Count);

To this SQL:

select count(*) from Products
where CategoryId = 1

With the approach that the EF uses, they will have to load the entire collection.

But all of those are just optimizations, not the make-or-break for the feature. Here is a common scenario that is going to break it:

public class Category
{
    private List<Product> _products;
    public int CategoryID { get; set; }
    public string CategoryName { get; set; }
    public virtual List<Product> Products { get { return _products; } set { _products = value; } }

    public void AddProduct(Product p)
    {
        // do something interesting here
        _products.Add(p);
    }
    ...
}

There is a reason why the default proxies for NHibernate force all members of the entities to be virtual. It is not just because we think everything should be virtual (it should, but that is not a discussion for now). It is all about being able to allow the user to use a real POCO.

In the scenario outlined above, what do you think is going to happen?

AddProduct is a non-virtual method call, so it cannot be intercepted. Accessing the _products field cannot be intercepted either.

The end result is a NullReferenceException that will seem to appear randomly, based on whether something else touched the property or not. And however nice auto properties are, there are plenty of reasons to use fields. Using auto properties just masks the problem, but it is still there.

Oh, and if we want to use our POCO class with EF, forget about the new operator; you have to use:

Category category = context.CreateObject<Category>();

Hm, this breaks just about anything that relies on creating new instances. Want to use your entity as a parameter for an ASP.NET MVC action, to be automatically bound? Forget about it. You have to create your instances where you have access to the context, and that is a database-specific thing, not something that you want to have all over the place.

And I really liked this as well:

The standard POCO entities we have talked about until now rely on snapshot based change tracking – i.e. the Entity Framework will maintain snapshots of before values and relationships of the entities so that they can be compared with current values later during Save. However, this comparison is relatively expensive when compared to the way change tracking works with EntityObject based entities.

Hm, this is how NHibernate and Hibernate have always worked. Somehow, I don’t see this showing up as a problem very often.

Final thought: why is the EF calling it Defer Loading? Everyone else calls it lazy loading.

time to read 4 min | 712 words

This is a new feature of NHibernate that Fabio has recently ported. Using the same model that I have talked about before:

[class model diagram]

With the following schema:

[database schema diagram]

The feature is basically this: NHibernate can now execute set-based operations on your model. This includes all Data Modification Language operations, so we are talking about Update, Insert and Delete. Let us make things a bit interesting and start with the following statement, which hopefully will make things clearer:

s.CreateQuery("update Owner o set o.Name = 'a' where o.Name = 'b'")
    .ExecuteUpdate();

Executing this code will make NHibernate execute the following SQL statements:

[screenshots of the six generated SQL statements: creating a temp table of matching ids, populating it, updating each table in the hierarchy, and dropping the temp table]
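
To give a feel for the pattern, the statements look roughly like this (a sketch only; the actual table names, and whether Name lives on each of the three tables, depend on the mapping shown in the original screenshots):

create table #owners_to_update (Id int)

insert into #owners_to_update (Id)
    select Id from Owners where Name = 'b'

update Owners      set Name = 'a' where Id in (select Id from #owners_to_update)
update Individuals set Name = 'a' where Id in (select Id from #owners_to_update)
update Companies   set Name = 'a' where Id in (select Id from #owners_to_update)

drop table #owners_to_update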

As you can see, we have executed a very simple query against the model, which translates into fairly complex work against the data model (needing to update three separate tables). We can also see that NHibernate goes to significant effort to maintain the illusion of a single query (that is why we need the temp table here).

But we are not limited to just updates; we can also do deletes:

s.CreateQuery("delete Owner o where o.Name = 'b'")
    .ExecuteUpdate();

Which results in:

[screenshots of the six generated SQL statements, following the same temp-table pattern]

I think that by now you are already familiar with the pattern :-)

As for insert statements, they are supported as well, but there are some limitations. In particular, you have to use an identifier generation strategy that can generate identifiers in the database (sequence or identity), and there are some limitations on how you can make this work with several complex hierarchies. Using a simple Table Per Class, the following HQL works:

s.CreateQuery("insert into Individual (Name, Email) select i.Name, i.Email from Individual i")
    .ExecuteUpdate();

And generates:

[screenshot of the generated SQL insert statement]
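
Roughly, and again assuming the table name from the mapping, something like:

insert into Individuals (Name, Email)
    select i.Name, i.Email from Individuals i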

All in all, this is a really awesome feature.

Thanks, Fabio.

time to read 1 min | 171 words

Why do we have to do something like this?
var blogs = s.CreateCriteria<Blog>()
    .Add(Restrictions.Eq("Title", "Ayende @ Rahien"))
    .List<Blog>();

We have to specify Blog twice; isn’t that redundant?

Yes and no. The problem is that we can’t assume that we are going to return instances of Blog. For example:

var blogs = s.CreateCriteria<Blog>()
    .Add(Restrictions.Eq("Title", "Ayende @ Rahien"))
    .SetProjection(Projections.Id())
    .List<int>();

time to read 5 min | 805 words

This is a story about a bug that frustrated me, annoyed me, and nearly drove me mad. It also cost me a ridiculous amount of time. The setup? The communication from the profiled application to the NH Prof profiler is done over a Protocol Buffers network stream. The problem? It kept failing. Now, networks are unreliable, but they are not that unreliable, especially since all my tests are focused on local machine scenarios.

I tried quite hard to create a very reliable system, but I was growing extremely frustrated: it failed, sometimes, in a very unpredictable manner, and in ways that made it look like the entire idea was broken. To add insult to injury, any isolated test that I ran worked perfectly. Out of ideas and nearly out of my mind, I turned to the old adage: “If it doesn’t work, kick it; if it still doesn’t work, kick it harder” and created a stress test that basically ran several of the problematic tests in a loop until I could finally reproduce the failure in a more or less consistent fashion.

It was fairly clear that I was getting a lot of errors when reading the data, and for a while I focused on that, thinking that I wasn’t handling the connection properly. But even after I did a lot of work on hardening the connection code, it still failed quite often. Now it didn’t fail on reads; it failed on missing data that the tests expected to be there.

After a while I started concentrating on the writing side: I changed the code to write the data to a file rather than a network stream. I was pretty sure that my file system was working properly, and I intended to analyze the file as I loaded it. Imagine my surprise when I found out that the file, too, was corrupt. That made me abandon the reading-causes-the-error theory and focus entirely on the writing part. After a while, I managed to get a clue, in the form of the following error:

System.IndexOutOfRangeException: Index was outside the bounds of the array.
   at Google.ProtocolBuffers.CodedOutputStream.WriteRawByte(Byte value) in C:\OSS\Buffy\ProtocolBuffers\CodedOutputStream.cs:line 530
   at Google.ProtocolBuffers.CodedOutputStream.WriteRawByte(UInt32 value) in C:\OSS\Buffy\ProtocolBuffers\CodedOutputStream.cs:line 534
   at Google.ProtocolBuffers.CodedOutputStream.SlowWriteRawVarint32(UInt32 value) in C:\OSS\Buffy\ProtocolBuffers\CodedOutputStream.cs:line 464
   at Google.ProtocolBuffers.CodedOutputStream.WriteRawVarint32(UInt32 value) in C:\OSS\Buffy\ProtocolBuffers\CodedOutputStream.cs:line 480
   at Google.ProtocolBuffers.CodedOutputStream.WriteTag(Int32 fieldNumber, WireType type) in C:\OSS\Buffy\ProtocolBuffers\CodedOutputStream.cs:line 458
   at Google.ProtocolBuffers.CodedOutputStream.WriteMessage(Int32 fieldNumber, IMessage value) in C:\OSS\Buffy\ProtocolBuffers\CodedOutputStream.cs:line 220
   at HibernatingRhinos.NHibernate.Profiler.Appender.Messages.LoggingEventMessage.Types.StackTraceInfo.WriteTo(CodedOutputStream output) in C:\NHProf\HibernatingRhinos.NHibernate.Profiler.Messages.cs:line 560
   at Google.ProtocolBuffers.CodedOutputStream.WriteMessage(Int32 fieldNumber, IMessage value) in C:\OSS\Buffy\ProtocolBuffers\CodedOutputStream.cs:line 222
   at HibernatingRhinos.NHibernate.Profiler.Appender.Messages.LoggingEventMessage.WriteTo(CodedOutputStream output) in C:\NHProf\HibernatingRhinos.NHibernate.Profiler.Messages.cs:line 843
   at Google.ProtocolBuffers.CodedOutputStream.WriteMessage(Int32 fieldNumber, IMessage value) in C:\OSS\Buffy\ProtocolBuffers\CodedOutputStream.cs:line 222
   at HibernatingRhinos.NHibernate.Profiler.Appender.Messages.MessageWrapper.WriteTo(CodedOutputStream output) in C:\NHProf\HibernatingRhinos.NHibernate.Profiler.Messages.cs:line 1830
   at Google.ProtocolBuffers.CodedOutputStream.WriteMessage(Int32 fieldNumber, IMessage value) in C:\OSS\Buffy\ProtocolBuffers\CodedOutputStream.cs:line 222
   at Google.ProtocolBuffers.MessageStreamWriter`1.Write(T message) in C:\OSS\Buffy\ProtocolBuffers\MessageStreamWriter.cs:line 26
   at HibernatingRhinos.NHibernate.Profiler.Appender.NHibernateProfilerAppender.WriteToProfilerWithRetries(IEnumerable`1 wrappers, Int32 retryCount) in C:\NHProf\NHibernateProfilerAppender.cs:line 150

Aha, I thought, I know what is wrong: “select() is broken”. With that in mind, I looked at the relevant source file and tried to figure out what the bug was. Here are the relevant sections:

[screenshots of the relevant writing code]

Looking at that piece of code, it was very clear where the error was. Do you see it?

I won’t blame you if you don’t; the error is clear because it is not there. This is perfectly legal code, with no room for errors. Since we did get an error, it is clear that something is broken, and the problem is at a higher level. Usually, in such scenarios, it is a race condition that causes a violation of invariants. And that is the case here. Looking at the code, it was clear what the problem was: I was synchronizing access to the stream, but I wasn’t synchronizing access to the writer.
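
In code terms, the shape of the bug and of the fix looks roughly like this (a minimal sketch with illustrative names, not the actual NH Prof code):

// A shared writer used from multiple threads. Locking only the underlying
// stream is not enough: the writer buffers state of its own, so the whole
// Write call must happen under a single lock.
private readonly object writeLock = new object();
private readonly MessageStreamWriter<MessageWrapper> writer;

public void Send(MessageWrapper wrapper)
{
    lock (writeLock) // synchronize the writer, not just the stream
    {
        writer.Write(wrapper);
    }
}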

In this case, we actually got a real error, which pointed me very quickly to the real issue. In most scenarios, what actually happened was silent data corruption at the writing end, which I had interpreted as problems with an unreliable network.

Again, yuck, and I feel really stupid.
