Oren Eini

CEO of RavenDB

a NoSQL Open Source Document Database

Get in touch with me:

[email protected] +972 52-548-6969

time to read 4 min | 601 words


No, I'm not abandoning NHibernate for greener pastures, I'm talking about this post.
I'm not sure what the author's point was, but on first reading, it read like FUD to me. Maybe it is the general tone of the post, or that there are a lot of numbers without any clue how the author got them, but that was my first feeling. A lot of the points that are raised are valid, but I just don't understand where the numbers came from.
I had a math teacher who used to look at a student's exercise and say, "And suddenly there is a QED," when we did something stupid. I feel like saying that here.

Take for instance the part about entities and methods. I present you my most common method of dealing with O/RM:

public interface IRepository<T>
{
    void Save(T item);
    void Delete(T item);
    ICollection<T> Find(params Predicate<T>[] preds);
    T Load(int id);
    T TryLoad(int id);
}

That is it. The implementation is generic as well, so I get five methods, no matter how many entities I have. In theory I can switch from O/RM to DAO to SOA to AOM (Acronym Of the Month) with very little effort: I would replace the implementation of IRepository<T>, and I'm done. Five methods, about 2.5 hours if I understand the math correctly; we'll add a couple of days to be safe, but we are still under 20 hours. I don't even need to touch the UI, since it works against the interface, not the implementation.
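To make the "swap the implementation" claim concrete, here is a minimal sketch of what an alternate implementation could look like. This in-memory version (the kind of thing you'd use in tests) is purely illustrative, and the id-accessor delegate it takes is my own assumption, not part of the original interface:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// The interface is repeated here so the sample stands alone.
public interface IRepository<T>
{
    void Save(T item);
    void Delete(T item);
    ICollection<T> Find(params Predicate<T>[] preds);
    T Load(int id);
    T TryLoad(int id);
}

// Hypothetical in-memory implementation. Because the UI works against
// IRepository<T>, swapping this for an O/RM-backed version touches
// nothing else.
public class InMemoryRepository<T> : IRepository<T>
{
    private readonly Dictionary<int, T> items = new Dictionary<int, T>();
    private readonly Func<T, int> getId; // illustrative: how we get an item's id

    public InMemoryRepository(Func<T, int> getId)
    {
        this.getId = getId;
    }

    public void Save(T item)
    {
        items[getId(item)] = item;
    }

    public void Delete(T item)
    {
        items.Remove(getId(item));
    }

    public ICollection<T> Find(params Predicate<T>[] preds)
    {
        // an item matches only if every predicate accepts it
        return items.Values.Where(x => preds.All(p => p(x))).ToList();
    }

    public T Load(int id)
    {
        T item;
        if (!items.TryGetValue(id, out item))
            throw new InvalidOperationException("No item with id " + id);
        return item;
    }

    public T TryLoad(int id)
    {
        T item;
        items.TryGetValue(id, out item);
        return item; // default(T) when missing
    }
}
```

The point is not that a dictionary is equivalent to an O/RM, of course; it is that the five-method surface is all the UI ever sees.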
The problem with this math is that it is based on nothing in particular. I recently had to evaluate moving from one O/RM to another, and I had to survey the number of places where I was dependent on the O/RM. I counted 23 major features that I had to have if I didn't want to basically rewrite my application from scratch.
Those ranged from lazy loading to cascades to reference equality for same entities in the same session, etc.
Replacing the O/RM would have been hard at best, impossible at worst, and painful whatever I did. And that was with a very similar interface isolating me from the O/RM (even though I never had any intention of replacing it).

Frankly, the issue is not supporting the interface, but far smaller interactions. Knowing that you can map the same field in a class to different properties in a hierarchy (an integer in one case, an object in another, etc.) is a very powerful tool, and one that will cause quite a bit of pain if you need to stop using it. That is where the cost of switching O/RMs comes into play.
If you are doing anything at all interesting with your O/RM of choice (or hand written DAL), then you are going to get into some serious trouble if you think about switching.

For instance, I'm using NHibernate's table-per-class-hierarchy mapping to save strategies to the database, so when I load the object later, I'll get all its saved strategies with their parameters. It makes my life so much easier, but I don't even want to think about what I would have to do if I had to write it all on my own.
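For readers who haven't seen it, a table-per-class-hierarchy mapping looks roughly like the sketch below; the class and column names here are made up for illustration, not taken from my actual project:

```xml
<!-- All Strategy subclasses share one table; a discriminator column
     tells NHibernate which subclass to instantiate on load. -->
<class name="Strategy" table="Strategies">
  <id name="Id" type="Int32">
    <generator class="native" />
  </id>
  <discriminator column="StrategyType" type="String" />
  <property name="Parameters" />
  <subclass name="AggressiveStrategy" discriminator-value="Aggressive" />
  <subclass name="ConservativeStrategy" discriminator-value="Conservative" />
</class>
```

Hand-rolling the equivalent, with the discriminator bookkeeping and the per-subclass hydration, is exactly the sort of thing I don't want to write myself.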
To conclude, I agree with the first two observations and disagree with the third. You can certainly optimize a good O/RM as much as you like; usually you don't even need to see the O/RM's code to do it.
I really don't agree with his conclusions, though.
time to read 2 min | 376 words


One of my tasks when I was in the army was to train people. Often it was new guys who had just arrived, but I also refreshed the knowledge and skills of our existing staff. One Wednesday afternoon I got an urgent call from headquarters: I was to drop everything and go to another base, to train several cadets in the final stages of their training.
I was literally yanked away in a matter of a few hours. The course that I needed to teach was supposed to be two to three weeks long, and existed mainly as a set of PowerPoint presentations written by someone who had never worked in the field.
I took a brief look at the material that I had and flat out refused to teach the course that way. It was, in fact, the same material that I had been taught, and it hadn't changed a bit in over two years (and a lot had changed since). So I just played the course by ear, without much sleep, but with a real devotion to making sure that the officers I trained would go into the field as prepared as I could make them in a classroom and on field trips.
I remember several times when I had to tell the cadets to wait for several minutes while I finished writing their next lesson, and then immediately giving it to them. Needless to say, in the breaks between lessons, I kept writing the next ones. It was also one of the times that I had the most fun in the army (sadly, it wasn't anywhere close to one of the times that I got the most sleep).
I had a similar experience today: writing a presentation and the lesson plan in the morning, doing a short trial run on some co-workers, and then teaching it later that day. In fact, I wrote the lesson for tomorrow while the students were doing exercises.


Oh, and one point that I am particularly proud of: the presentations and lessons that I created for that out-of-the-blue course were used in the army for several years afterward (actually, to my knowledge, they are still in use today).
time to read 1 min | 59 words

If you want to use NHibernate with .NET 2.0 nullable types (int?, DateTime?, etc.), all you need to do is specify the full type name of the (non-nullable) property.
So, if you have something like "DateTime? DueDate { get; set; }", the mapping will look something like this: "<property name='DueDate' type='System.DateTime'/>"
You don't need to do anything else.
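Put together, a minimal sketch (the class, table, and id details here are illustrative, not from a real project):

```xml
<!-- C# property on the entity:
         public DateTime? DueDate { get; set; }
     Mapping: note the type is the plain, non-nullable System.DateTime. -->
<class name="Task" table="Tasks">
  <id name="Id">
    <generator class="native" />
  </id>
  <property name="DueDate" type="System.DateTime" />
</class>
```

NHibernate stores a DueDate with no value as NULL in the column, and hands you back an empty DateTime? when it loads it.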

time to read 2 min | 336 words

I just watched The Lion, the Witch and the Wardrobe, the miniseries from the '80s.
It is far more primitive, but it has a certain charm.
The children actually look like real children. When they needed to show a pegasus, they actually used animation.
The England that is shown is more in line with my expectations of the time (a map of the war's progress on the wall, for instance), and the English accent is a delight.
I think I may have seen this as a child; there are some things that ring a bell, visually, but it is hard to know, since I have seen the new movie and read the books several times.
The White Witch looks far more like I thought she would from the description in the book, and not like something out of a Muscle Woman magazine.
I really liked the professor's attitude in this movie.

The animation is not on the level of Who Framed Roger Rabbit, but it leaves enough for the imagination to fill in, so I don't have a problem with it.
I just saw a unicorn in this movie that looked more real than any other. Aslan himself is both a surprise and a disappointment. He looks more or less like I imagined, a big lion, but his roar is not something that would frighten a kitten.
The White Witch is really a caricature, I think. Considering the way she is portrayed in the book, I'm not surprised by how she acts. It's all in line with the character; the problem is that the character is not believable.
The first time I have ever seen a flying lion anywhere, and it is in this movie. It doesn't look very real, but it gives them a chance to show some countryside, I guess.
The battle scene, which I really liked in the recent movie, is the part that really can't compare; it is laughably bad in all regards.
The end was better, I think.

time to read 1 min | 158 words

I just finished watching DNR TV #16, in which Carl is talking about async programming.
Some of the best DotNetRocks episodes were Carl and the host of the moment just goofing around and talking tech, so I had high expectations for this.
I'm not a VB programmer, so it's interesting to see how a VB guy works, and it even includes commentary.

One thing that I don't understand is this construct:

Try
    ' do something that can throw
Catch
    Throw
End Try

Isn't this a no-op?
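Almost, as far as I can tell: in both VB and C#, a bare rethrow preserves the original exception and its stack trace, so the construct changes nothing observable; the only difference from having no Try/Catch at all is a place to hang a breakpoint or some logging. A small C# sketch (the method names are mine):

```csharp
using System;
using System.Runtime.CompilerServices;

public class RethrowDemo
{
    [MethodImpl(MethodImplOptions.NoInlining)]
    public static void ThrowingMethod()
    {
        throw new InvalidOperationException("boom");
    }

    // The C# equivalent of the VB Try / Catch / Throw / End Try:
    // a bare `throw` rethrows the original exception as-is.
    public static void BareRethrow()
    {
        try
        {
            ThrowingMethod();
        }
        catch
        {
            throw; // the original stack trace survives the rethrow
        }
    }

    public static void Main()
    {
        try
        {
            BareRethrow();
        }
        catch (InvalidOperationException ex)
        {
            // the frame from ThrowingMethod is still in the trace
            Console.WriteLine(ex.StackTrace.Contains("ThrowingMethod"));
        }
    }
}
```

The variant that is not a no-op is `catch (Exception ex) { throw ex; }`, which resets the stack trace to the rethrow point.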

This video reminds me of a Joel On Software article about how hard it is to write a function that copies a file.
There is a lot of commentary on some ancient VB stuff along the way, since Carl often compares the way things are done now with VB6 and previous versions.
It is a nice show.

time to read 3 min | 538 words

I just finished a really nice project that involved heavy use of multithreaded code. The project is enterprisey (a term that has turned into a really bad word recently, thanks to Josling and The Daily WTF), so I needed to make sure that I was handling several failure cases appropriately.

Now, testing multithreaded code is hard. It is hard because the code under test and the test itself run on separate threads, and you need a way to tell the test that a unit of work has completed, so it can verify it. Worse, an unhandled exception in a background thread can get lost, so you may get false positives from your tests.

In this case, I used events to signal to the tests that something happened, even though the code itself didn't need them at the beginning (and didn't need many of them at all in the end). Then it was fun making sure that none of the tests would complete before the code under test (there has got to be a better technical term for this; CUT just doesn't do it for me) had completed, and that there were no false positives.
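Stripped down, the arrangement I'm describing looks something like this sketch; the Worker class and its names are illustrative, not the actual project code:

```csharp
using System;
using System.Threading;

// Hypothetical worker: runs its job on a background thread, captures
// any failure, and raises WorkCompleted when it is done, so a test
// can wait on the event instead of racing the thread.
public class Worker
{
    public event EventHandler WorkCompleted;
    public Exception Error { get; private set; }

    public void Start()
    {
        Thread thread = new Thread(() =>
        {
            try
            {
                // the actual unit of work would go here
            }
            catch (Exception e)
            {
                // capture the exception instead of losing it with the
                // thread, which is what produces false positives in tests
                Error = e;
            }
            finally
            {
                EventHandler handler = WorkCompleted;
                if (handler != null)
                    handler(this, EventArgs.Empty);
            }
        });
        thread.IsBackground = true;
        thread.Start();
    }
}

public class Program
{
    public static void Main()
    {
        Worker worker = new Worker();
        ManualResetEvent done = new ManualResetEvent(false);
        worker.WorkCompleted += delegate { done.Set(); };

        worker.Start();

        // the test blocks here instead of racing the worker thread
        if (!done.WaitOne(5000))
            throw new Exception("worker did not signal completion");
        if (worker.Error != null)
            throw new Exception("worker failed: " + worker.Error);

        Console.WriteLine("work completed");
    }
}
```

The timeout matters: without it, a bug that stops the event from firing turns the test into a hang rather than a failure.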

As I was developing the application, I ran into issues that broke the tests. Sometimes it was plainly bugs in the thread safety of the tests themselves, and I could see that the code was doing what it should, but the test was not handling it correctly. I was really getting annoyed with making sure that the tests ran correctly in all scenarios.

In the end, though, when I was just about done, I ran the tests one last time, and they failed. The error pointed to a threading problem in a test: it was accessing a disposed resource, and I couldn't quite figure out why. Some time later, I found out that an event that was supposed to fire once was firing twice. This one was a real bug in the code, which was caught by the test (sometimes it passed; the timing had to be just wrong for it to fail).

Actually, this was a sort of accident, since I never thought that this event could fire twice, and I didn't write a test to verify that it fired just once. (In this case, it was the WorkCompleted event, which was my signal to end the test, so I'm not sure how I could have tested it, but never mind that.)

I spent some non-trivial time writing the tests, but they allowed me to work out a rough cut of the functionality and then refine it, knowing that it was working correctly. That was how I managed to move safely from deleting each row by itself to the BulkDeleter that I blogged about earlier, and to tune the consumers, etc. That final bug was just a bonus, a way to show me that the approach I used was correct.

Now, the tests didn't show me that if I try to shove 30,000 times more data into the program than it expected, it is going to fail; that was something that load testing discovered. But the tests will allow me to fix this without fear of breaking something else.

time to read 8 min | 1522 words

I wrote a Windows Service and I couldn't get the service to start properly. After trying for too long to debug it in service mode, I gave up and ran it as a console application; I immediately got the following error:

Unhandled Exception: System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.

I thought I had left that kind of bug behind when I left C++. Just to be clear, there isn't a hint of unsafe / unmanaged code in the application. It turned out that my initialization was throwing an exception, and my error handling was calling Stop(). That seems to be the trigger.

I reproduced the error with this code:

class Program
{
    static void Main(string[] args)
    {
        new TestSrv().SimulateStart();
    }

    public class TestSrv : ServiceBase
    {
        public void SimulateStart()
        {
            this.OnStart(null);
        }

        protected override void OnStart(string[] args)
        {
            try
            {
                throw new Exception();
            }
            catch (Exception e)
            {
                // log exception
                this.Stop();
            }
        }
    }
}

The fix was not to call Stop() from OnStart(), but to use Environment.Exit(-1), which tells the Service Control Manager that we exited with errors.

I filed a ladybug here.
