time to read 2 min | 387 words

RavenDB stores JSON documents. Internally on disk, we actually store the values in BSON format. This works great, but there are occasions where users are storing large documents in RavenDB.

In those cases, we have found that compressing those documents can drastically reduce the on-disk size of the documents.

Before we go on, we have to explain what this is for. It isn’t actually disk space that we are trying to save, although that is a nice benefit. What we are actually trying to do is reduce the IO cost we incur when loading / saving documents. By compressing the documents before they hit the disk, we can save valuable IO time (at the expense of relatively bountiful CPU time). Reducing the amount of IO we use has a nice impact on performance, and it means that we can fit more documents in our page cache without running out of room.

And yes, it does reduce the total disk size, but the major thing is the IO cost.
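The actual storage layer is more involved, but the core idea is just this (an illustrative sketch using the standard System.IO.Compression classes, not RavenDB’s actual code):

// Requires System.IO, System.IO.Compression and System.Text
public static byte[] Compress(string documentJson)
{
    using (var output = new MemoryStream())
    {
        // GZipStream must be closed to flush its final block
        using (var gzip = new GZipStream(output, CompressionMode.Compress))
        {
            var bytes = Encoding.UTF8.GetBytes(documentJson);
            gzip.Write(bytes, 0, bytes.Length);
        }
        return output.ToArray();
    }
}

public static string Decompress(byte[] stored)
{
    // Reading a document back is the mirror image
    using (var input = new GZipStream(new MemoryStream(stored), CompressionMode.Decompress))
    using (var reader = new StreamReader(input, Encoding.UTF8))
    {
        return reader.ReadToEnd();
    }
}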

Note that we only support compression for documents, not for indexes. The reason for that is quite simple: for indexes, we are doing a lot of random reads, whereas with documents, we almost always read or write the whole thing.

Because of that, we would have needed to break the index apart into manageable chunks (and thus allow random reads), but that would pretty much ensure a poor compression ratio. We ran some tests, and it just wasn’t worth the effort.

A final thought, this feature is going to be available for RavenDB Enterprise only.

I am not showing any code because the only thing you need to do to get it to work is use:

<add key="Raven/ActiveBundles" value="Compression"/>

And everything works, just a little bit smaller on disk :-)

time to read 1 min | 133 words

A common complaint that we hear about RavenDB 1.0 is that it depends on Newtonsoft.Json 4.0.8, while many libraries are already using 4.5.7. We already resolved the problem once and for all in the RavenDB 1.2 branch, but that is still a few months away from going live.

Therefore, we created a new NuGet package: http://nuget.org/packages/RavenDB.Client/1.0.971

This NuGet package is exactly the same as 960, except that we compiled it against Newtonsoft.Json 4.5.7. Note that this is supported for client mode only; if you want to run RavenDB Server or RavenDB Embedded, it is still going to require Newtonsoft.Json 4.0.8 in the 1.0 version.

The main idea is that you can run against RavenDB Server using Newtonsoft.Json 4.5.7 on the client side, which is the most common scenario for RavenDB.
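If you are installing from the package manager console, you can pin the version explicitly with something like:

Install-Package RavenDB.Client -Version 1.0.971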

time to read 1 min | 138 words

I mentioned a few times that we have been combing Europe recently, teaching RavenDB in a lot of courses.

The time is approaching for the same thing to happen in the States as well.

In this tour, we are also going to dedicate some time to discussing some of the really awesome features that we have coming up in the next release of RavenDB. I talked about some of them on the blog, and more are coming; come see us and learn all about it…

time to read 4 min | 659 words

I promised that I’ll talk about the actual implementation details of how RavenDB deals with changes, after moving from SignalR to our own implementation.

First, let us examine the problem space. We need to be able to get notified by the server whenever something interesting happens. We don’t want to do active polling.

That leaves the following options:

  • TCP Connection
  • WebSockets
  • Long Polling
  • Streamed download

TCP connections won’t work here. We are relying on HTTP for all things, and I like HTTP. It is easy to work with, there are great tools (thanks, Fiddler!) around for it, and you can debug/test/scale it without major hurdles. Writing your own TCP socket server is a lot of fun, but debugging why something went wrong is not.

WebSockets would have been a great option, but they aren’t widely available yet, and won’t work well without special servers, which I currently don’t have.

Long Polling is an option, but I don’t like it. It seems like a waste and I think we can do better.

Finally, we have the notion of a streamed download. This is basically the client downloading from the server, but instead of the entire response downloading in one go, the server sends events whenever it has something.

Given our needs, this is the solution that we chose in the end.

How it works is a tiny bit complex, so let us see if I can explain with a picture. This is the Fiddler trace that you see when running a simple subscription test:

[Figure: Fiddler trace of a simple subscription test]

The very first thing that happens is that we make a request to /changes/events?id=CONNECTION_ID. The server is going to keep this connection open, and whenever it has something new to send to the client, it will use this connection. In order to get this to work, you have to make sure to turn off buffering in IIS (HttpListener doesn’t do buffering), and when running in Silverlight, you have to disable read buffering. Once that is done, on the client side you need to read from the server in an async manner and raise events whenever you get a full response back.

For our purposes, we used new lines as response markers, so we read from the stream until we get a new line, raise that event, and move on.
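In other words, the client side boils down to something like this (a simplified sketch of the idea; the URL and the handler name here are mine, and a real implementation would read asynchronously):

var request = (HttpWebRequest)WebRequest.Create(
    "http://localhost:8080/changes/events?id=1/J0JP5");
// In Silverlight you would also set request.AllowReadStreamBuffering = false
using (var response = request.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
{
    string line;
    // ReadLine only returns once the server has pushed a complete, newline terminated event
    while ((line = reader.ReadLine()) != null)
    {
        RaiseChangeNotification(line); // hypothetical handler that raises the client side event
    }
}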

Now, HTTP connections are only good for one request/response. So we actually have a problem here, how do we configure this connection?

We use a separate request for that. Did you notice that we have this “1/J0JP5” connection id? This is generated on the client (part always-incrementing number, part random) for each connection. The first part is a sequential id that is used strictly to help us debug things; “1st request, 2nd request” is a lot easier than J0JP5 or some guid.
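Generating such an id could look something like this (my illustration of the scheme, not the actual RavenDB code):

private static int connectionCounter;

public static string NextConnectionId()
{
    // Sequential part: "1st request, 2nd request" is easy to follow when debugging
    var sequential = Interlocked.Increment(ref connectionCounter);
    // Random part: keeps ids unique across client restarts
    var random = Guid.NewGuid().ToString("N").Substring(0, 5).ToUpperInvariant();
    return sequential + "/" + random;
}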

We can then issue commands for this connection; in the sample above you can see those commands for watching a particular document and finally stopping altogether.

This is what the events connection looks like:

[Figure: Fiddler trace of the events connection]

Each change will be a separate line.

Now, this isn’t everything, of course. We still have to deal with errors and network hiccups; we do that by aborting the events connection and retrying. On the server, we keep track of connections and of pending messages per connection, and if you reconnect within the timeout limit (a minute or so), you won’t miss any changes.
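The server side bookkeeping for that can be sketched roughly like so (an assumed shape using the standard concurrent collections, not the actual implementation):

public class ConnectionState
{
    public DateTime LastSeen;
    public readonly ConcurrentQueue<string> PendingMessages = new ConcurrentQueue<string>();
}

private readonly ConcurrentDictionary<string, ConnectionState> connections =
    new ConcurrentDictionary<string, ConnectionState>();

public void Publish(string connectionId, string change)
{
    ConnectionState state;
    if (connections.TryGetValue(connectionId, out state))
        state.PendingMessages.Enqueue(change); // drained when the client is (re)connected
}

public void EvictStaleConnections()
{
    foreach (var kvp in connections)
    {
        // If the client hasn't been seen within the timeout (a minute or so), drop its state
        if (DateTime.UtcNow - kvp.Value.LastSeen > TimeSpan.FromMinutes(1))
        {
            ConnectionState ignored;
            connections.TryRemove(kvp.Key, out ignored);
        }
    }
}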

If this sounds like the way SignalR works, that is no accident. I think that SignalR is awesome software, and I copied much of the design ideas off of it.

time to read 2 min | 227 words

I mentioned yesterday that I am keeping the best for today. What I am going to show you is how you can use Eval Patching for keeping track of denormalized references.

In this case, we have Users & Posts. Each Post contains a UserName property as well as the user id. When the user changes his name, we need to update all of the relevant posts.

Here is how you can do this:

store.DatabaseCommands.UpdateByIndex("Posts/ByUser",
    new IndexQuery{Query = "UserId:" + userId},
    new AdvancedPatchRequest
        {
            Script = @"
var user = LoadDocument(this.UserId);
this.UserName = user.Name;
"
        });

And this is a really simple scenario. The options that this opens up, the ability to load a separate document and modify the current document based on its value, are really powerful.

time to read 8 min | 1539 words

I am not sure what to call this issue, except maddening. For a simple repro, you can check this github repository.

The story is quite simple, let us assume that you need to send a set of values from the server to the client. For example, they might be tick values, or updates, or anything of this sort.

You can do this by keeping an HTTP connection open and sending data periodically. This is a well known technique and it works quite well. Except in Silverlight, where it works, but only if you put the appropriate Thread.Sleep() in crucial places.

Here is an example of the server behavior:

var listener = new HttpListener
{
    Prefixes = {"http://+:8080/"}
};
listener.Start();
while (true)
{
    var ctx = listener.GetContext();
    using (var writer = new StreamWriter(ctx.Response.OutputStream))
    {
        writer.WriteLine("first\r\nsecond");
        writer.Flush();
    }
    Console.ReadKey();
}

In this case, note that we are explicitly flushing the response, then just waiting. If you look at the actual network traffic, you can see that this is actually sent, the connection remains open, and we can send additional data as well.

But how do you consume such a thing in Silverlight?

var webRequest = (HttpWebRequest)WebRequestCreator.ClientHttp.Create(new Uri("http://localhost:8080/"));
webRequest.AllowReadStreamBuffering = false;
webRequest.Method = "GET";

Task.Factory.FromAsync<WebResponse>(webRequest.BeginGetResponse, webRequest.EndGetResponse, null)
    .ContinueWith(task =>
    {
        var responseStream = task.Result.GetResponseStream();
        ReadAsync(responseStream);
    });


We start by making sure that we disable read buffering, then we get the response and start reading from it. The read method is a bit complex, because it has to deal with partial responses, but it should still be fairly obvious what is going on:

private byte[] buffer = new byte[128];
private int posInBuffer;
private void ReadAsync(Stream responseStream)
{
    Task.Factory.FromAsync<int>(
        (callback, o) => responseStream.BeginRead(buffer, posInBuffer, buffer.Length - posInBuffer, callback, o),
        responseStream.EndRead, null)
        .ContinueWith(task =>
        {
            var read = task.Result;
            if (read == 0) 
                throw new EndOfStreamException();
            // find \r\n in newly read range

            var startPos = 0;
            byte prev = 0;
            bool foundLines = false;
            for (int i = posInBuffer; i < posInBuffer + read; i++)
            {
                if (prev == '\r' && buffer[i] == '\n')
                {
                    foundLines = true;
                    // yeah, we found a line, let us give it to the users
                    var data = Encoding.UTF8.GetString(buffer, startPos, i - 1 - startPos);
                    startPos = i + 1;
                    Dispatcher.BeginInvoke(() =>
                    {
                        ServerResults.Text += data + Environment.NewLine;
                    });
                }
                prev = buffer[i];
            }
            posInBuffer += read;
            if (startPos >= posInBuffer) // read to end
            {
                posInBuffer = 0;
                return;
            }
            if (foundLines == false)
                return;

            // move remaining to the start of buffer, then reset
            Array.Copy(buffer, startPos, buffer, 0, posInBuffer - startPos);
            posInBuffer -= startPos;
        })
        .ContinueWith(task =>
        {
            if (task.IsFaulted)
                return;
            ReadAsync(responseStream);
        });
}

While I am sure that you could find bugs in this code, that isn’t the crucial point.

If we run the server, then run the SL client, we can see that we get just one lousy byte, and that is it. Now, reading about this, it appears that in some versions of some browsers, you need to send 4KB of data to get the connection going. But that isn’t what I observed. I tried sending 4KB+ of data, and I still saw the exact same behavior: we got called for the first byte, and nothing else.

Eventually, I boiled it down to the following non working example:

writer.WriteLine("first");
writer.Flush();
writer.WriteLine("second");
writer.Flush();

Versus this working example:

writer.WriteLine("first");
writer.Flush();
Thread.Sleep(50);
writer.WriteLine("second");
writer.Flush();

Yes, you got it right: if I put the Thread.Sleep in the server, I’ll get both values in the client. Without the Thread.Sleep, we get only the first byte. It seems like it isn’t an issue of size, but rather of time, and I am at an utter loss to explain what is going on.

Oh, and I have now been awake for 27 hours straight, most of them spent trying to figure out what the )(&#@!)(DASFPOJDA(FYQ@YREQPOIJFDQ#@R(AHFDS:OKJASPIFHDAPSYUDQ)(RE is going on.

time to read 6 min | 1025 words

Oh, wait, that is actually Eval Patching.

From the very start, RavenDB supported the ability to patch documents. To send a command to the server with some instructions about how to modify a document or a set of documents. For example, we have this:

documentStore.DatabaseCommands.Patch(
    "blogposts/1234",
    new[]
    {
        new PatchRequest
        {
            Type = PatchCommandType.Add,
            Name = "Comments",
            Value = RavenJObject.FromObject(comment)
        }
    });

This approach works, is easy to understand and support, and is quite simple to implement.

Unfortunately, it is limited. Users have all sorts of very complex scenarios that they want to run that this approach isn’t really suitable for. For example, if a user wanted to move from FirstName and LastName properties to a single FullName property, this won’t give that to you.

Enter Matt Warren, who has contributed some really nice features to RavenDB (like facets), and who contributed the ability to do patching by sending a JavaScript function to the server.

Here is how it works using the new syntax:

store.DatabaseCommands.Patch("blogposts/1234",
    new AdvancedPatchRequest
    {
        Script = "this.Comments.push(newComment)",
        Values = {{"newComment", comment}}
    });

Note that you can send variables to the server and they are exposed to your script.

How about our previous example of moving from FirstName, LastName to FullName? Let us see:

store.DatabaseCommands.UpdateByIndex("Raven/DocumentsByEntityName",
 new IndexQuery{Query = "Tag:Users"},
 new AdvancedPatchRequest
    {
        Script = @"
this.FullName = this.FirstName + ' ' + this.LastName;
delete this.FirstName;
delete this.LastName;
"
    }
);

So we support full computation abilities during the patch :-) Now you can modify things pretty much however you like.

Here are a few other interesting things you can do.

Remove an item by value from an array:

store.DatabaseCommands.Patch("blogposts/1234",
    new AdvancedPatchRequest
    {
        Script = "this.Tags.Remove(tagToRemove)",
        Values = {{"tagToRemove", "Interesting"}}
    });

Remove an item using a condition:

store.DatabaseCommands.Patch("blogposts/1234",
    new AdvancedPatchRequest
    {
        Script = "this.Comments.RemoveWhere(function(comment) { return comment.Spam; });"
    });

This isn’t all, mind, but I’ll keep the really cool part for my next post.

time to read 3 min | 584 words

One of the major problems with async operations in .NET 4.0 is the fact that an unobserved task exception will ruthlessly kill your application.

Let us look at an example:

[Figure: code snippet calling CheckForUpdatesAsync() without observing the returned task]

On startup, check the server for any updates, without slowing down my system startup time. All well and good, as long as that server is reachable.

When it isn’t, it will throw an exception, but not on the current thread; it will be thrown on another thread, and when the task is finalized, it will raise an UnobservedTaskException. Okay, so I’ll fix that and write code like this:

CheckForUpdatesAsync().ContinueWith(task=> GC.KeepAlive(task.Exception));

And that would almost work, except the implementation of CheckForUpdatesAsync is:

private static Task CheckForUpdatesAsync()
{
    var webRequest = WebRequest.Create("http://myserver.com/update-check");
    webRequest.Method = "POST";
    return webRequest.GetRequestStreamAsync()
        .ContinueWith(task => task.Result.WriteAsync(CurrentVersion)) // the task returned by WriteAsync is never observed
        .ContinueWith(task => webRequest.GetResponseAsync())
        .ContinueWith(task => new StreamReader(task.Result.GetResponseStream()).ReadToEnd())
        .ContinueWith(task =>
                          {
                              if (task.Result != "UpToDate")
                                  ShowUpdateDialogToUser();
                          });
}

Note the line that writes the current version to the request stream, where we are essentially ignoring a failure to write to the server. That task is going to go away unobserved; the result is that when a GC happens, you’ll have an unobserved task exception.

This sort of error has all of the fun aspects of a good problem:

  • Only happens during errors
  • Async in nature
  • Brings down your application
  • Error location and error notification are completely divorced from one another

It is actually worse than having a memory leak!

This post explains some of the changes made with regards to unobserved exceptions in 4.5, and I wholeheartedly support them, but in 4.0, writing code that uses the TPL is easy and fun, yet requires careful code review to make sure that you aren’t leaking an unobserved exception.
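As a side note, one global safety net that 4.0 does give you (my suggestion, not something from the post above) is the TaskScheduler.UnobservedTaskException event, which lets you log the exception and mark it as observed before it can take the process down:

TaskScheduler.UnobservedTaskException += (sender, args) =>
{
    LogException(args.Exception); // placeholder for whatever logging you use
    args.SetObserved();           // prevents the escalation policy from killing the process
};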

time to read 9 min | 1776 words

Before I get to the entire story, a few things:

  • The SignalR team is amazingly helpful.
  • SignalR isn’t released, it is a 0.5 release.
    • Even so, the version that I was using was the very latest, not even the properly released 0.5 version.
  • My use cases are probably far out from what SignalR is set out to support.
  • A lot of the problems were actually my fault.

One of the features for 1.2 is the changes feature, a way to subscribe to notifications from the databases, so you won’t have to poll for them. Obviously, this sounded like a good candidate for SignalR, so I set out to integrate SignalR into RavenDB.

Now, that ain’t as simple as it sounds.

  • SignalR relies on Newtonsoft.Json, which RavenDB also used to use. The problem with version compatibility meant that we ended up internalizing this dependency, so we had to resolve this first.
  • RavenDB runs in IIS and as its own (HttpListener based) host. SignalR does the same, but makes assumptions about how it runs.
  • We need to minimize connection counts.
  • We need to support logic & filtering for events on both server side and client side.

The first two problems we solved by brute force. We internalized the SignalR codebase and converted its Newtonsoft.Json usage to RavenDB’s internalized version. Then I modified one of the SignalR hosts to allow us to integrate it with the way RavenDB works.

So far, that was a relatively straightforward process. Then we had to write the integration parts. I posted about the external API yesterday.

My first attempt to write it was something like this:

    public class Notifications : PersistentConnection
    {
        public event EventHandler Disposed = delegate { }; 
        
        private HttpServer httpServer;
        private string theConnectionId;

        public void Send(ChangeNotification notification)
        {
            Connection.Send(theConnectionId, notification);
        }
        public override void Initialize(IDependencyResolver resolver)
        {
            httpServer = resolver.Resolve<HttpServer>();
            base.Initialize(resolver);
        }

        protected override System.Threading.Tasks.Task OnConnectedAsync(IRequest request, string connectionId)
        {
            this.theConnectionId = connectionId;
            var db = request.QueryString["database"];
            if(string.IsNullOrEmpty(db))
                throw new ArgumentException("The database query string element is mandatory");

            httpServer.RegisterConnection(db, this);

            return base.OnConnectedAsync(request, connectionId);
        }

        protected override System.Threading.Tasks.Task OnDisconnectAsync(string connectionId)
        {
            Disposed(this, EventArgs.Empty);
            return base.OnDisconnectAsync(connectionId);
        }
    }

This is the very first attempt. I then added the ability to add items of interest via the connection string, but that is the basic idea.

It worked, I was able to write the feature, and aside from some issues that I had grasping things, everything was wonderful. We had passing tests, and I moved on to the next step.

Except that… sometimes… those tests failed. Once every so often, and that indicated a race condition.

It took a while to figure out what was going on, but basically, what happened was that sometimes SignalR uses a long polling transport to send messages. Note the code above: we register for events for as long as we are connected. In a long polling system (and in general in persistent connections that may come & go), it is quite common to have periods of time where you aren’t actually connected.

The race condition would happen because of the following sequence of events:

  • Connected
  • Got message (long polling, causes disconnect)
  • Disconnect
  • Message raised, client is not connected, message is gone
  • Connected
  • No messages for you

I want to emphasize that this particular issue is all me. I was the one misusing SignalR, and the behavior makes perfect sense.

SignalR actually contains a message bus abstraction exactly for those reasons, so I was supposed to use that. I know that now, but back then I decided that I was probably using the API at the wrong level, and moved to use hubs and groups.

In this way, you could connect to the hub, request to join the group watching a particular document, and voila, we are done. That was the theory, at least. In practice, this was very frustrating. The first major issue was that I just couldn’t get this thing to work.

The relevant code is:

return temporaryConnection.Start()
    .ContinueWith(task =>
    {
        task.AssertNotFailed();

        hubConnection = temporaryConnection;
        proxy = hubConnection.CreateProxy("Notifications");
    });

Note that I create the proxy after the connection has been established.

That turned out to be an issue: you have to create the proxy first, then call Start. If you don’t, SignalR will look like it is working fine, but will ignore all hub calls. I had to trace really deep into the SignalR codebase to figure that one out.

My opinion (already communicated to the team) is that if you start a hub connection without a proxy, that is probably an error and it should throw.
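For reference, the working order of operations looks roughly like this (my sketch based on the 0.5-era client API, with AssertNotFailed being the same helper as in the snippet above):

var temporaryConnection = new HubConnection("http://localhost:8080/");
// Create the proxy *before* calling Start(), otherwise hub calls are silently ignored
var proxy = temporaryConnection.CreateProxy("Notifications");
return temporaryConnection.Start()
    .ContinueWith(task =>
    {
        task.AssertNotFailed();
        hubConnection = temporaryConnection;
    });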

Once we got that fixed, things started to work, and the tests ran.

Most of the time, that is. Once in a while, the tests would fail. Again, the issue was a race condition. But I wasn’t doing anything wrong; I was using SignalR’s API in a way straight out of the docs. This turned out to be a probable race condition inside InProcessMessageBus, where, because of multiple threads running, registering for a group inside SignalR isn’t visible on the next request.

That was extremely hard to debug.

Next, I decided to do away with hubs. By this time, I had a lot more understanding of the way SignalR worked, and I decided to go back to persistent connections, and simply implement the message dispatch in my code, rather than rely on SignalR groups.

That worked great. The tests even passed more or less consistently.

The problem was that they also crashed the unit testing process, because of leaked exceptions. Here is one such case, in HubDispatcher.OnReceivedAsync():

return resultTask
    .ContinueWith(_ => base.OnReceivedAsync(request, connectionId, data))
    .FastUnwrap();

Note the “_” parameter (this is a convention I use as well, to denote a parameter that I don’t care about). The problem here is that this parameter is a task, and if that task failed, you have a major problem, because on .NET 4.0, this will crash your process. In 4.5, this is fine and can be safely ignored, but RavenDB runs on 4.0.
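One way to patch that without changing the flow (a sketch of the kind of fix involved, using the same GC.KeepAlive trick as earlier) is to explicitly observe the previous task’s exception in the continuation:

return resultTask
    .ContinueWith(prev =>
    {
        GC.KeepAlive(prev.Exception); // observe the failure so 4.0 won't kill the process
        return base.OnReceivedAsync(request, connectionId, data);
    })
    .FastUnwrap();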

So I found those places and I fixed them.

And then we ran into hangs. Specifically, we had issues with disposing of connections, and sometimes with not disposing them, and…

That was the point when I cut it.

I like the SignalR model, and most of the codebase is really good. But it is just not in the right shape for what I needed. By this time, I already had a pretty good idea about how SignalR operates, and it was the work of a few hours to get it working without SignalR. RavenDB now sports a streamed endpoint that you can register yourself with, and we have a side channel that you can use to send commands to the server. It might not be as elegant, but it is simpler by a few orders of magnitude, and once we figured that out, we had a full blown working system on our hands. All the tests pass, we have no crashes, yeah!

I will post exactly on how we did it in a future post.

time to read 2 min | 358 words

This was a really hard feature. I’ll discuss exactly why and how in my next post, but for now, I pretty much want to gloat about this. We now have the ability to subscribe to events from the server on the client.

This opens up some really nice stories for complex apps, but for now, I want to show you what it does:

 store.Changes()
      .DocumentSubscription("orders/1293")
      .Subscribe(Reload);

You can subscribe to multiple documents (or even all documents), and you can also subscribe to changes in indexes as well.

Why is this an awesome feature? It opens up a lot of interesting stories.

For example, let us assume that the user is currently editing an order. You can use this feature to detect, at almost no cost, when someone else has changed that order, saving the user the frustration of trying to save his changes and getting a concurrency exception.

You can also use this to subscribe to a particular index and update in-memory caches on update, so the data is kept in memory and you don’t have to worry about your cache being stale, because you’ll be notified when it changes and can act on that.
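For instance, assuming the index subscriptions mirror the document API shown above (the method name and index name here are my guesses, as is the ordersCache helper), the cache scenario would look something like:

store.Changes()
     .IndexSubscription("Orders/Totals")
     .Subscribe(change => ordersCache.Refresh()); // re-fetch so the in-memory copy never goes stale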

You can even use this to watch for documents of a particular type coming in and do something about that. For example, you might setup a subscription for all alerts, and whenever any part of the system writes a new alert, you will show that to the user.

The last one, by the way, is a planned feature for RavenDB Studio itself. As well as a few others that I’ll keep hidden for now :-)
