time to read 8 min | 1473 words

Several months ago we decided to ramp up the RavenDB on Linux migration effort, and hired a full-time developer to do just that.

We started this with great hopes, mostly because we were able to get Voron to run on Linux in a reasonable amount of time. But RavenDB is several orders of magnitude bigger than Voron, and we ran into a lot more complexities along the way.

In particular, and I am not quite sure how to put it nicely, the entire environment is pretty unstable. Our target was Mono 4.3.0, MonoDevelop 5.10 and Ubuntu 14.04. And it really takes no effort at all to break pretty much everything there.

For example, SLEEP_DURATION_BEFORE_ABORT means that if you are running inside a debugger, or just running a GC, Mono will sometimes intentionally crash if certain operations take longer than 200ms.

But in general, it feels like Mono just isn't nearly stable enough for a production platform. Sometimes it would work fine, other times we got horrible crashes in the process that required us to debug the Mono runtime to figure out what was going on. Sometimes it was us doing stupid things, other times it was real bugs in the Mono runtime. Other issues related to a missing websockets implementation (it appears to have been there and then removed, for some reason), missing chunked encoding support, etc.

As an aside, MonoDevelop in particular is… quite an uncomfortable IDE, and it doesn't compare well, at pretty much any level, to the experience you get from other IDEs. That just exacerbates the problem, to be frank.

In other words, porting RavenDB to Mono is a lot of hard work. But that was pretty much expected. What I didn't really expect was how much work it would be not to port it, but to actually fill in missing / incomplete parts of the framework itself.

Now, there are plenty of other problems in there even without Mono. There are the typical Windows –> Linux porting issues, anything from file paths to case sensitivity to finding different ways to do various things (from finding how busy the CPU is to getting low memory notifications to… well, you get the drift). Those are the kinds of problems that we expected, and the kinds that, frankly, we wanted to be solving.
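To give a concrete flavor of the kind of differences involved, here is a minimal, purely illustrative sketch; the helper names are my own and not RavenDB code:

using System;
using System.IO;

static class PortingExamples
{
    // File paths: never hardcode '\' as a separator, let the framework pick it.
    public static string DataFilePath(string baseDir, string name)
    {
        return Path.Combine(baseDir, "Data", name);
    }

    // Case sensitivity: on Linux, "Settings.json" and "settings.json" are two
    // different files, so name comparisons must be explicit about it.
    public static bool SameFileName(string a, string b, bool caseSensitiveFileSystem)
    {
        return string.Equals(a, b, caseSensitiveFileSystem
            ? StringComparison.Ordinal
            : StringComparison.OrdinalIgnoreCase);
    }

    // "How do I do X here?": things like memory and CPU information come from
    // completely different places (e.g. /proc on Linux vs. Win32 APIs).
    public static bool IsRunningOnUnix()
    {
        return Environment.OSVersion.Platform == PlatformID.Unix;
    }
}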

But the instability of the Mono runtime environment makes it really expensive to port any non-trivial software without spending most of the time debugging Mono. Even if the error is in our code, the only way to verify that is to actually debug Mono itself. And that puts a much higher cost on the actual porting effort. More to the point, this also means frightening things for trying to support this in production. I don't relish the thought of having to go to a customer and tell them that a particular issue is caused by a GC bug and that fixing it will require a new custom runtime. Leaving aside the actual cost of such a support call.

So we stopped, and looked at the CoreCLR. I have a much higher expectation of quality from Microsoft, given the track record of the .NET framework. The problem is that the CoreCLR, while it is supposed to have an RC out in November, is still quite problematic on Linux. For example, you can't use Unix paths in Uris, which broke us pretty early in the process. That was annoying, but expected for the time being, and while this issue (and other stuff we ran into; I'm only pointing this out as a simple example, so nitpickers, don't get hung up on it) is surmountable, the major issue is that the CoreCLR requires a lot more than just a few #ifdefs, it requires quite a lot of work. Probably restructuring the project (some dependencies are now NuGet packages, a different structure for projects and the build system, etc.).

Therefore, what we intend to do now is wait a bit, until the next CoreCLR release, probably in September, and then start seeing what it takes to run RavenDB (both server & client) under the CoreCLR on Windows. The idea is that we have much better tooling for working with .NET code on Windows, so hopefully we can do the bulk of the work to actually run on the CoreCLR runtime there, then just deal with everything we need to run on Linux, and not have to worry about debugging the runtime as well.

Now, to deal with the nitpickers:

Mono is Open Source, you can fix any issues you find and submit pull requests.

That is correct, but let me talk about a few of the issues we have run into.

The class library is partial / bad. I think that we would have done much better if we had been building on Mono from the get go; we would have known what pieces to avoid. But when a call to ZipFile.Open crashes with SIGSEGV and no indication of why that is happening (it ended up being a stack overflow in the Mono BCL implementation), the fix was a single line, but finding the root cause took a long while.

With the actual runtime, a GC issue caused a segmentation fault that took about a month to figure out.

Now, we aren't experts in Mono. So when we got in trouble we reached out to members of the Mono team, and were recommended a company that had the required expertise. I'm very happy with the service they provided, but the fact of the matter is, it took a month to narrow down a big problem to something that could be fixed by a couple of lines of code. Not because they weren't good, but because the problem was pretty tough to figure out.

So we contributed some stuff back, and we would be happy to continue doing so if the process weren't so hard (and quite expensive). Even relatively simple issues had a very high threshold.

The RavenDB codebase is close to a million lines of code. It is doing some pretty advanced stuff, and changing the foundation underneath it means that we invalidate a lot of silent assumptions. If the trivial stuff breaks so badly, I don’t want to think about the cost of the really complex stuff. The GC issue was bad enough.

If we were working from scratch, we would know what to avoid, but with such a big codebase, that isn't feasible. And again, even if we did manage to get it working properly, we still have the issue of how to support it. We provide production support to our customers, and the ability to accurately and quickly pinpoint problems and troubleshoot them is key. Taking on the support burden for Mono is very risky.

Note that the worry I have isn't based on vague fears. The GC issue we had would cause random crashes, typically long after the relevant code had run, and only under very specific scenarios. In 99% of the cases, it would run just fine (well, not fine, but it wouldn't be crashing). A production system with this type of behavior would be a nightmare. Even when escalating to the Mono core team, figuring out where and how this is happening is nasty, and it won't happen in the time frames that we want to provide to customers who have production support contracts with us.

What about the CoreCLR, it is Open Source too, why not contribute to it?

The current plan, as I said, is to first see what it takes to move to the CoreCLR on Windows. That is a much smaller step, but it will let us get familiar both with the way the CoreCLR is structured and with the new dependencies. We are actively looking at the project, and to say that I'm excited is not nearly enough. This requires a lot of work on our side before we get to the point where we can actually figure out if there is stuff that needs fixing there too.

Once we have that, we are going to go back to Linux and get to work on running on a different OS, with all the implications of that. I'm actually looking forward to this. That is the kind of problem that I want to tackle.

If you want to look at the current state of our work (RavenDB on Linux using Mono), you can check it out here. We still have a full-time developer for this; we are just going to divert him until the next beta of the CoreCLR is out, and then we are going to restart the process as I described above.

time to read 3 min | 496 words

A customer gave us a call about a failure they were experiencing in their production environment. For some reason they hadn't installed the license that they purchased, and when they tried to install it, RavenDB would not run.

Here is what this looked like:

[image]

Once I had all those details, it was pretty easy to figure out what was going on. I asked the client to send me the Raven.Server.exe.config file, just to verify it, and sure enough, here are the problematic lines:

<add key="Raven/AnonymousAccess" value="Admin"/>
<add key="Raven/Licensing/AllowAdminAnonymousAccessForCommercialUse" value="false" />

This is the default configuration, and this failure is actually the expected and desired behavior.

What is going on? This customer was running RavenDB in a development mode, without a license. That means that the server is open to all. When you install a license, that is a pretty strong indication that you are using RavenDB in production. It is actually common to see users installing the development mode in production, and registering the license at a later date, for various reasons.

The problem with that is that this means that, at least for a while, they were running in "everyone is admin" mode, which is great for development, but horrible for production. If you install RavenDB for production usage (by providing a license during the setup process), it will set itself up in locked down mode, so only users explicitly granted access can get to it. But if you started with a development installation and then added the license…

It is common for customers to forget about, or actually be unaware of, that setting. And not setting it is going to end up with a production installation that is open to the whole wide world.

When that happens, it tends to be… bad. "Over 30,000 instances that are wide open" bad.

Because of that, if you are running with a license, and you had previously installed in development mode, you need to make a choice. Either you configure anonymous access so that only authorized people can access the database, or you explicitly decide to grant everyone access, likely because you are already running in a secured environment.
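For reference, the two choices map to configuration values along these lines (illustrative; check the exact values against the documentation for your RavenDB version):

<!-- Lock the server down: anonymous users get no access -->
<add key="Raven/AnonymousAccess" value="None"/>

<!-- Or: explicitly keep the open, "everyone is admin" behavior on a licensed server -->
<add key="Raven/AnonymousAccess" value="Admin"/>
<add key="Raven/Licensing/AllowAdminAnonymousAccessForCommercialUse" value="true"/>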

Error reporting from services is a bit hard, because there is no good way to send error messages to the service managers. But in the event log, we can see the actual error with the full details.

[image]
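As an aside, a minimal sketch of how a service can get the full details into the event log (generic .NET code, not the RavenDB implementation; it assumes the event source was already registered during installation):

using System;
using System.Diagnostics;

static class ServiceErrorReporting
{
    public static void ReportFatalError(Exception e)
    {
        // The service manager only sees "the service failed to start", so write
        // the full exception somewhere an administrator can actually find it.
        EventLog.WriteEntry("MyServiceSource", e.ToString(), EventLogEntryType.Error);
    }
}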

time to read 2 min | 388 words

I just finished writing the following code, the time is 2 AM local time, and I am chasing a conceptual issue deep in the guts of Voron (RavenDB’s internal storage system).

[image]

By any measure, this code should frighten anyone who reads it. For what I think we can all safely assume are self-evident reasons.

So why am I so very happy about this kind of code? Because the output of this code is this:

[image]

This is a pretty trivial tree view of the internal data structures inside Voron.

But the point here isn’t that this code is probably going to be a no-go zone by tomorrow morning. This is code explicitly designed for that. In fact, the calling method for this is:

[image]

So this is a piece of code that we are trying very hard to ensure will never be called unless we are actively requesting it in a debug session. In other words, this isn't production code. It is scaffolding code. The code that you'll call when you have lost your way, and can no longer figure out what is going on using the standard tools.

This kind of visualization is a cheap & dirty trick, but it means that instead of having to reason about this kind of thing, I can immediately see what is going on.

For that matter, if you look closely at the visualization, you might be able to see the bug in question. It is pretty obvious, once we are able to look at the entire data set in one go. The more complex the algorithm, the harder it is to conceptualize it. But in most cases, if we could but look at the data in a way that would make sense to us, we would be able to immediately see what we did wrong.
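To make the idea of scaffolding code concrete, here is a minimal sketch of the shape such a debug-only visualizer can take (illustrative only, not the actual Voron code):

using System;
using System.Diagnostics;
using System.Text;

class TreeNode
{
    public string Key;
    public TreeNode[] Children = new TreeNode[0];
}

static class DebugScaffolding
{
    // Compiled into debug builds only, and intended to be invoked manually
    // (e.g. from the debugger's immediate window) when you are lost.
    [Conditional("DEBUG")]
    public static void RenderTree(TreeNode root)
    {
        var output = new StringBuilder();
        Render(root, output, indent: 0);
        Debug.WriteLine(output.ToString());
    }

    static void Render(TreeNode node, StringBuilder output, int indent)
    {
        output.Append(new string(' ', indent * 2)).AppendLine(node.Key);
        foreach (var child in node.Children)
            Render(child, output, indent + 1);
    }
}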

time to read 4 min | 727 words

The background for this post:

Recently I got an email from a startup founder about a service that they are offering. It so happened that this service matched something that I was actually considering doing, so I was very happy to try it out. I registered, and on two separate occasions I attempted to use the service for its intended purpose. I wasn't able to do that. I got some errors, and I'm not sure if they were data validation issues or plain errors.

After getting in touch with the founder in question, he pointed me to the guide which (presumably) had step by step instructions on how to get things working. I commented that this shouldn’t be this hard and got a reply that this is just the way it is, anywhere, not just with this service. It was at that point that I gave up on the service entirely.

A few things to note: this is a web-based service (and I'm intentionally not revealing which one) that is supposed to allow collaboration in one –> many scenarios. I was trying to use it as a publisher.

This experience jarred me, for a very simple reason. It shouldn't be this hard. And sending a user that just wants to get their feet wet to the guide is a really bad mistake. You can see the funnel on the right, and anyone who is familiar with customer acquisition will recognize the problem.

Basically, the process of getting a new user begins when they are just looking at your software, and you want to lead them in without anything jarring. Any hurdle in their path is going to cause some of them to leave, probably never to return. So you want to make damn sure that there is as little friction in the process as you can manage.

In this case, I was trying to use a service for the purpose it was intended for. And I was just bogged down with a lot of details that had to be completed before I could even test out the service.

In days of old, games used to come with a guide that told you how to actually play the game. I remember going over the Red Alert and Diablo guides with great excitement while the games were installing.

Then the game makers noticed that no one was reading those guides, and new gamers ran into problems playing the games even though the answers were clearly documented in the guide.

The solution was to stop using guides. Instead, the games started incorporating the first few levels as a tutorial, to make sure that gamers actually learned all about the game while playing it.

It is a great way to reduce the friction of playing, and it ensures a smooth transition from having no idea how to play the game to being ready to spend a lot of time blowing pixels up.

Taking this back to the realm of non-gaming software, you really have to identify a few core paths that users are likely to walk down when using your software, and then streamline pretty much everything along those paths to make sure that they don't hit any friction.

It means asking the user to do as little work as possible, choosing defaults for them (until they care enough to change those), having dedicated UI to lead them to the "Wow, this is amazing!" moments. Only after you have actually built up enough positive experience with the users can you require them to do some work.

And note that by "do some work", I'm talking about anything from setting up the logo for the publisher to selecting what categories the data will go under. By work, I'm talking about anything that is holding up the user from doing the major thing that they came to the site for.

If you don't do that, then you are going to end up with a narrower funnel, and the end result is that you'll have fewer users. Not because your service is inferior, or your software sucks. Simply because you failed to prove to the user that you are actually worth the time investment of giving you a fair shot.

time to read 2 min | 321 words

In response to my previous post, Eric had the following comment (well, tweet):

I guess some baskets last longer or some eggs don't seem to rot e.g. C, C++, SQL, Java*, etc

And that is true, in some sense of the word. In other words, there isn't any expected shortage of C or C++ opportunities anywhere in the medium to long term. The problem is that this isn't the same language, framework or environment over time.

In the late 90s / early 2000s I was deep into C++. I read Effective C++ and More Effective C++, I went through the entire STL with a fine-tooth comb, and I was a pretty enthusiastic (and bad) C++ developer. But let's assume that I was a competent C++ dev in the late 90s.

What was the environment like at the time? Pretty much everything was 32 bits, and the STL was still a hotly debated topic. MFC and ATL were all the rage, and making the C++ compiler die via template metaprogramming was extremely common. COM and Windows DNA were everywhere.

Assume that you freeze your knowledge at that time, and skip forward 15 years. Where are you at?

Modern C++ has embraced the STL, then moved beyond it to Boost. In Windows land, MFC and ATL are only used for legacy stuff. COM is still there, but you try to avoid it. And cross-platform code isn't something esoteric.

Now, I stopped doing C++ a few years after getting started with .NET, so I'm pretty sure that the kinds of changes I can see are just the tip of the iceberg.

In short, just because the title of your job didn't change doesn't mean that what you do hasn't changed considerably. And choosing a safe (from a job prospects perspective) programming language and sticking to it, knowing that you can always rely on it, is a pretty good way to commit career suicide.

On the other hand, I know people looking for Cobol programmers...

time to read 4 min | 785 words

This one is a pretty recent one. A customer complained about high memory usage in RavenDB under moderate usage. That was a cause for concern, since we care a lot about our memory utilization.

So we started investigating that, and it turned out that we were wrong, the problem wasn't with RavenDB, it was with the RavenDB Client Library. The customer had a scenario where 100% of the time, after issuing a small number of requests (less than ten), the client process would be using hundreds of MB, for really no purpose at all. The customer had already turned off caching, profiling and pretty much anything else that either they or we could think of.

We got a process dump from them and looked at that, and everything seemed to be fine. The size of the heap was good, and there didn't appear to be any memory being leaked. Our assumption at that point was that there was some sort of native memory leak in their application.

To continue the investigation further, NDAs were required, but we managed to get through that and we finally had a small repro that we could look at ourselves. The fact that the customer was able to create such a thing is really appreciated, because very often we have to work with a lot of missing information. Of course, when we ran this on our own systems, everything was just fine & dandy. There was no issue. We got back to the customer and they told us that the problem would only reproduce in IIS.

And indeed, once we tested inside IIS (vs. using IIS Express), the problem was immediately obvious. That was encouraging (at least from my perspective). RavenDB doesn't contain anything that differentiates between IIS and IIS Express. So if the problem only showed up in IIS, it was not likely to be something that we did.

Once I had a repro of the problem, I sat down to observe what was actually going on. The customer was reading 1,024 documents from the server, and then sending them to the browser. That meant that each of the requests we tested was roughly 5MB in size. Note that this means that we have to:

  • Read the response from the server
  • Decompress the data
  • Materialize it to objects
  • Serialize the data back to send it over the network

That is a lot of memory being used here. And we had never run into any such issues before.

I had a hunch, and I created the following:

[RoutePrefix("api/gc")]
public class GcController : ApiController
{
    [Route]
    public HttpResponseMessage Get()
    {
        GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
        GC.Collect(2);
        return Request.CreateResponse(HttpStatusCode.OK);
    }
}

This allows me to manually invoke a rude GC, and indeed, when running this, memory utilization dropped quite nicely. Of course, that isn't something that you want running in your systems, but it is a very important diagnostic tool.

Next, I tried to do the following:

[Route]
public HttpResponseMessage Get()
{
    // Load a ~5MB JSON file and deserialize it with Json.NET; Item here is a
    // stand-in for the customer's own document type.
    var file = File.ReadAllText(@"D:\5mb.json");
    var deserializeObject = JsonConvert.DeserializeObject<Item>(file);

    return this.Request.CreateResponse(HttpStatusCode.OK, new { deserializeObject.item, deserializeObject.children });
}

This involves no RavenDB code, but it has roughly the same memory pressure semantics as the customer code. Indeed, the problem reproduced quite nicely there as well.

So the issue is about a request that uses a lot of memory (including at least one big buffer), likely causing some fragmentation in the heap that drives memory utilization high. When the system has had enough, it will reclaim all of that, but unless there is active memory pressure, I'm guessing that it would rather leave things as they are until it has to pay that price.
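As an aside, the big buffers involved here go straight to the large object heap (anything over roughly 85,000 bytes does), which is what makes fragmentation plausible. A quick standalone illustration of that, unrelated to the customer's code:

using System;

class LohDemo
{
    static void Main()
    {
        var small = new byte[1024];              // regular gen 0 allocation
        var big = new byte[5 * 1024 * 1024];     // ~5MB buffer, allocated on the LOH

        // LOH objects report as generation 2 from the moment they are allocated.
        Console.WriteLine(GC.GetGeneration(small)); // 0
        Console.WriteLine(GC.GetGeneration(big));   // 2
    }
}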

time to read 3 min | 576 words

On Sunday, there was a news program about how tough it is to find work after 40. It was full of the usual stuff about employers only looking for young people who can work 30 hours days*, and freezing out anyone too old for their taste, etc.

This is a real problem in many cases, and one that I find abhorrent. Not least because I plan to have a long career in my chosen field, and I really don't like the idea of there being a certain age after which I should be shuffled off to do data entry tasks, if that. Especially since that age seems not too far away at all. Currently, the oldest person we have in a development role is over fifty, although most of our team is late twenties to mid-thirties.

So we reached out to one of them, asking to get a CV so we could look at it. And it took me very little time to realize why this person had a hard time finding a job. In particular, while the news program was about people who are unable to find a job, this particular person actually had a job. It just wasn't a job that he was happy with. As far as I understand, he was paid a lot less than what he was used to, comparable to someone just starting out, rather than someone with close to two decades of experience.

Looking at the CV, it was obvious why that was. This particular person's job history included a 15-year stretch in a large municipality, during which he worked mostly on VB6 programs. It has only been in the last couple of years that he started working in .NET.

Now, Microsoft released VB6 in 1998, and announced that it was moving to VB.Net in early 2000, with the .NET framework being released in 2002. By 2005, VB6 was no longer supported, and by 2008 even extended support had run out. So we are talking about 7 – 8 years in which the main tool at his disposal was quite clearly a dead end.

While I have fond memories of VB6, and I'm pretty sure that there is still a lot of software using it, it is not surprising that demand for people with VB6 expertise has already peaked and is currently in a decline that I don't really see changing. Note that this isn't really surprising; over the past decade, you would have had to willfully ignore reality to believe that there is a strong future in VB6.

So we have a person with expertise in obsolete tech, trying to find a job in the market with effectively 1-2 years of experience using C#. It isn’t surprising that he got what is effectively a starter position, even given his age.  It isn’t that his age affected the offered position, it is that it didn’t.

This leads back to the advice I gave previously on the matter of career planning. Saying "I got a nice job" and resting on your laurels is a good way to end up in a marginalized position down the road.

Keeping your skills up to date (ideally as part of your job, but outside of it if that isn't possible) is crucial; otherwise you are the guy with one year of actual experience, repeated many times over.

* Not a typo, it is intentionally stupid.

time to read 3 min | 452 words

In my previous post, I explained an API design that gives the user the option to perform an immediate operation, use the default fire-and-forget, or use an explicit bulk mechanism. The idea is that most operations are small, and that the cost of actually going over the network is going to dominate the cost of the entire operation. In this case, we want to give the user the option of selecting either the "I want to know the operation completed" mode or the "just try your best, I'm fine if there is a failure" mode.

Eli asks:

Trying to understand why this isn't just a scripting case. In your example you loop over CSV and propose an increment call per part which gets batched up and committed outside the user's control. Why not define a JavaScript on the server that takes an array ("a batch") of parts sent over the wire and iterates over it on the server instead? This way the user gets fine grained control over the batch. I'm guessing the answer has something to do with your distributed counter architecture...

If I understand Eli properly, the idea is that we’ll just expose an API like this:

Increment("users/1/visits", 1);

And provide an endpoint where a user can POST JavaScript code that will call this. The idea is that the user will be able to decide whether to call this once, or send a whole batch of updates in one go.

This is certainly an option, but in my considered opinion, it is a pretty bad one. It has nothing to do with the distributed architecture, it has to do with the burden we put on the user. The actual semantics of “go all the way to the server and confirm the operation” vs “let us do a bulk insert kind of thing” are pretty easy. Each of them has a pretty well defined behavior.

But what happens when you want to do an operation per page view? From the point of view of your code, you are making a single operation (incrementing the counters for a particular entity). From the point of view of the system as a whole, you are generating a whole lot of individual requests that would be much better off as a single bulk request.

Having a scripting endpoint gives the user the option of doing that, sure, but then they need to handle:

  • Error recovery
  • Multi threading
  • Flushing on batch size / time
  • Failover

And many more that I’m probably forgetting. By providing the users with the option of making an informed choice about speed vs. safety, we avoid putting the onus of the actual implementation on them.

time to read 3 min | 552 words

In RavenDB 4.0 (yes, that is quite a bit away), we are working on additional data types and storage engines. One of the things that we’ll add, for example, is the notion of gossiping distributed counters. That doesn’t actually matter for our purposes here, however.

What I wanted to talk about today is the problem in making small updates over a network. Consider the counters example. By far the most common operations are some variant of:

counters.Increment(name, 1);

Since the database is remote, we need to send that over the network. But that is a very small operation. Going all the way to the remote database just for that is a big waste of time. Consider that in most systems, we don’t have just a single thread of execution, we have many. Each of them performing their own operations. Allowing each operation to go on its own is a big waste, but what other options can we offer?

There are other considerations as well. While most of the time we'll be making small operations, it is very common to need to do bulk operations too. Maybe you are setting up the system for the first time, or importing data from a file, etc. Making a large number of individual requests will kill any hope of doing this fast. The obvious answer is to use batching: send a lot of values to the server all at once. This reduces the network overhead significantly.

The API for that would be something like:

using(var batch = counters.Advanced.NewBatch())
{
	foreach(var line in File.ReadLines("counters.csv"))
	{
		// Each line is "<counter name>,<increment>"
		var parts = line.Split(',');
		batch.Increment(parts[0], long.Parse(parts[1]));
	}
}

So that is great for big things, but not very helpful for individual operations. We still have to go to the server for each of those.

Of course, we could also use the batch API, but making use of that for an individual operation is… a big waste.

What to do?

The end result we arrived at was that we have three levels of API:

  • counters.Increment(name, 1); – single operation, executed immediately over the network. Guaranteed to succeed or fail and give you the result.
  • counters.Advanced.NewBatch() – batch operation, executed over all of the items in the batch (but not as a single transaction), will let you know if the whole operation succeeded, or if there was an issue with something.
  • counters.Batch.Increment() – the default batch, thread safe, can be utilized by individual requests. This is a fire & forget operation. We increment, and behind the scenes we'll merge all the increments from all the threads and send them in batches to the server.

Note that the last option means that we'll only do batches, so we'll send the data to the server only when enough time has elapsed or we have enough items to send. The idea is that you get to choose, based on the importance of what you are doing.

If you need confirmation that something was successful, use the individual operation. If you just want us to make a best effort, and you don't care if something bad really happened, use the batch option.
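To make that last level concrete, here is a minimal sketch of the shape such a default batch could take on the client side. This is purely illustrative, not the actual RavenDB client implementation; the send delegate is a stand-in for the real network call, and real code would also need the error recovery and failover handling mentioned in the previous post:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;

// Illustrative only: a thread safe, fire & forget counter batch that merges
// increments in memory and flushes them when enough items or time accumulate.
class CounterBatch : IDisposable
{
    private readonly ConcurrentDictionary<string, long> _pending = new ConcurrentDictionary<string, long>();
    private readonly Timer _timer;
    private readonly int _maxItems;
    private readonly Action<IReadOnlyDictionary<string, long>> _send; // stand-in for the network call

    public CounterBatch(Action<IReadOnlyDictionary<string, long>> send,
                        TimeSpan flushInterval, int maxItems = 1024)
    {
        _send = send;
        _maxItems = maxItems;
        _timer = new Timer(_ => Flush(), null, flushInterval, flushInterval);
    }

    public void Increment(string name, long delta = 1)
    {
        // Merge increments from all threads into a single pending value per counter.
        _pending.AddOrUpdate(name, delta, (key, current) => current + delta);
        if (_pending.Count >= _maxItems)
            Flush();
    }

    private void Flush()
    {
        var batch = new Dictionary<string, long>();
        foreach (var name in _pending.Keys)
        {
            long value;
            if (_pending.TryRemove(name, out value) && value != 0)
                batch[name] = value;
        }
        if (batch.Count > 0)
            _send(batch); // fire & forget: failures here are best effort only
    }

    public void Dispose()
    {
        _timer.Dispose();
        Flush();
    }
}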

Note that I’m using the counters example here because it is simple, but the same applies for things like time series, and other stuff that we are currently building.
