Oren Eini, CEO of RavenDB, a NoSQL Open Source Document Database

time to read 3 min | 541 words

One of the distinguishing features of RavenDB is its ability to process large aggregations very quickly. You can ask questions on very large data sets and get the results in milliseconds. This is interesting, because RavenDB isn’t an OLAP database and the kind of questions that we ask can be quite complex.

For example, we have the Products/Recommendations index, which allows us to ask:

For any particular product, find me how many times it was sold, what other products were sold with it, and with what frequency.

The index that manages this is shown here:
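Here is a sketch of how such an index can be defined with the C# client, modeled on the Northwind sample data (treat the class and field names as illustrative, not the exact production index):

```csharp
using System.Linq;
using Raven.Client.Documents.Indexes;

// Minimal stand-ins for the Northwind sample documents (illustrative)
public class Order { public OrderLine[] Lines; }
public class OrderLine { public string Product; }

public class Products_Recommendations :
    AbstractIndexCreationTask<Order, Products_Recommendations.Result>
{
    public class Result
    {
        public string Product;
        public long SalesCount;
        public SoldWithCount[] SoldWith;
    }

    public class SoldWithCount
    {
        public string Product;
        public long SalesCount;
    }

    public Products_Recommendations()
    {
        // Map: one projection per product sold, carrying the other
        // products from the same order alongside it.
        Map = orders =>
            from order in orders
            from line in order.Lines
            select new Result
            {
                Product = line.Product,
                SalesCount = 1,
                SoldWith = order.Lines
                    .Where(l => l.Product != line.Product)
                    .Select(l => new SoldWithCount { Product = l.Product, SalesCount = 1 })
                    .ToArray()
            };

        // Reduce: group by the product and aggregate the related products.
        Reduce = results =>
            from result in results
            group result by result.Product into g
            select new Result
            {
                Product = g.Key,
                SalesCount = g.Sum(x => x.SalesCount),
                SoldWith = g.SelectMany(x => x.SoldWith)
                    .GroupBy(x => x.Product)
                    .Select(sw => new SoldWithCount
                    {
                        Product = sw.Key,
                        SalesCount = sw.Sum(x => x.SalesCount)
                    })
                    .ToArray()
            };
    }
}
```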

The way it works is that we map the orders, emitting a projection for each product, and then add the other products that were sold with the current one. In the reduce, we group by the product and aggregate the related products together.

But I’m not here to talk about the recommendation engine. I wanted to explain how RavenDB processes such indexes. All the information that I’m talking about can be seen in the Map/Reduce visualizer in the RavenDB Studio.

Here is a single entry for this index. You can see that products/11-A was sold 544 times, 108 of them together with products/69-A.

[Screenshot: a single reduce entry for the index, shown in the Map/Reduce visualizer]

Because of the way RavenDB processes Map/Reduce indexes, when we query, we run over the already precomputed results, and there is very little computation cost at query time.
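Concretely, a query over this index from the C# client is just a lookup of the precomputed entry. A sketch, assuming an initialized DocumentStore in `store` and the index class from above:

```csharp
using (var session = store.OpenSession())
{
    // This reads the precomputed reduce entry for the product;
    // no aggregation runs at query time.
    var rec = session
        .Query<Products_Recommendations.Result, Products_Recommendations>()
        .Where(r => r.Product == "products/11-A")
        .FirstOrDefault();
    // rec.SalesCount and rec.SoldWith hold the aggregated numbers
}
```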

Let’s see how RavenDB builds the index. Here is a single order, where three products were sold. You can see that each of them has a very interesting tree structure.

[Screenshot: the per-product aggregation trees for a single order]

Here is how it looks when we zoom into a particular product. You can see how RavenDB aggregates the data. First, in the bottommost page on the right (#596). We aggregate that with the other 367 pages and get intermediate results in page #1410. We then aggregate that again with the intermediate results in page #105127 to get the final tally. In this case, you can see that products/11-A was sold 217,638 times, mostly with products/16-A (30,603 times) and products/72-A (20,603 times).

[Screenshot: zooming into the aggregation tree for products/11-A]

When we have a new order, all we need to do is update the relevant bottommost page and then recurse upward in the tree. In the case we have here, there is a pretty big reduce value and we are dealing with tens of millions of orders. We have three levels in the tree, which means that we’ll need to do three update operations to account for new or updated data. That is cheap, because it means that we have to do very little work to maintain the index.
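To put the cost model in miniature, here is a toy sketch (not RavenDB’s actual storage code) of why an update touches only one page per tree level:

```csharp
// Toy aggregation tree: each node caches the total of its subtree.
// Applying a change at a leaf re-aggregates only the nodes on the
// path to the root, so the cost is the tree depth (three levels here),
// not the number of orders in the data set.
class AggregationNode
{
    public long Total;
    public AggregationNode Parent;

    public void Apply(long delta)
    {
        for (var node = this; node != null; node = node.Parent)
            node.Total += delta;   // one cheap update per level
    }
}
```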

At query time, of course, we don’t really have to do much; all the hard work was already done.

I like this example because it showcases a non-trivial scenario and shows how RavenDB handles it with ease. This kind of non-trivial work tends to be very hard to get working properly, and with RavenDB it is part of my default “let’s do this on the fly” demo.

time to read 2 min | 364 words

RavenDB 5.0 has been released and is now available for download and on the cloud. There are many new changes in this version, but the highlights are:

  • Time Series support – allows you to store time series data and run queries on time series of any size in milliseconds (a quick sketch follows this list).
  • Documents compression – will usually reduce your disk utilization by over 50%.
  • Many indexing improvements, especially with regard to indexing and querying date and time data.
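As a taste of the time series API, here is a minimal sketch with the C# client (the document id, series name, values, and tag are made up for illustration; `store` is an initialized DocumentStore):

```csharp
using (var session = store.OpenSession())
{
    // Append a measurement to the "HeartRates" series on an existing document
    session.TimeSeriesFor("users/1-A", "HeartRates")
        .Append(DateTime.UtcNow, 72d, "watches/fitbit");
    session.SaveChanges();
}

using (var session = store.OpenSession())
{
    // Read the last day of entries back; this stays fast regardless of
    // how large the series grows
    var entries = session.TimeSeriesFor("users/1-A", "HeartRates")
        .Get(DateTime.UtcNow.AddDays(-1), DateTime.UtcNow);
}
```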

As I mentioned, you can upgrade to the new version right now on your own instances and you can deploy new cloud instances with RavenDB 5.0 immediately.

RavenDB 5.0 is backward compatible with RavenDB 4.2: you can shut down your RavenDB 4.2 instance, update the binaries, and start RavenDB 5.0, and everything will work. The other way around will not work, mind you. Once you have run RavenDB 5.0, you cannot go back to RavenDB 4.2.

A RavenDB 5.0 instance can be part of a cluster running other RavenDB 4.2 nodes (and vice versa), which allows you to do rolling migrations and test RavenDB 5.0 in production without committing all the way.

An application using a RavenDB 4.x client will be able to just continue working with a RavenDB 5.0 server, with no change in behavior. That enables you to switch over to the new version without needing to modify the whole stack at once.

For users running RavenDB 4.2, I’ll remind you that this is a Long Term Support (LTS) release and it is perfectly fine to remain on that version for the next year or so.

Users on the cloud that are running RavenDB 4.2 can request an upgrade to RavenDB 5.0 via our support, but for the foreseeable future, we are going to keep users on RavenDB 4.2 unless they ask to be upgraded to RavenDB 5.0.

I’m really happy that we got to this milestone, and I want to take this opportunity to congratulate the team behind RavenDB 5.0 on an excellent job. Under non-trivial circumstances, we have a pretty amazing product shipping, and I am very proud of what we have shipped.

time to read 1 min | 177 words

RavenDB 5.0 is scheduled to release this week.

It was supposed to be released a week or so ago, but we have a phase in the release process which I guess should be called “release the monkeys”. In that phase, we basically gather around the servers and push. The idea is that we try to generate a lot of abuse on the system and see if it will survive the storm of the monkeys.

The version we deployed didn’t survive that test, unfortunately. We had an improperly bounded transaction scope that caused a slow resource leak. Utterly unnoticeable in the grand scheme of things, but enough to cause degradation in resource usage over the course of several days of hard load.
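To illustrate the class of bug (a made-up sketch, not the actual code), all it takes is one code path that opens a transaction scope and never closes it:

```csharp
using System;

class LeakSketch
{
    // An improperly bounded transaction scope: on the early-return path
    // the transaction is never disposed. Each such request leaks a tiny
    // amount of resources; invisible in a quick test, visible after days
    // of hard load.
    static void HandleRequest(Func<IDisposable> openTx, bool nothingToDo)
    {
        var tx = openTx();        // should be: using var tx = openTx();
        if (nothingToDo)
            return;               // tx leaks on this path
        // ... do the actual work ...
        tx.Dispose();
    }
}
```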

The fix itself was simple enough, but that meant that we had to go back to square one and get a new bunch of monkeys to muck around in the servers.

Barring any new surprises, we expect to be able to certify the current 5.0 build as monkey resistant and set it free to the world.

time to read 2 min | 366 words

I’m teaching a cloud course and I gave the students a task to do. I also now have to go through about fifty projects and evaluate them. I might not have thought this one through.

What is interesting is the approaches that I’m seeing to solve the problem.

There is a wide disparity in the amount of code that people write, sometimes by an order of magnitude. The students can choose any language, but in practice they almost all chose either Python or Node. Very few used Java (and wow, that was painful to read). Two people submitted Java code. One of them gave me a single file, under 100 lines of code. The other gave me a project with 12 files (repository, service, controller, etc.) and over 600 lines of code.

Then there were the people who zipped node_modules and sent it over. My poor hard disk… One guy sent a zip file that is 173 MB in size, which I’m afraid to look at (it looks like he included terraform.exe and multiple copies of node_modules!).

At any rate, the reason for this post is that among the ~50 homework assignments, there was one that really shone. Not only did the student explain their reasoning in a clear and concise manner, they were able to look at the problem from another angle, drastically simplify the entire task, and avoid a whole set of common pitfalls. They were also able to do that in about 75 lines of code, and most of that was required boilerplate.

I just had to go and find the docs for some obscure Python library to figure out how someone could write a solution to the problem in one tenth of the code that everyone else did, and do that while being correct, faster (they implemented caching, which very few did), and much simpler to read and understand. I’m impressed enough to write a blog post about it, after all.

I went over about fifty assignments, and this one was a pure nugget that kept me slogging through the rest. There were bugs in the code, mind you, but the architecture and the approach were solid.

time to read 3 min | 452 words

It should come as no surprise that our entire internal infrastructure is running on RavenDB. I wholly believe in the concept of dogfooding, and it has served us very well over the years.

I was speaking to a colleague just now and it occurred to me that it is surprising that we do certain things wrong, intentionally. It is fair to say that we know what the best practices for using RavenDB are, the things that you can do to get the most out of it.

In some of our internal systems, we are doing things in exactly the wrong way. We are doing things that are inefficient in RavenDB. We take the expedient route to implement things. A good example of that is that we have a set of documents that can grow to be multiple MB in size. They are also some of the most commonly changed documents in the system. Proper design would call for breaking them apart to make things easier for RavenDB.

We intentionally modeled things this way. Well, I gave the modeling task to an intern with no knowledge of RavenDB, and then I made things worse for RavenDB in a few cases where he didn’t get it out of shape enough for my needs.

Huh?! I can hear you thinking. Why on earth would we do something like that?

We do this because it serves as an excellent proving ground for misuse of RavenDB. It shows us how the system behaves under non-ideal conditions. Not just when the user is able to match everything to the way RavenDB would like things to be, but the way they are likely to actually build their system, unaware of what is going on behind the scenes and what the optimal solution would be. We want RavenDB to be able to handle that scenario well.

An example that pops to mind was having all the uploads on the system be attachments on a single document. That surfaced an O(N^2) algorithm very deep in the bowels of RavenDB for placing a new attachment. It would be completely invisible in the normal case, because it was fast enough under any normal or abnormal situation that we could think of. But when we started getting high latency from uploads, we realized that adding the 100,002nd attachment to a document required us to scan through the whole list… it was obvious that we needed a fix. (And please, don’t put hundreds of thousands of attachments on a document. It will work (and it is fast now), but it isn’t nice.)
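In miniature, the bug looked something like this (a made-up sketch, not RavenDB’s code): placing each new attachment by scanning the existing list makes N attachments cost O(N^2) overall:

```csharp
using System;
using System.Collections.Generic;

class AttachmentPlacementSketch
{
    readonly List<string> _names = new List<string>();

    // O(N) scan per insert adds up to O(N^2) across N attachments.
    // Unnoticeable with dozens of attachments, painful at 100,002.
    public void AddSlow(string name)
    {
        int i = 0;
        while (i < _names.Count && string.CompareOrdinal(_names[i], name) < 0)
            i++;
        _names.Insert(i, name);
    }

    // Binary search cuts the comparisons to O(log N) per insert; a real
    // fix uses a tree-shaped structure so the insert itself is cheap too.
    public void AddFast(string name)
    {
        int i = _names.BinarySearch(name, StringComparer.Ordinal);
        _names.Insert(i < 0 ? ~i : i, name);
    }
}
```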

Doing the wrong thing on purpose means that we can be sure that when users do the wrong thing accidentally, they get good behavior.

time to read 3 min | 577 words

We got a feature request that we don’t intend to implement, but I thought the reasoning was interesting enough for a blog post. The feature request:

If there is a critical error or major issue with the current state of the database, for instance when the data is not replicated from Node C to Node A due to some errors in the database or network it should send out mail to the administrator to investigate on  the issue. Another example is, if the database not active due to some errors then it should send out mail as well.

On its face, the request is very reasonable. If there is an error, we want to let the administrator know about it, not hide it in some log file. Indeed, RavenDB has the concept of alerts for just that reason, to surface any issues directly to the admin ahead of time. We also have a mechanism in place to alert the admin without having to check the RavenDB Studio manually: SNMP. The Simple Network Management Protocol is designed specifically to enable this kind of monitoring, and RavenDB exposes a lot of state via SNMP that you can act upon in your monitoring system.

Inside your monitoring system, you can define rules that will alert you: send an SMS if disk space is low, email on an alert from RavenDB, etc. The idea of actively alerting the administrator is something that you absolutely want to have.

Having RavenDB send those emails, not so much. RavenDB exposes monitoring endpoints and alerts; it doesn’t act or report on them. That is the role of your actual monitoring system. You can set up Zabbix or talk to your Ops team, which likely already has one installed.
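For example, once SNMP is enabled on the server, any standard monitoring stack can poll RavenDB with off-the-shelf tooling, roughly like this (the OID and community string here are illustrative assumptions; the actual OID list is in the RavenDB documentation):

```
# Poll a RavenDB metric with standard net-snmp tooling
snmpget -v 2c -c ravendb my-ravendb-server:161 1.3.6.1.4.1.45751.1.1.1.1
```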

Let’s talk about the reason that RavenDB isn’t a monitoring system.

Sending email is actually really hard. What sort of email provider do you use? What options are required to set up a connection? Do you need an X509 certificate or a user/pass combo? What happens if we can’t send the email? That is leaving aside the fact that actually getting the email delivered is hard enough. Spam, SPF, DKIM, and DMARC are just where things start. In short, that is a lot of complications that we’ll have to deal with.

For that matter, what about SMS integration? Surely that would also help. But no one uses SMS today; we want WhatsApp integration, and Telegram, and… You get the point.

Then there are social issues. How will we decide if we need to send an email or not? There should be some policy, and ways to configure it. If we don’t have that, we’ll end up sending either too many emails (which will get flagged / ignored) or too few (why aren’t you telling me about XYZ issue?).

A monitoring system is built to handle those sorts of issues. It is able to aggregate reports, give you a single email with the current status, open issues for you to fix, and do a whole lot more that is simply outside the purview of RavenDB. There is also the most critical alert of all: if RavenDB is down, it will not be able to report that it is down, because it is down.

The proper way to handle this is to setup integration with a monitoring system, so we’ll not be implementing this feature request.
