RavenDB allows you to query your data freely and cheaply. It is one of those things that makes or breaks a database, after all.
After over a decade of working with Lucene as our backend indexing engine, we built Corax, a new querying & indexing engine that offers far better performance.
Building an indexing engine is a humongous task. It took us close to ten years from the first line of code to Corax actually shipping. But I’m really happy with the way it turned out.
Building a query engine is a big task, and we focused primarily on making the most common queries fast. The issue at hand is that RavenDB has many features, and we don’t have infinite time. So for the less common features, we typically implemented them as a straightforward port from whatever Lucene is doing.
One such feature is facets. Let’s say that I want to buy a jacket. There are way too many choices, so I can use a faceted query to help me narrow it down. Here is what this looks like in code:
from Products
where search(Description, "suit jacket")
select facet(Brand),
       facet(Price < 200,
             Price between 200 and 400,
             Price between 400 and 800,
             Price > 800)
And here is what this looks like as a website:
I mentioned that we implemented some features as a straightforward port from Lucene, right?
We did that because RavenDB offers very rich querying semantics, and we couldn’t spend the time to craft every single bit upfront. The idea was that we would get Corax out the door and be faster in most common scenarios, and at least at parity with everything else.
It works for most scenarios, but not all of them. We recently got a query similar to the one above that was slower in Corax than in Lucene. That is usually good news since we have far more optimization opportunities in Corax. Lucene (and especially our usage of it) has already been through the wringer so many times that it is really hard to eke out any more meaningful performance gains. Corax’s architecture, on the other hand, gives us many more chances to do so.
In the case of facets, the way Lucene handles that is roughly similar to this:
def brand_facet(matches: List[int]):
    facet = dict()
    for term, docsForTerm in reader.terms("Brand"):
        facet[term] = count_intersect(matches, docsForTerm)
    return facet
Given the results of the query, run over all the terms for a particular field and intersect the documents for every term with the matches for the query. Lucene is able to do that efficiently because it materializes all its data into managed memory, which has costs associated with it.
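The count_intersect call is where the actual counting happens. Assuming both inputs are sorted lists of document ids, a merge-style intersection count like the following sketch captures the idea (this is my illustration in C#, not Lucene's actual code):

using System.Collections.Generic;

static int CountIntersect(List<int> matches, List<int> docsForTerm)
{
    // Both lists are assumed to be sorted document ids, so we can walk
    // them in lockstep and count the ids that appear in both.
    int count = 0, i = 0, j = 0;
    while (i < matches.Count && j < docsForTerm.Count)
    {
        if (matches[i] == docsForTerm[j]) { count++; i++; j++; }
        else if (matches[i] < docsForTerm[j]) i++;
        else j++;
    }
    return count;
}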
The benefit of this approach is that many operations are simple, which is great. Corax, on the other hand, does not materialize all its data into managed memory. It uses persistent data structures on disk (leading to reduced memory usage and faster responses on the first query).
The advantage we have with Corax is that the architecture allows us to optimize a lot more deeply. In this case, however, it turned out to be unnecessary, as we are already keeping track of all the relevant information. We just needed to re-implement faceted search in a Corax-native manner.
You can see the changes here. But here is the summary. For a dataset with 10,000,000 records, with hundreds of brands to facet on, we get:
Yes, that isn’t a mistake. Corax is so fast here that you can barely observe it 🙂.
A customer called us about some pretty weird-looking numbers in their system:
You’ll note that the total number of entries in the index does not match across all the nodes: node C has one less entry than the rest of the system.
At the same time, all the indicators are green. As far as the administrator can tell, there is no issue, except for the number discrepancy. Why is it behaving in this manner?
Well, let’s zoom out a bit. What are we actually looking at here? We are looking at the state of a particular index in a single database within a cluster of machines. When examining the index, there is no apparent problem. Indexing is running properly, after all.
The actual problem was a replication issue, which prevented replication from proceeding to the third node. When looking at the index status, you can only see that the entry count is different.
When we zoom out and look at the state of the cluster, we can see this:
There are a few things that I want to point out in this scenario. The problem here is a pretty nasty one. All nodes are alive and well, they are communicating with each other, and any simple health check you run will give good results.
However, there is a problem that prevents replication from properly flowing to node C. The actual details aren’t relevant (a bug that we fixed, to tell the complete story). The most important aspect is how RavenDB behaves in such a scenario.
The cluster detected this as a problem, marked the node as problematic, and raised the appropriate alerts. As a result of this, clients would automatically be turned away from node C and use only the healthy nodes.
From the customer’s perspective, the issue was never user-visible since the cluster isolated the problematic node. I had a hand in the design of this, and I wrote some of the relevant code. And I’m still looking at these screenshots with a big sense of accomplishment.
This stuff isn’t easy or simple. But to an outside observer, the problem presented itself as "why am I looking at funny numbers in the index state in the admin panel?" rather than "why am I serving the wrong data to my users?"
The design of RavenDB is inherently paranoid. We go to a lot of trouble to ensure that even if you run into problems, even if you encounter outright bugs (as in this case), the system as a whole would know how to deal with them and either recover or work around the issue.
As you can see, live in production, it actually works and does the Right Thing for you. Thus, I can end this post by saying that this behavior makes me truly happy.
We recently got a support request from a user in which they had the following issue:
We have an index that is using way too much disk space. We don’t need to search the entire dataset, just the most recent documents. Can we do something like this?
from d in docs.Events
where d.CreationDate >= DateTime.UtcNow.AddMonths(-3)
select new { d.CreationDate, d.Content };
The idea is that only documents from the past 3 months would be indexed, while older documents would be purged from the index but still retained.
The problem is that this is a full-text search index, and the data size required to perform a full-text search across the entire dataset is higher than the cost of just storing the documents (which can be easily compressed).
This is a great example of an XY problem. The request was to allow access to the current date during the indexing process so the index could filter out old documents. However, that is actually something that we explicitly prevent. The problem is that the current date isn’t really meaningful when we talk about indexing. The indexing time isn’t really relevant for filtering or operations, since it has no association with the actual data.
The date of a document and the time it was indexed are completely unrelated. I might update a document (and thus re-index it) whose CreationDate is far in the past. That would filter it out from the index. However, if we didn’t update the document, it would be retained indefinitely, since the filtering occurs only at indexing time.
Going back to the XY problem, what is the user trying to solve? They don’t want to index all data, but they do want to retain it forever. So how can we achieve this with RavenDB?
Data Archiving in RavenDB
One of the things we aim to do with RavenDB is ensure that we have a good fit for most common scenarios, and archiving is certainly one of them. In RavenDB 6.0 we added explicit support for Data Archiving.
When you save a document, all you need to do is add a metadata element, @archive-at, and you are set.
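For example, here is a rough sketch of scheduling a document for archival from the C# client (the server URL, database name, and document id are placeholders, and the timestamp format is assumed to be ISO 8601 in UTC):

using Raven.Client.Documents;

using var store = new DocumentStore
{
    Urls = new[] { "http://localhost:8080" },
    Database = "Shop"
};
store.Initialize();

using var session = store.OpenSession();
var order = session.Load<object>("orders/830-A");

// Ask RavenDB to archive this document on Nov 1st, 2024.
session.Advanced.GetMetadataFor(order)["@archive-at"] = "2024-11-01T00:00:00.0000000Z";
session.SaveChanges();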
This document is set to be archived on Nov 1st, 2024. What does that mean?
From that day on, RavenDB will automatically mark it as an archived document, meaning it will be stored in a compressed format and excluded from indexing by default.
You can decide (on a per-index basis) whether to include archived documents in the index. This gives you a very high level of flexibility without requiring much manual effort.
In short, for this scenario, you can simply tell RavenDB when to archive the document and let RavenDB handle the rest. RavenDB will do the right thing for you.
I’m currently deep in the process of modifying the internals of Voron, trying to eke more performance out of the system. I’m making great progress, but I’m also touching parts of the code that haven’t even been looked at for a long time.
In other words, I’m mucking about with the most stable and most critical portions of the storage engine. It’s a lot of fun, and I’m actually seeing some great results, but it is also nerve-wracking.
We have enough tests that I’ve great confidence I would catch any actual stability issues, but the drive back toward a fully green build has been a slog.
The process is straightforward:
Change something.
Verify that it works better than before.
Run the entire test suite (upward of 30K tests) to see if there are any breaks.
The last part can be frustrating because it takes a while to run this sort of test suite. That would be bad enough, but some of the changes I made were things like marking a piece of memory that used to be read/write as read-only. Now any access to that memory would result in an access violation.
I fixed those in the code, of course, but we have a lot of tests, including some tests that intentionally corrupt data to verify that RavenDB behaves properly under those conditions.
One such test writes garbage to the RavenDB file, using read-write memory. The idea is to verify that the checksum matches on read and abort early. Because that test directly modifies what is now read-only memory, it generates a crash due to a memory access violation. That doesn’t just result in a test failure, it takes the whole process down.
I’ve gotten pretty good at debugging those sorts of issues (--blame-crash is fantastic) and was able to knock quite a few of them down and get them fixed.
And then there was this test, which uses encryption-at-rest. That test started to fail after my changes, and I was pretty confused about exactly what was going on. When trying to read data from disk, it would follow a pointer to an invalid location. That is not supposed to happen, obviously.
Looks like I have a little data corruption issue on my hands. The problem is that this shouldn’t be possible. Remember how we validate the checksum on read? When using encryption-at-rest, we are using a mechanism called AEAD (Authenticated Encryption with Associated Data). That means that in order to successfully decrypt a page of data from disk, it must have been cryptographically verified to be valid.
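To see what AEAD buys you, here is a minimal sketch using AES-GCM from .NET (this is just an illustration of the property, not the actual algorithm or parameters RavenDB uses): flipping a single bit in the ciphertext makes decryption fail outright instead of returning garbage.

using System.Security.Cryptography;

byte[] key = RandomNumberGenerator.GetBytes(32);
byte[] nonce = RandomNumberGenerator.GetBytes(12);
byte[] plaintext = "page data"u8.ToArray();
byte[] ciphertext = new byte[plaintext.Length];
byte[] tag = new byte[16];

using var aes = new AesGcm(key, 16);
aes.Encrypt(nonce, plaintext, ciphertext, tag);

ciphertext[0] ^= 1; // tamper with a single bit

try
{
    aes.Decrypt(nonce, ciphertext, tag, new byte[plaintext.Length]);
}
catch (CryptographicException)
{
    // The authentication tag no longer matches, so we never get
    // "successfully decrypted" garbage back.
}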
My test results showed, pretty conclusively, that I was generating valid data and then encrypting it. The next stage was to decrypt the data (verifying that it was valid), at which point I ended up with complete garbage.
RavenDB trusts that since the data was properly decrypted, it is valid and tries to use it. Because the data is garbage, that leads to… excitement. Once I realized what was going on, I was really confused. I’m pretty sure that I didn’t break 256-bit encryption, but I had a very clear chain of steps that led to valid data being decrypted (successfully!) to garbage.
It was also quite frustrating to track because any small-stage test that I wrote would return the expected results. It was only when I ran the entire system and stressed it that I got this weird scenario.
I started practicing my Fields Medal acceptance speech while digging deeper. Something here had to be wrong. It took me a while to figure out what was going on, but eventually I tracked it down to registering for the TransactionCommit event when we open a new file.
The idea is that when we commit the transaction, we’ll encrypt all the data buffers and then write them to the file. We register for an event to handle that, and we used to do that on a per-file basis. My changes, among other things, moved that logic to apply globally.
As long as we were writing to a single file, everything just worked. When we had enough workload to need a second file, we would encrypt the data twice and then write it to the file. Upon decryption, we would successfully decrypt the data but would end up with still encrypted data (looking like random fluff).
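To make the shape of the bug concrete, here is a stripped-down sketch (every name here is hypothetical, this is not the actual Voron code): the commit notification became global, but the encryption handler was still registered once per opened file, so two open files meant two subscriptions and the buffer got encrypted twice.

using System;

static class Journal
{
    // After the refactor this event is effectively global, shared by all files.
    public static event Action<byte[]>? OnTransactionCommit;

    public static void OpenFile(string path)
    {
        // Per-file registration: opening a second file adds a second handler.
        OnTransactionCommit += EncryptInPlace;
    }

    public static void Commit(byte[] buffer)
    {
        // With two files open, the same buffer is encrypted twice here.
        OnTransactionCommit?.Invoke(buffer);
    }

    static void EncryptInPlace(byte[] buffer) { /* encrypt the buffer in place */ }
}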
The fix was simply moving the event registration to the transaction level, not the file level. I committed my changes and went back to the unexciting life of bug-fixing, rather than encryption-breaking and math-defying hacks.
I was talking to a colleague about a particular problem we are trying to solve. He suggested that we solve the problem using a particular data structure from a recently published paper. As we were talking, he explained how this data structure works and how that should handle our problem.
The solution was complex and it took me a while to understand what it was trying to achieve and how it would fit our scenario. And then something clicked in my head and I said something like:
Oh, that is just an epoch-based, copy-on-write B+Tree with a single producer and concurrent readers?
If this sounds like nonsense to you, it is fine. Those are very specific terms that we are using here. The point of such a discussion is that this sort of jargon serves a very important purpose. It allows us to talk with clarity and intent about fairly complex topics, knowing that both sides have the same understanding of what we are actually talking about.
The idea is that we can elevate the conversation and focus on the differences between what the jargon specifies and the topic at hand. This is abstraction at the conceptual level, where we can zoom out of a lot of details and still communicate our intent with high accuracy.
Being able to discuss something at this level is hugely important because we can convey complex ideas easily. Once I managed to put what he was suggesting in context that I could understand, we were able to discuss the pros and cons of this data structure for the scenario.
I do appreciate that the conversation basically stopped making sense to anyone who isn’t already well-versed in the topic as soon as we were able to (from my perspective) clearly and effectively communicate.
“When I use a word,” Humpty Dumpty said in rather a scornful tone, “it means just what I choose it to mean — neither more nor less.”
Clarity of communication is a really important aspect of software engineering. Being able to explain, hopefully in a coherent fashion, why the software is built the way it is and why the code is structured just so can be really complex. Leaning on existing knowledge and understanding can make that a lot simpler.
There is also another aspect. When using jargon like that, it is clear when you don’t know something. You can go and research it. The mere fact that you can’t understand the text tells you both that you are missing information and where you can find it.
For software, you need to consider two scenarios: writing code today and explaining how it works to your colleagues, and looking at code that you wrote ten years ago and trying to figure out what is going on there.
In both cases, I think that this sort of approach is a really useful way to convey information.
“This is Old Code” is a programmer’s idiom meaning “There Be Dragons”. The term “Legacy Code” is a nice way to say “Don’t make me go there”.
Those are very strange statements when you think about it. Code is code, just ones & zeros stored on a disk somewhere. It doesn’t go bad over time.
When you write a line of code, it doesn’t have an expiration date, after all. For food, it makes sense, there are bacteria and such that would make it go bad. But what is it about old code that is so problematic?
I want to take a look at a couple of examples of old code and examine how they stood the test of time. I chose those two projects because there has been no activity on either project since about 2014 or so.
No meaningful activity or changes for the past decade is a great place to start looking for code rot. Note that I’m not looking at the age of a codebase, but whether it was put out to pasture long enough to exhibit code rot issues.
Rhino.Mocks is a mocking framework for .NET that I spent close to a decade working on. It was highly popular for several years and was quite capable. The vast majority of the code, written about 15 years ago, is now frozen, and I haven’t touched it since around 2016.
I was able to clone the Rhino Mocks repository, run the build script and the tests in a single command. However… trying to actually use this in a modern system would result in an error similar to this one:
Method not found: 'System.Reflection.Emit.AssemblyBuilder System.AppDomain.DefineDynamicAssembly(System.Reflection.AssemblyName, System.Reflection.Emit.AssemblyBuilderAccess)'.
Internally, Rhino Mocks does dynamic code generation, which relies on very low level APIs. Apparently, these APIs are not consistent between .NET Framework and .NET Core / the new .NET. To get Rhino Mocks working on the current version of .NET, we would need to actually fix those issues.
That would require someone who understands how dynamic code generation and IL emitting work. I remember facing a lot of InvalidProgramException errors doing that kind of work in the past; it isn’t a generally applicable skill.
ALICE is a tool for checking the crash correctness of databases and similar applications. It has a really interesting paper associated with it and was used to find several consistency issues with many systems (from databases to Git and Mercurial). The code was last touched in 2015 but the actual work appears to have happened just over ten years ago.
ALICE made quite a splash when it came out, and many projects ran it against their own code to see if there were any issues with their usage of the file system APIs.
Trying to run ALICE today, you’ll run into several problems. It uses Python 2.x, which is no longer supported. Moving it to Python 3.x was not a big deal, but a much bigger problem is that ALICE is very closely tied to the syscalls of the kernel (it monitors them to see how the application uses the file system API).
Since ALICE was released, new syscalls have been introduced, and the actual I/O landscape has changed quite a bit (for example, with io_uring). Making it work, even for a relatively small test case, was not a trivial task.
The most interesting aspect of this investigation was not the particular problems that I found, but actually figuring out what is the process of addressing them. Just updating the code to the latest version is a mechanical process that is pretty easy.
Updating the actual behavior, however, would require a high degree of expertise. Furthermore, it would also need a good understanding and insight into the codebase and its intended effects. A codebase that hasn’t been touched in a long while is unlikely to have such insight.
When we talk about a codebase rotting, we aren’t referring to the source files picking up viruses or the like; we are discussing the loss of information about what the code is actually doing. Even worse, even if we can follow what the code is doing, understanding how to modify it is a high-complexity task.
What about ongoing projects? Projects that have continuous updates and dedicated team members associated with them. It turns out that they can rot as well. Here is an example taken from the RavenDB codebase. This is a pretty important method as it adds an item to a B+Tree, which is quite a common (and important) operation in a database:
You can see that this isn’t much of a function, most of the behavior happens elsewhere. However, you can see that this code has been around for a while. It was modified by four different people over the span of a decade. It is also pretty stable code, in terms of the number of changes that happened there.
This is a small function, but you can see it pretty clearly when you are looking at the code at large. There are whole sections that are just… there. They are functional and work well, and no one needs to touch them for a very long period of time. Occasionally, we make minor changes, but for the most part, they are not touched much at all.
How does that play into the notion of code rot? The code wouldn’t suffer as badly as the examples above, of course, since it is still being run and executed on an ongoing basis. However, the understanding of the code is definitely diminished.
The question is, do we care? Those are the stable parts, the ones we don’t need to touch. Until we do, that is. And what happens then?
Just making changes in our codebase for the sake of making changes is a bad idea. But going into the codebase and leaving it in a better state than before is a good practice. This helps ensure it doesn’t become a daunting ‘there be dragons’ scenario.
I usually talk about the things I do that were successful. Today I want to discuss something that I tried but failed at. Documenting failed approaches is just as important as documenting the successes, though far less enjoyable.
In order to explain the failure, I need to get a bit deeper into how computers handle memory. There is physical memory, the RAM sticks that you have in your machine, and then there is how the OS and CPU present that memory to your code. Usually, the abstraction is quite seamless, and we don’t need to pay attention to it.
Occasionally, we can take advantage of this model. Consider the following memory setup, showing a single physical memory page that was mapped in two different locations:
In this case, it means that you can do things like this:
In other words, because the two virtual pages point to the same physical page in memory, we can modify memory in one location and see the changes in another. This isn’t spooky action at a distance, it is simply the fact that the memory addresses we use are virtual and they point to the same place.
Note that in the image above, I modified the data using the pointer to Page 1 and then read it from Page 2. The Memory Management Unit (MMU) in the CPU can do a bunch of really interesting things because of this. You’ll note that each virtual page is annotated with an access permission.
In this case, the second page is marked as Copy on Write. That means that when we read from this page, the MMU will happily read the data from the physical page it is pointed to. But when we write, the situation is different.
The MMU will raise an exception to the operating system, telling it that a write was attempted on this page, which is forbidden. At this point, the OS will allocate a new physical page, copy the data to it, and then update the virtual address to point to the new page. Here is what this looks like:
Now we have two distinct mappings. A write to either one of them will not be reflected on the other. Here is what this looks like in code:
*page1 = '1'; // at this point both mappings still share the same physical page
printf("Page1: %c, Page2: %c\n", *page1, *page2);
// output: Page1: 1, Page2: 1
*page2 = '2'; // force the copy on write to occur
printf("Page1: %c, Page2: %c\n", *page1, *page2);
// output: Page1: 1, Page2: 2
As long as the modifications happened through the first page address (the orange one in the image), there was no issue and any change would be reflected in both pages. When we make a modification to the second page (the green one in the image), the OS will create a new physical page and effectively split them forever.
Changes made to either page will only be reflected in that page, not both, since they aren’t sharing the same page.
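The same behavior is easy to reproduce from managed code. Here is a small sketch using .NET's MemoryMappedFile with one read/write view and one copy-on-write view over the same backing memory (the map size and values are arbitrary):

using System;
using System.IO.MemoryMappedFiles;

using var mmf = MemoryMappedFile.CreateNew(null, 4096);
using var shared = mmf.CreateViewAccessor(0, 4096, MemoryMappedFileAccess.ReadWrite);
using var cow = mmf.CreateViewAccessor(0, 4096, MemoryMappedFileAccess.CopyOnWrite);

shared.Write(0, (byte)'1');
Console.WriteLine($"{(char)shared.ReadByte(0)} {(char)cow.ReadByte(0)}"); // 1 1

cow.Write(0, (byte)'2'); // triggers the copy: the CoW view now has its own private page
Console.WriteLine($"{(char)shared.ReadByte(0)} {(char)cow.ReadByte(0)}"); // 1 2

shared.Write(0, (byte)'3'); // no longer visible through the CoW view
Console.WriteLine($"{(char)shared.ReadByte(0)} {(char)cow.ReadByte(0)}"); // 3 2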
Note that this behavior applies at page granularity. What happens if I have a buffer that is 1GB in size and I create a copy-on-write mapping on top of it?
The amount of physical memory that I would consume is still just 1GB.
In fact, I effectively get a memcpy() that completes very quickly, since I’m not actually copying anything. And for all intents and purposes, it works. I can change the data through the second buffer, and it will not show up in the first buffer. Of particular note is that when I modify the data on the second buffer, only a single page is changed. Here is what this looks like:
So instead of having to copy 1GB all at once, we map the buffer again as copy on write, and we can get a new page whenever we actually modify our “copy” of the data.
So far, this is great, and it is heavily used for many optimizations. It is also something that I wanted to use to implement cheap snapshots of a potentially large data structure, building directly on this technique.
Here is the kind of code that I want to write:
var list = new CopyOnWriteList();
list.Put(1);
list.Put(2);
var snapshot1 = list.CreateSnapshot();
list.Put(3);
var snapshot2 = list.CreateSnapshot();
list.Put(4);
And the idea is that I’ll have (at the same time) the following:
list:       1, 2, 3, 4
snapshot1:  1, 2
snapshot2:  1, 2, 3
I want to have effectively unlimited snapshots, and the map may contain a large amount of data. In graphical form, you can see it here:
We started with Page 1, created a Copy on Write mapping of it as Page 2, modified Page 2 (breaking the Copy on Write), and then attempted to create a Copy on Write mapping on top of Page 2. That turns out to be a problem.
Take a look at the APIs we have for creating a copy-on-write mapping:
MapViewOfFile(hMapFile, FILE_MAP_COPY, 0, 0, 4096); // windows
mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_PRIVATE, shm_fd, 0); // linux
A key aspect of both APIs is that we need to provide a source for the Copy-on-Write operation, which means we can only create such a mapping from an explicit source. We cannot perform a Copy-on-Write on top of a page that was already mapped as copy-on-write, because there is nothing left to refer to: I simply don’t have a source that I can hand to this sort of mapping.
I tried being clever and wrote some code to work around this on Linux. There, you can use the special file /proc/self/mem to refer to your own memory using file I/O. That means that I can get a file descriptor for my own memory, which provides a source for my copy-on-write operation.
I was really excited when I realized that this was a possibility. I spent a lot of time trying to figure out how I could do the same on Windows. However, when I actually ran the code on Linux, I realized that this doesn’t work.
The mmap() call will return ENODEV when I try that. It looks like this isn’t a supported action.
Linux has another call that looks almost right, mremap(), but that either zeroes out the new region or sets up a userfaultfd handler for it, so it can’t serve my needs.
This is quite annoying since we are almost there. All the relevant pieces are available; if we had a way to tell the kernel to create the mapping, everything else should just work from there.
Anyway, this is my tale of woe, trying (and failing) to create a snapshot-based system using the Memory Management Unit. Hopefully, you’ll either learn something from my failure or let me know that there is a way to do this…
Reading code is a Skill (with a capital letter, yes) that is really important for developers. You cannot be a good developer without it.
Today I want to talk about one aspect of this. The ability to go into an unfamiliar codebase and extract one piece of information out. The idea is that we don’t need to understand the entire system, grok the architecture, etc. I want to understand one thing about it and get away as soon as I can.
For example, you know that project Xyz is doing some operation, and you want to figure out how this is done. So you need to look at the code and figure that out, then you can go your merry way.
LMDB is an embedded database engine (similar to Voron, and in fact, Voron is based on some ideas from LMDB) written in C. If you are interested in it, I wrote 11 posts going through every line of code in the project.
So I’m familiar with the project, but the last time I read the code was over a decade ago. From what I recall, the code is dense. There are about 11.5K lines of code in a single file, implementing the entire thing.
The first thing to do is find the relevant section in the code. I started by searching for the WriteFile() function, the Win32 API to write. The first occurrence of a call to this method is in the mdb_page_flush function.
I look at this code, and… there isn’t really anything there. It is fairly obvious and straightforward code (to be clear, that is a compliment). I was expecting to see a trick there. I couldn’t find it.
That meant either the code had a gaping hole and potential data corruption (highly unlikely) or I was missing something. That led me to a long trip of trying to distinguish between documented guarantees and actual behavior.
The documentation for MapViewOfFile is pretty clear:
A mapped view of a file is not guaranteed to be coherent with a file that is being accessed by the ReadFile or WriteFile function.
I have had my own run-ins with this behavior, which were super confusing. This means that I had experimental evidence to say that this is broken. But it didn’t make sense: there was no code in LMDB to handle it, and this is pretty easy to trigger.
It turns out that while the documentation is pretty broad about not guaranteeing the behavior, the actual issue only occurs if you are working with remote files or using unbuffered I/O.
If you are working with local files and buffered I/O (which is 99.99% of the cases), then you can rely on this behavior. I found some vague references to this, but that wasn’t enough. There is this post that is really interesting, though.
I pinged Howard Chu, the author of LMDB, for clarification, and he was quick enough to assure me that yes, my understanding was (now) correct. On Windows, you can mix memory map operations with file I/O and get the right results.
The documentation appears to be a holdover from Windows 9x, with the NT line always being able to ensure coherency for local files. This is a guess about the history of documentation, to be honest. Not something that I can verify.
I had the wrong information in my head for over a decade. I did not expect this result when I started this post, I was sure I would be discussing navigating complex codebases. I’m going to stand in the corner and feel upset about this for a while now.
In my previous post, I explained what we are trying to do. Create a way to carry a dictionary between transactions in RavenDB, allowing one write transaction to modify it while all other read transactions only observe the state of the dictionary as it was at the publication time.
I want to show a couple of ways I tried solving this problem using the built-in tools in the Base Class Library. Here is roughly what I’m trying to do:
IEnumerable<object> SingleDictionary()
{
    var dic = new Dictionary<long, object>();
    var random = new Random(932);
    var v = new object();
    // number of transactions
    for (var txCount = 0; txCount < 1000; txCount++)
    {
        // operations in transaction
        for (int opCount = 0; opCount < 10_000; opCount++)
        {
            dic[random.NextInt64(0, 1024 * 1024 * 1024)] = v;
        }
        yield return dic; // publish the dictionary
    }
}
As you can see, we are running a thousand transactions, each of which performs 10,000 operations. We “publish” the state of the transaction after each time.
This is just to set up a baseline for what I’m trying to do. I’m focusing solely on this one aspect of the table that is published. Note that I cannot actually use this particular code: the dictionary is both mutable and shared across threads, and I cannot allow that.
The easiest way to go about this is to just clone the dictionary. Here is what this would look like:
IEnumerable<object> ClonedDictionary()
{
    var dic = new Dictionary<long, object>();
    var random = new Random(932);
    var v = new object();
    // number of transactions
    for (var txCount = 0; txCount < 1000; txCount++)
    {
        // operations in transaction
        for (int opCount = 0; opCount < 10_000; opCount++)
        {
            dic[random.NextInt64(0, 1024 * 1024 * 1024)] = v;
        }
        // publish the dictionary
        yield return new Dictionary<long, object>(dic);
    }
}
This is basically the same code, but when I publish the dictionary, I’m going to create a new instance (which will be read-only). This is exactly what I want: to have a cloned, read-only copy that the read transactions can use while I get to keep on modifying the write copy.
The downside of this approach is twofold: it generates a lot of allocations, and the more items there are in the table, the more expensive it is to copy.
I can try using the ImmutableDictionary in the Base Class Library, however. Here is what this would look like:
IEnumerable<object> ClonedImmutableDictionary()
{
    var dic = ImmutableDictionary.Create<long, object>();
    var random = new Random(932);
    var v = new object();
    // number of transactions
    for (var txCount = 0; txCount < 1000; txCount++)
    {
        // operations in transaction
        for (int opCount = 0; opCount < 10_000; opCount++)
        {
            dic = dic.Add(random.NextInt64(0, 1024 * 1024 * 1024), v);
        }
        // publish the dictionary
        yield return dic;
    }
}
The benefit here is that the act of publishing is effectively a no-op. Just send the immutable value out to the world. The downside of using immutable dictionaries is that each operation involves an allocation, and the actual underlying implementation is far less efficient as a hash table than the regular dictionary.
I can try to optimize this a bit by using the builder pattern, as shown here:
IEnumerable<object> BuilderImmutableDictionary()
{
    var builder = ImmutableDictionary.CreateBuilder<long, object>();
    var random = new Random(932);
    var v = new object();
    // number of transactions
    for (var txCount = 0; txCount < 1000; txCount++)
    {
        // operations in transaction
        for (int opCount = 0; opCount < 10_000; opCount++)
        {
            builder[random.NextInt64(0, 1024 * 1024 * 1024)] = v;
        }
        // publish the dictionary
        yield return builder.ToImmutable();
    }
}
Now we only pay the immutable cost once per transaction, right? However, the underlying implementation is still an AVL tree, not a proper hash table. This means that not only is publishing the state more expensive, but we are now slower for reads as well. That is not something that we want.
The BCL recently introduced a FrozenDictionary, which is meant to be super efficient for a really common case of dictionaries that are accessed a lot but rarely written to. I delved into its implementation and was impressed by the amount of work invested into ensuring that this will be really fast.
Let’s see how that would look for our scenario, shall we?
IEnumerable<object> FrozenDictionary()
{
    var dic = new Dictionary<long, object>();
    var random = new Random(932);
    var v = new object();
    // number of transactions
    for (var txCount = 0; txCount < 1000; txCount++)
    {
        // operations in transaction
        for (int opCount = 0; opCount < 10_000; opCount++)
        {
            dic[random.NextInt64(0, 1024 * 1024 * 1024)] = v;
        }
        // publish the dictionary
        yield return dic.ToFrozenDictionary();
    }
}
The good thing is that we are using a standard dictionary on the write side and publishing it once per transaction. The downside is that we need to pay a cost to create the frozen dictionary that is proportional to the number of items in the dictionary. That can get expensive fast.
After seeing all of those options, let’s check the numbers. The full code is in this gist.
I executed all of those using Benchmark.NET; let’s see the results.
Method                               Mean            Ratio
SingleDictionaryBench                7.768 ms        1.00
BuilderImmutableDictionaryBench      122.508 ms      15.82
ClonedImmutableDictionaryBench       176.041 ms      21.95
ClonedDictionaryBench                1,489.614 ms    195.04
FrozenDictionaryBench                6,279.542 ms    807.36
ImmutableDictionaryFromDicBench      46,906.047 ms   6,029.69
Note that the difference in speed is absolutely staggering. The SingleDictionaryBench isn’t a real contender, of course; it just fills a dictionary directly, with no additional cost, and serves as the baseline. The cost for the BuilderImmutableDictionaryBench is more reasonable, given what it has to do.
Just looking at the benchmark result isn’t sufficient. I implemented every one of those options in RavenDB and ran them under a profiler. The results are quite interesting.
Here is the version I started with, using a frozen dictionary. On paper, that is the right data structure for what I want: I have one thread mutating the data, which then publishes the frozen results for others to use.
However, take a look at the profiler results! Don’t focus on the duration values, look at the percentage of time spent creating the frozen dictionary. That is 60%(!) of the total transaction time. That is… an absolutely insane number.
It is clear that the frozen dictionary isn’t suitable for our needs here; the ratio between reading and writing isn’t sufficient to justify the cost. Part of the design of FrozenDictionary is that it is more expensive to create than a normal dictionary, precisely because it tries so hard to optimize read performance.
What about the ImmutableDictionary? Well, that is a complete non-starter. It is taking close to 90%(!!) of the total transaction runtime. I know that I called the frozen numbers insane, I should have chosen something else, because now I have no words to describe this.
Remember that one problem here is that we cannot just use the regular dictionary or a concurrent dictionary. We need to have a fixed state of the dictionary when we publish it. What if we use a normal dictionary, cloned?
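The cloning approach does satisfy the "fixed state" requirement, since readers only ever see a fully built, effectively read-only copy. A rough sketch of how such a publish could look (the names here are mine, not the RavenDB code):

using System.Collections.Generic;
using System.Threading;

class PublishedTable
{
    private Dictionary<long, object> _current = new();

    // Readers grab the current instance; it is never mutated after publication.
    public IReadOnlyDictionary<long, object> Current => Volatile.Read(ref _current);

    // Called by the single write transaction: clone and atomically swap.
    public void Publish(Dictionary<long, object> writeCopy)
    {
        Volatile.Write(ref _current, new Dictionary<long, object>(writeCopy));
    }
}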
This is far better, at about 40%, instead of 60% or 90%.
You have to understand, better doesn’t mean good. Spending that much of the transaction’s time just on publishing its state is beyond ridiculous.
We need to find another way to do this. Remember where we started? The PageTable in RavenDB that currently handles this is really complex.
I looked into my records and found this blog post from over a decade ago, discussing this exact problem. It certainly looks like this complexity is at least semi-justified.
I still want to be able to fix this… but it won’t be as easy as reaching out to a built-in type in the BCL, it seems.
At the heart of RavenDB, there is a data structure that we call the Page Translation Table. It is one of the most important pieces inside RavenDB.
The page translation table is basically a Dictionary<long, Page>, mapping between a page number and the actual page. The critical aspect of this data structure is that it is both concurrent and multi-version. That is, at any given moment there may be multiple versions of the table in use, each representing the state of the table at a particular point in time.
The way it works, a transaction in RavenDB generates a page translation table as part of its execution and publishes the table on commit. However, each subsequent table builds upon the previous one, so things become more complex. Here is a usage example (in Python pseudo-code):
table = {}

with wtx1 = write_tx(table):
    wtx1.put(2, 'v1')
    wtx1.put(3, 'v1')
    wtx1.publish(table)  # table has (2 => v1, 3 => v1)

with wtx2 = write_tx(table):
    wtx2.put(2, 'v2')
    wtx2.put(4, 'v2')
    wtx2.publish(table)  # table has (2 => v2, 3 => v1, 4 => v2)
This is pretty easy to follow, I think. The table is a simple hash table at this point in time.
The catch is when we mix read transactions as well, like so:
# table has (2 => v2, 3 => v1, 4 => v2)
with rtx1 = read_tx(table):
    with wtx3 = write_tx(table):
        wtx3.put(2, 'v3')
        wtx3.put(3, 'v3')
        wtx3.put(5, 'v3')

        with rtx2 = read_tx(table):
            rtx2.read(2)  # => gives v2
            rtx2.read(3)  # => gives v1
            rtx2.read(5)  # => gives None

            wtx3.publish(table)
            # table has (2 => v3, 3 => v3, 4 => v2, 5 => v3)
            # but rtx2 still observes the values as they were when rtx2 was created
            rtx2.read(2)  # => gives v2
            rtx2.read(3)  # => gives v1
            rtx2.read(5)  # => gives None
In other words, until we publish a transaction, its changes don’t take effect, and any read transaction that was already started isn’t impacted. We also need this to be concurrent, so we can use the table from multiple threads (a single write transaction at a time, but potentially many read transactions). Each transaction may modify hundreds or thousands of pages, and we’ll only clear the table of old values once in a while (so it isn’t infinite growth, but it may certainly reach a respectable number of items).
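To make those requirements concrete, here is a rough sketch of the contract in C# (the names are mine, not the ones used inside RavenDB):

public sealed class Page { /* page number, raw data, etc. */ }

public interface IPageTableSnapshot
{
    // Returns the page as it was when the snapshot was taken,
    // or null if it wasn't in the table at that point.
    Page? TryGet(long pageNumber);
}

public interface IPageTable
{
    // Called only from the single write transaction.
    void Set(long pageNumber, Page page);

    // Makes everything written so far visible to snapshots taken afterwards.
    void Publish();

    // Cheap and thread-safe; the snapshot is unaffected by later Set/Publish calls.
    IPageTableSnapshot TakeSnapshot();
}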
The implementation we have inside of RavenDB for this is complex! I tried drawing that on the whiteboard to explain what was going on, and I needed both the third and fourth dimensions to illustrate the concept.
Given these requirements, how would you implement this sort of data structure?