Oren Eini, CEO of RavenDB, a NoSQL Open Source Document Database

time to read 3 min | 462 words

In my previous post, I discussed some options for changing the syntax of graph queries in RavenDB from Cypher to be more in line with the rest of the RavenDB Query Language. We have now completed that part and can see the real impact it has on the overall design.

In one of the design reviews, one of the devs (who has built non-trivial applications using Neo4j) complained that the syntax is now much longer. Here are the before and after queries to compare:

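The screenshots are gone, but based on the syntax discussion in the post below, the comparison was roughly along these lines (an illustrative sketch, not the exact queries from the image):

```
// Before, Cypher-flavored:
match (a:Dogs (id() = 'dogs/arava'))-[l:Likes]->(b:Dogs)

// After, RQL-flavored:
match (Dogs as a where id() = 'dogs/arava')-[Likes as l]->(Dogs as b)
```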

The key, from my perspective, is that the new form is more explicit and easier to read after the fact. Queries tend to grow more complex over time, and they are read a lot more often than they are written. As such, I absolutely want to lean toward being readable over being terse.

The example above just shows the extra characters that you need to write. Let’s talk about something that is a bit more complex:

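Again the screenshot is lost; here is a sketch of the kind of query being discussed, using the Northwind sample data names (the document id and the specific filter are assumptions):

```
match (Orders as o where id() = 'orders/821-A')
      -[Lines as l where l.Discount > 0 select Product]->
      (Products as p)
```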

Now we have a lot more text, but it is a lot easier to understand what is going on. Focus especially on the Lines edge, where we can very clearly separate what constitutes the selection on the edge, the filter on the edge, and which property contains the actual linked document id.

The end result is that we now have a syntax that is a lot more consistent and approachable. There are other benefits, but I’ll show them off in the next post.

A major source of annoyance for me with this syntax was how to allow anonymous aliases. In the Cypher syntax we used, you could do something like:

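In Cypher, an anonymous alias looks roughly like this (a sketch; the original image is gone):

```
match (a:Dogs)-[:Likes]->(:Dogs)
```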

There is a problem with how to express anonymous aliases in the Collection as alias mode. I initially tried to make it work by saying that we’ll look at the rest of the query and figure it out. But that just felt wrong. I didn’t like the inconsistency. I want a parse tree that I can look at in isolation and know what is going on. Simplifying the language is something that pays dividends over time, so I eventually decided that the query above will look like this in the new syntax:

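Presumably along these lines, with the underscore discussed below standing in for the alias we don’t care about:

```
match (Dogs as a)-[Likes]->(Dogs as _)
```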

There is a lot of precedent for using underscore as the “I don’t care” marker, so that works nicely and resolves any ambiguity in the syntax.

time to read 5 min | 981 words

When we started building support for graph queries inside RavenDB, we looked at the state of the market in this regard. There seem to be two major options: Cypher and Gremlin. Gremlin is basically a fluent interface that represents a specific graph pattern, while Cypher is a more abstract manner of representing the graph query. I don’t like Gremlin, and it doesn’t fit into the model we have for RQL, so we went for the Cypher syntax. Note the distinction between went for Cypher and went for the Cypher syntax.

One of the major requirements that we have is fitting into the pre-existing Raven Query Language, but the first concern was just getting started and getting some idea about our actual scenarios. We are now at the point where we have written a bunch of graph queries and have a lot more experience in how they mesh into the overall environment. And at this point, I can really feel that there is an issue in meshing the Cypher syntax into RQL. They don’t feel the same at all. There are a lot of good ideas there, make no mistake, but we want to create something that flows as a cohesive whole.

Let’s look at some of our queries and how we can better express them. The one I have talked about the most is this:

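The screenshot is gone, but the component list below lets us reconstruct it; it was presumably very close to:

```
match (a:Dogs (id() = 'dogs/arava'))-[l:Likes]->(b:Dogs)
```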

Let’s see what we have here:

  • match is the overall clause that applies a graph pattern query to the dataset.
  • () – is an indication of a node in the graph.
  • [] – is an indication of an edge.
  • a:Dogs, l:Likes and b:Dogs – each of these is an alias and a collection specification.
  • -[]-> – is an indication of a directed edge between two nodes.
  • (expression) – is a filter on a node or an edge

I’m ignoring the select statement here because it is just the usual RQL select statement.

The first thing that keeps biting us is the filter in (a:Dogs (id() = 'dogs/arava')). I keep getting tripped up by the missing closing parenthesis, so that has got to go. Luckily, it is very obvious what to do here:

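Presumably:

```
match (a:Dogs where id() = 'dogs/arava')-[l:Likes]->(b:Dogs)
```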

We use an explicit where clause, instead of the () to express the inline filter. This fits a lot more closely with how the rest of RQL works.

Now, let’s look at the aliases: (b:Dogs). The alias:Collection syntax is pretty foreign to RQL; we tend to use the Collection as alias syntax. Let’s see how that would look, shall we?

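Something like:

```
match (Dogs as a where id() = 'dogs/arava')-[Likes as l]->(Dogs as b)
```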

This looks a lot more natural to me, and it is a good fit into RQL in general. This syntax does bring a few things to the table. In particular, look at the edge. In Cypher, an anonymous edge would be [:Likes]; using this method, we will have just [Likes].

However, as nice as this syntax is, we still run into a problem. The query above is actually just a shorthand way to write the full query, which looks like so:

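Based on the description below, the two queries were presumably along these lines (the with edges phrasing is inferred from the text, so treat this as a sketch):

```
// Fully explicit: the match clause refers only to with statements.
with { from Dogs where id() = 'dogs/arava' } as a
with { from Dogs } as b
with edges (Likes) as l
match (a)-[l]->(b)

// Mixed: explicit with statements plus an implicit edge (the Likes).
with { from Dogs where id() = 'dogs/arava' } as a
with { from Dogs } as b
match (a)-[Likes]->(b)
```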

In fact, we have two queries here, to show off the actual problem we have in parsing. In the first case, we have a match clause that only refers to explicit with statements. In the second case, we have a couple of explicit with statements, but also an implicit with edges expression (the Likes).

From the point of view of the parser, we can’t distinguish between those two. Now, we can absolutely say that if the edge expression contains a single name, we’ll simply look for an edge with that name, and otherwise assume that this is the path that will be used.

But this seems error prone, because you might have a small typo, or remove an edge statement, and get a completely different (and unexpected) meaning. I thought about adding some sort of prefix to help tell an alias from an implicit definition, but that looks very ugly, see:

[Image: the query rewritten with a prefix marker on the aliases]

And on the other hand, I really like the -[Likes]-> syntax in general. It is a lot cleaner and easier to read.

At this point, I don’t have a solution for this. I think we’ll go with the mode in which we can’t tell what the query is meant to say just from the parser, and look at the explicit with statements to figure it out (with the potential for mistakes that I pointed out earlier) until we can figure out something better.

One thing that I’m thinking about is that the () and [], which help distinguish between nodes and edges, aren’t actually required for us if we have an explicit statement. So we can write it like so:

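Perhaps along these lines (a sketch):

```
with { from Dogs where id() = 'dogs/arava' } as a
with { from Dogs } as b
with edges (Likes) as l
match a-l->b
```

Here a bare name refers to an explicitly defined alias, while wrapping a name in () or [] would mark an implicit node or edge definition.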

In this manner, we can tell, quite easily, whether you meant to define an implicit edge / node or to refer to an explicitly defined alias. I’m not sure whether this would be a good idea, though.

Another issue we have to deal with is:

[Image: a Cypher-flavored query with a filter expression on the edge]

Note that in this case, we have a filter expression on the edge as well. Applying the same process we have done so far, we get:

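Presumably something like the following, again with Northwind names standing in for the originals:

```
match (Orders as o)
      -[Lines as l where l.Discount >= 0.25 select Product]->
      (Products as p)
```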

The advantage here is that this is very clear and obvious about what is going on. The disadvantage is that it takes quite a bit longer to express.

time to read 2 min | 279 words

An interesting challenge with implementing graph queries is that you sometimes get into situations where the correct behavior is counterintuitive.

Consider the case of a graph in which Arava likes Oscar, and the following query:

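The screenshot is gone; a sketch of the query in the Cypher-flavored syntax of the time, with the projection names taken from the result below (treat the details as assumptions):

```
match (a:Dogs)-[l:Likes]->(b:Dogs)
select a.Name as Source, b.Name as Destination, l as Edge
```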

This will return:

  • Source: Arava, Destination: Oscar

But what would be the value of the Edge property? The answer to that is… complicated. What we actually return is the edge itself. Let’s see what I mean by that.

[Image: the Arava document, whose Likes property holds the value dogs/oscar]

And, indeed, the value of Edge in this query is going to be dogs/oscar.

[Image: the query result, showing Edge = dogs/oscar]

This isn’t very helpful if we are talking about a simple edge like this. After all, we can deduce this from the Src –> Destination pair. This gets more interesting when the edge is more complex. Consider the following query:

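The exact query isn’t recoverable, but based on the description it combined a Lines edge, a filter on the edge, and the Product property holding the link, presumably something like (Northwind names, all assumptions):

```
match (o:Orders)-[l:Lines(Discount > 0) select Product]->(p:Products)
select o.Company as Source, p.Name as Destination, l as Edge
```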

What do you think should be the output here? In this case, the edge isn’t the Product property, it is the specific line that matched the filter on the edge. Here is what the result looks like:

[Image: the result, where Edge is the entire order line that matched the filter]

As you can imagine, knowing exactly what edge led you from one document to another can be very useful when you look at the query results.

time to read 2 min | 258 words

I was busy working on the implementation of filtering in graph queries, as discussed in my previous post. What I ended up implementing is a way for the user to tell us exactly how to handle the results. The actual query we ended up with is this:

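The image is gone; from the description of a and c below, the query was presumably shaped like this:

```
match (a:Dogs)-[:Likes]->(b:Dogs)-[:Likes]->(c:Dogs)
where a != c
```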

And the key part here is the where clause, where we state that a and c cannot be the same dog. This also matches the behavior of SQL, and for that reason alone (predictability), it’s a good idea.

However, I didn’t just implement inequality. I implemented full filtering capabilities, and you can access anything in the result. Which means that this query is now also possible:

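The screenshot is lost; working from the description below, it was presumably something in this spirit (the select Product phrasing and all the property names are assumptions based on the Northwind sample data):

```
match (o:Orders)-[l:Lines select Product]->(p:Products)-[:Category]->(c:Categories)
where c.Name in ('Beverages', 'Condiments')
  and l.PricePerUnit < p.PricePerUnit
select o.Id as Order, p.Name as Product,
       l.PricePerUnit as SalePrice, p.PricePerUnit as RegularPrice
```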

I’ll give you a moment to analyze this query in peace. Try to de-cypher it (pun intended).

What this query does is compare the actual sale price and the regular price of a product on a particular order, for products that match a particular set of categories.

This is a significant query because, for the first time in RavenDB, you have the ability to perform such a query (previously, you would have had to define a specific index for it).

In other words, what graph query filtering brings to the table is joins. I did not set out to build this feature, and I’m feeling very strange about it.

time to read 3 min | 578 words

RavenDB is pretty big. It is over 600,000 lines of C# code and over 220,000 lines of TypeScript code. In a codebase that large, there are unexpected interactions between different components (written by us, by third parties, and even by the operating systems we use).

Given how important the stability of RavenDB is, we spend quite a bit of time (as in, the majority of it) not writing new features, but ensuring that the system is stable, predictable and observable. One part of that is a large suite of tests, which are being run on a variety of machines and conditions.

Some of these tests fail, in which case we fix them. A failing test is wonderful, because it tells us that something is wrong. A predictably failing test is a pleasure, because it states, in unambiguous terms, what is going on and what the trouble is. I love getting a failing test; there is usually a pretty straightforward way to figure out what went wrong and then to actually fix it.

Then there are tests that fail occasionally, and I really hate them. They almost always relate to some sort of race condition. Sometimes the race is in the test itself, but sometimes the problem is in the actual code. The problem is that tracking down such an issue is pretty hard and annoying. The more frequently we can induce the failure, the faster we can actually get to resolving it.

We recently had a test that failed, very rarely, and only on Linux.

The debugging landscape* on Linux is dramatically poorer compared to Windows, so that adds another hurdle.

* Yes, we have JetBrains’ Rider, and it is great. But it is still quite far from the debugging capabilities of Visual Studio, especially for non-trivial debugging.

The test failed because of a timeout waiting for a cluster to fully disseminate changes between all the members in the cluster. That means that we had a test that would spin up three to five independent nodes, combine them into a cluster, create a database that is shared among all these nodes, write documents to one of the nodes, and then validate that the documents are indeed on all the nodes. A failure there, and a timeout failure at that, means that we have to inspect pretty much the whole system.

Luckily, we had some good people on this issue, and they managed to come up with a minimal reproduction: all it took was to spin up a TcpListener and a TcpClient and have them talk to one another, then do the same using SSL. We got some really interesting results because of that.

                             Windows      Linux           Diff
Single Threaded – Plain      192.8        200.8           104%
Single Threaded – SSL        5,762.3      667,549.8       11,584%
Concurrent (200) – Plain     11,377.5     932,487.9       8,195%
Concurrent (200) – SSL       145,494.8    35,283,175.3    24,250%

As you can see, there is a minor discrepancy in the performance of TCP connection times. All the tests were run on the same machine, testing over localhost.

We opened an issue for this problem, and for now we deal with it by accepting that the connection time can be very long and adjusting the timeout for the test.

time to read 2 min | 220 words

“I just found out that you can do Dancing Rhinos in 4D if you use FancyDoodad 2.43.3” started a conversation at the office. That is pretty cool, I’ll admit; getting Rhinos to dance at all is nice, and in 4D is probably nicer. I wasn’t aware that FancyDoodad had this feature at all. Great tidbit, and something to discuss over lunch or coffee.

The problem is that the follow up was something in the order of: “Now I wonder how we can use FancyDoodad’s cool feature for us. Do you think it can solve the balance issue for this problem?”

Well, this problem has nothing to do with Rhinos, wildlife, dancing or (hopefully) dimensional math. So while I can see that if you had a burning enough desire and only a hammer, you would be able to use FancyDoodad to try to solve this thing, I don’t see the point.

The fact that something is cool doesn’t mean that it:

  • Is useful.
  • Ought to go into our codebase.
  • Solves our actual problem.

So broaden your horizons as much as possible, and learn as much as you can ingest. But remember that everything starts at negative hundred points, and coolness on its own doesn’t affect that math.

time to read 2 min | 381 words

We ran into an interesting design issue when building graph queries for RavenDB. The problem statement is fairly easy: should a document be allowed to be bound to multiple aliases in the query results, or just one? However, without context, the problem statement is not meaningful, so let’s talk about what the actual problem is. Consider a graph with three documents, Arava, Oscar and Phoebe, and the following edges:

  • Arava Likes Oscar
  • Phoebe Likes Oscar

We now run the following query:

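A sketch of the pattern, reconstructed from the description that follows (the exact syntax is an assumption):

```
match (a:Dogs)-[:Likes]->(b:Dogs)<-[:Likes]-(c:Dogs)
```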

This query asks for a dog that likes another dog that is liked by a dog. Another way to express the same sentiment (indeed, how RavenDB actually considers this type of query) is to write it as follows:

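Presumably the same pattern decomposed into two simple ones joined by and:

```
match (a:Dogs)-[:Likes]->(b:Dogs) and (c:Dogs)-[:Likes]->(b:Dogs)
```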

When processing the and expression, we require that documents that match the same alias will be the same. Given the graph that we execute this on, what would you consider the right result?

Right now, we have the first option, in which a document can be matched to multiple different aliases in the same result, which would lead to the following results:

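Working from the graph above (Arava likes Oscar, Phoebe likes Oscar), that would presumably be these four results:

  • a: Arava, b: Oscar, c: Arava
  • a: Arava, b: Oscar, c: Phoebe
  • a: Phoebe, b: Oscar, c: Arava
  • a: Phoebe, b: Oscar, c: Phoebe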

Note that in this case, the first and last entries match a and c to the same document.

The second option is to ensure that a document can only be bound to a single alias in the result, which would remove the duplicate results above and give us only:

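Presumably leaving only:

  • a: Arava, b: Oscar, c: Phoebe
  • a: Phoebe, b: Oscar, c: Arava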

Note that in either case, position matters, and the minimum number of results this query will generate is two, because we need to consider different starting points for the pattern match on the graph.

What do you think we should do in such a case? Are there reasons to want one behavior or the other, and should it be something that the user selects?

time to read 3 min | 427 words

I used to be a consultant for a long while, and that meant that I worked on a lot of customer projects. That led me to seeing, and acting in, some really strange ways.

Sometimes you go into a codebase and you can’t really believe what you see there. I think that this is similar to how an archeologist feels, seeing just remnants of something and having to deduce the forces that drove the people who built it. In some cases, what looks like bad code is actually a reaction to a bad policy that people are trying to work around.

I think that the strangest of these cases was when I was working for a customer that refused to let external consultants use their internal source control system. Apparently, they had sensitive stuff there that they couldn’t isolate, or something like that. They were using either Team Foundation Server or Visual SourceSafe, and I didn’t really want to use that source control anyway, so I didn’t push. I did worry about source control, so we had a shared directory being used as a Subversion repository. This was over a decade ago, mind.

So far, so good, and nothing really interesting to talk about. What killed me was that their operations team flat out refused to back up the Subversion folder. That folder was hosted on a shared server that belonged to the consulting company (but resided at the customer site), and they were unwilling either to back up a “foreign” computer or to provide us with a shared space to host Subversion that they would back up.

For a while, I would back up the Subversion repository every few days to my iPod, then take a copy of the entire source code history home with me. That wasn’t sustainable, and I was deeply concerned about the future of the project over time, so I also added a twist: as part of the build process, we packed the entire source directory of the codebase as an embedded resource into the binary. That way, if the code was ever lost, which I considered to be a real possibility, I would have a way to recover it.

After we handed off the project, I believe they moved the source to their own repository, so we never actually needed that. But I slept a lot better knowing that I had a second string to my bow.

What is your craziest story?

time to read 4 min | 707 words


As I’m writing this, we have the following branches in the main repository of RavenDB. Looking at their history, we have:

Branch    Last Commit     Number of commits this year
v1.0      Feb 3, 2013     0
v2.0      Oct 14, 2016    0
v2.5      Oct 18, 2018    14
v3.0      Aug 14, 2018    10
v3.5      Oct 11, 2018    45
v4.0      Oct 18, 2018    2,270
v4.1      Oct 18, 2018    3,214
v4.2      Oct 18, 2018    95

The numbers are actually really interesting. Branches v1.0 and v2.0 are legacy and no longer supported. Branch v2.5 is also legacy, but we have a few customers with support contracts that are still using it, so there are still minor bug fixes going on there occasionally. Most of the people on the 3.x line are using 3.5, which is now in maintenance mode, so you can see that there is very little work on the v3.0 branch and a bit of ongoing bug fixing for customers.

The bulk of the work is on the 4.x line. We released v4.0 in February of this year, and then switched to working on v4.1, which was released a couple of months ago. We actively started working on v4.2 this month. We are going to close down the v4.0 branch for new features at the end of this month and move it, too, to maintenance mode.

In practical terms, we very rarely need to do cross-major-version work, but we do have a lot of prev, current, next parallel work. In other words, the situation right now is that a bug fix has to go to at least v4.1 and v4.2, and usually to v4.0 as well. We have been dealing with several different ways to handle this task.

For v4.0 and v4.1 work, which went on in parallel for most of this year, we had the developers submit two pull requests for their changes, one for v4.0 and one for v4.1. This increased the amount of work each change took, but the cost was usually just a few minutes at PR submission time, since we could usually cherry-pick the relevant changes and be done with it. The reason we did it this way is to avoid big merges as we move work between actively worked-on branches. That would require having someone dedicated just to handle the merges, and it was easier to do it inline, rather than in a big bang fashion.

For the v4.2 branch, we are experimenting with something else. Most of the work is going on in the v4.1 branch at this point, mostly minor features and updates, while the v4.2 branch is getting a much larger scope of changes. It doesn’t make sense to ask the team to send three PRs, and we are going to close down v4.0 this month anyway. What we are currently doing is designating a person who is in charge of merging the v4.1 changes to v4.2 on a regular basis. So far, we are still pretty close and there hasn’t been a big amount of change. Depending on how it goes, once v4.0 is retired from active status we’ll either keep doing the dual PRs or see if the regular merges can keep going.

For feature branches, the situation is more straightforward. We typically ask the owner of the feature to rebase on a regular basis on top of whatever the baseline is, and the responsibility to do that is on them.

A long feature branch for us can last up to a month or so, but we have had a few that took three months when the change was big. I tend to really dislike those, and we are trying to get them down to much shorter timeframes. Most of the work doesn’t happen in a feature branch; we’ll accept partial solutions (if they don’t impact anything else), and we tend to collaborate a lot more closely on code that is already merged rather than in independent branches.

time to read 2 min | 323 words

Regardless of the operating system you use, you are going to get roughly the same services from each of them, in particular process and memory isolation, managing the hardware, etc. It can sometimes be really interesting to see the difference between the operating systems’ approaches to solving the same problem. Case in point, how both Windows and Linux manage memory. Both of them run on the same hardware and do roughly the same thing. But they have very different styles, and this ends up having profound implications on the applications using them.

Consider what appears to be a very simple question: what stuff do I have in my RAM? Linux keeps track of the Resident Set Size on a per-mapping basis, which means that we are able to figure out how much of an mmap’ed file is actually in memory. Furthermore, we can figure out how much of the mmap data is clean, which means that it is easily discardable, and how much is dirty and needs to be written to disk. Linux exposes this information via the /proc/[pid]/smaps file.
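For illustration, an smaps entry for a memory-mapped file looks roughly like this (the path and the numbers are made up for the example; the field names are the real ones):

```
7f14a8000000-7f14c8000000 r--s 00000000 08:01 3801234   /path/to/mapped/datafile
Size:             524288 kB
Rss:              204800 kB
Shared_Clean:     201728 kB
Shared_Dirty:          0 kB
Private_Clean:      1024 kB
Private_Dirty:      2048 kB
```

Rss tells you how much of the mapping is actually resident, and the Clean / Dirty breakdown tells you how much of that can be discarded cheaply versus how much would have to be written back first.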

On the other hand, Windows doesn’t seem to bother doing this tracking. You can get this information, but you need to ask for it for each page individually, which means that it isn’t feasible to check what percentage of the system memory is clean (mmap pages that haven’t been modified and can be cheaply discarded). Windows exposes this via the QueryWorkingSetEx method.

As a result, we have to be more conservative on Windows when the system reports high memory usage. We know that our usage pattern means that a high amount of memory in use (coming from clean mmap pages) is fine. It is a small detail, but it has caused us to have to jump through several hoops when we are running under load. I guess that Windows doesn’t need this information internally, so it isn’t exposed, while on Linux it seems to be used by plenty of callers.
