Azure DocumentDB
On Friday, Microsoft came out with Azure DocumentDB. You might say that I have a small interest in such things, so I headed over there to see what I could learn about this project.
Aside from being somewhat annoyed with the name, this seems to be a very different animal from RavenDB, something built to serve a different niche. One of the things that we put first with RavenDB is ease of use, development and deployment for business applications. The ADB design appears to be built around a different goal: very big datasets.
Nitpicker corner: Yes, I know this is a preview, and I know that there are going to be changes. And I repeat, I have no knowledge about this project beyond the documentation and several hours of playing with it.
That said, I do have a fair bit of experience in this area. So I feel that I can speak with confidence about the topic.
ADB is supposed to be a highly scalable system that stores documents. So far, so good, I can certainly understand that need. But it has made drastically different design choices, some of which I feel very strongly about. I'll try to explore the problems I have with those choices, and contrast that with what you can do with RavenDB.
This post has two parts. The first talks about conceptual issues. The second talks about the currently published limits, and their implications for general use of ADB.
TLDR;
- No sorting option, or a good paging story
- SQL Injection, without any other alternative
- Hard to deploy and to keep current with your codebase
- Poor development story & no testing story
- Poor client API
- Lots of table scans
- Limited queries and few optimization options
- Single document transactions (from the client)
- No cross collection transactions at all
- Very small document sizes allowed
Also see the “What is this for?” section below.
For a document database platform that doesn’t have any of those issues, and runs on Azure, see RavenHQ on Azure.
Transactions – ADB says that it has transactions, and for a very limited meaning of the word, I believe it means it. Transactions in ADB mean that a single document can be saved with a guarantee that it will either be saved or not. That is great, in the sense that at least you won’t have data corruption, but it isn’t really something that means much. Even MongoDB can satisfy that bar.
Oh, sure, you can get actual transactions if you write JS code that runs as a “stored procedure” inside ADB. This means that you can send data to the server and have your JS stored procedure make multiple operations in a single transaction. Which is just slightly better (although see my comments on those stored procedures later), but that is still limited to operations inside the same collection.
A trivial example for transactions in a document database would be to add a new comment, and update the comment count. You cannot do that in ADB. Not in a single transaction. I don’t know about you, but most of the interesting use cases happen when you are working with multiple document types. Sure, you can put all your documents inside the same collection, but have fun trying to work with that in the long term.
In contrast, RavenDB fully supports actual transactions that can span multiple documents (even in different collections, which I would never have believed would count as an accomplishment). RavenDB can even support DTC and transactions that span multiple interactions with the server. You know, the kind of transactions you actually want to use. For more, see the documentation on RavenDB transactions.
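Just to make that concrete, the comment scenario above looks roughly like this with the RavenDB client (a minimal sketch; the Post and Comment classes and the store instance are assumed):

using (var session = store.OpenSession())
{
    var post = session.Load<Post>("posts/123");
    post.CommentCount++; // update the denormalized comment count

    session.Store(new Comment
    {
        PostId = post.Id,
        Text = "Great post!"
    });

    // One transaction: the new comment and the updated post either
    // both make it to the server, or neither does.
    session.SaveChanges();
}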
Management – it honestly feels like someone missed all the points that made people want to ditch SQL in the first place. ADB has the concepts of triggers, user defined functions (more on that travesty later, when I discuss queries) and stored procedures. You define them in JS, as functions stored inside the database itself.
Let me count the ways that this is going to cause problems for you.
- Business logic in the database, because we haven’t learned anything about that in the past.
- Code that you cannot run or test independently. Just debugging something like that is going to be hard.
- No way to actually manage deployment or make sure that this code is in sync with the rest of your codebase.
- Didn’t we already learn that triggers are a source of a lot of pain? Are they really necessary for you to do things?
Yes, you have a DB that is schema less, but those kinds of things are actually important. They define what you can do with the database, and not having a good way to move them around, and most importantly, not having a way to tie them to the source control system you are using, is going to be a giant PITA.
Sorry, that isn’t something you can delay for later. You need a good development story, and as I see it, the entire development story here is just going to be hard. You would have to manually schlep things around between development and production. And that isn’t just about the SPs or UDFs. There are a lot of settings that you’re going to have to deal with. For example, the configuration per collection, which you’ll want to make sure is the same (otherwise you get some very subtle and hard to understand bugs).
For that matter, there doesn’t seem to be a development story. You are probably expected to just run another ADB instance on Azure and use that. This means a lot of latency in development, and that also means that you can’t have separate databases per developer, which is a standard practice. This means having to write a lot of code just to manage those things, and you are right back again at the good old days of “who didn’t update the schema script” and failed deployments.
In contrast, RavenDB makes it very easy to keep your indexes & transformers in your code and deploy them in a single step. That also means that they are versioned in the same place as your code, so you don’t have to worry about moving between dev & prod. We spent a lot of time thinking and working on this specific area, because this is a common pain point in relational databases, and we weren’t willing to accept that being the case in our database. For more information, please see the documentation about index management in RavenDB.
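For example, here is a minimal sketch of an index defined in code (Order is a stand in document class) and deployed at startup, together with the rest of your code:

public class Orders_ByCustomer : AbstractIndexCreationTask<Order>
{
    public Orders_ByCustomer()
    {
        // The index definition lives in source control, next to the code using it
        Map = orders => from order in orders
                        select new { order.CustomerId, order.Total };
    }
}

// At application startup, create or update all indexes in the assembly:
IndexCreation.CreateIndexes(typeof(Orders_ByCustomer).Assembly, store);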
Indexing – there are several things here that bother me. By default, everything is indexed, and in the same transaction. This is a great decision, for a demo system. But in a real world system, the overhead of indexing everything is prohibitive, especially in a high write system. So ADB allows you to specify the paths that you will include or exclude from indexing, as well as whether indexing should happen within the same transaction or lazily.
The problem is that those are per collection settings, and there doesn’t appear to be any way to modify them after the fact. So you start running your system in production, realize that the cost of indexing is high, and you need to change the indexing strategy for a collection. The only way to do that is to create a new collection with a new indexing strategy, move all the data there, then delete the old one. For even more fun, consider the case where you have production and development environments. In production, you have a different indexing strategy than in development (where the ‘index everything’ mode is still on). That means that when you push things to production, your system will fail silently, because you won’t be indexing the fields you thought were indexed.
This needs reiterating: the way this currently works, you start running with the default indexing option, which is expensive. As long as you don’t have any performance requirements (for example, during development), that is just fine. But when you actually have a lot of data, or a lot of writes, that is when you’ll figure out that those things need to be changed. At that point, you are pretty much screwed, because you need to pull all the data out, create a new collection with the new indexing options, and write it all back. That is a horrible experience, especially because you’ll likely need to do it under pressure, with users breathing down your neck and management complaining about the performance.
For that matter, indexing in general scares me. Again, I don’t actually have any knowledge of the internal operations, but there is a lot of stuff there that just doesn’t make sense. It looks like the precision of the indexes is up to 3 characters (by default) per value. I’m guessing that this is done to reduce the amount of space used by the indexing, at least that is what the docs say. The problem is that when you do that, you do a lookup by the first 3 characters, then you have to do a flat search over all the other values. That is going to cause problems.
It is also indicated that you cannot do any range searches except on numeric values. Which has interesting implications if you want to do searches on something like a date range, or time spans, an incredibly common operation.
In contrast, RavenDB indexes always use the full value, so you get O(log N) search behavior, not a fallback to O(N) behavior. Range searches are possible on any value: numeric, date time, time span, string, etc. For more information, see the RavenDB documentation about searching with RavenDB.
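For example, a date range query through the RavenDB LINQ API (a sketch, with a stand in Order class) is an indexed range search, not a scan:

var juneOrders = session.Query<Order>()
    .Where(o => o.Date >= new DateTime(2014, 6, 1) &&
                o.Date < new DateTime(2014, 7, 1))
    .ToList();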
Queries – speaking of problems, let me talk for a moment about ADB SQL. It looks nice on the surface, and it would certainly be familiar to most people. It also contains a lot of hidden traps.
For example, the docs talk about being able to do joins, but you are only actually able to do “joins” into the sub documents, not into other collections, or even documents in the same collection. A query such as:
SELECT c.Name as CustomerName, o.Total, o.Date FROM Orders o JOIN Customers c ON c.Id = o.CustomerId
Can’t be executed on ADB. So the whole notion of “joins” is actually limited to what you can do in a single document and the sub documents it contains. That makes it very limited.
The options for filtering (the where clause) are also interesting, mostly because of the wide range they allow. It is very easy to create queries that cannot be computed using indexes. That means that your query is now running table scans. Lots & lots of table scans. Sure, you don’t have tables, but O(N) is still O(N), and when N is large, as it is apparently the expected case here, you are going to be pretty much dead in the water.
Another thing that I can’t wrap my head around is the queries shown. There is no way to pass parameters to the query. None. This appears to be the case because 30+ years of working with SQL has shown that there is absolutely no issue with putting user input directly into the query. And since complex queries require you to use the raw ADB SQL, you are pretty much guaranteed to have SQL Injection attacks.
Sure, you might not get caught by Little Bobby Tables (you can’t modify data via SQL), but you are still exposed and can leak important data. This query works just fine, and will return all products:
SELECT * FROM Products p WHERE p.Name = "testing" OR 1 = 1 -- "
I’ll assume that you understand how I got there. This is a brand new database engine, but ADB is bringing very old issues back into the future. Not only that, we don’t have any way around it. I guess you are going to have to write your own parameter scrubbing code, and make sure to use it everywhere.
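A sketch of what that scrubbing code might look like (EscapeAdbString is a hypothetical helper; hand rolled escaping like this is exactly what parameterized queries were invented to replace):

static string EscapeAdbString(string userInput)
{
    // Escape backslashes and quotes so input can't break out of the literal
    return userInput.Replace("\\", "\\\\").Replace("\"", "\\\"");
}

var query = "SELECT * FROM Products p WHERE p.Name = \"" +
            EscapeAdbString(userInput) + "\"";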
In general, queries are limited. Severely limited, actually. Take a look at the following query:
SELECT * FROM Products p WHERE p.Type = "Beer" AND p.Maker = "Guinness" AND p.Discontinued = false AND p.Price > 10 AND p.Price < 100
You can’t run it in ADB. It is too complex to run. Note that this is about as trivial a query as you can get, in pretty much any reasonable business system.
Continuing with the problems for business apps theme, there doesn’t appear to be any good way to do things like paging. When you issue a query, you can specify the number of items to take, and you can repeat a query by passing a continuation. But that doesn’t really help when you need to actually page with the user. So you show the data to the user, then want to go to the next page… you have to pass the continuation token all the way around, and hope that it will remain valid for the duration. For that matter, the current client API does paging at the server level, but it will fetch all the results for a query, even if it takes hours to do so.
There is no way to actually get the total number of items that match the query. So you can’t show the user something like: “You have 250 new emails”, nor can you show them “Page 1 … 50”.
Another troubling omission is the total lack of anything that would allow you to actually query your documents in a particular order. If I want to get the latest orders in descending order (or in fact, in any well defined order), I am out of luck. There is no way of doing that. This is a huge deal, because this isn’t just something that you can try papering over. This is a core functionality that you need in pretty much any application. And it is just not there. There is some indication that this is being worked on, but I’m surprised that this isn’t here already. Distributed sorting is a non trivial problem, of course, so I’ll reserve further judgment until I see what they have.
ADB’s queries are highly limited, so I expect the workaround for that is going to be to push functionality into UDFs. Note that a UDF doesn’t have access to any context, so it can’t load additional documents. What it can do is utterly destroy any chance you’ll ever have of optimizing a query. The moment a UDF is involved, you don’t really have a choice about how to execute a query; you pretty much have to do a table scan. Maybe filtering some stuff based on the other clauses in the query, but in many cases, that means that you’ll have to run your UDF over millions of records. Because UDFs are permitted to perform non pure operations (like getting the current time), you can’t even cache their values, or do anything smart around that. You’ll always have to execute the UDF, regardless of the amount of data you have to go through. I don’t expect that to perform very well.
In contrast, RavenDB was explicitly designed to give you both flexibility and performance in queries. There are no table scans in RavenDB, and complex queries are expected, encouraged and handled properly. Queries across multiple documents (and in other collections) are possible, and quite easy to do. Common operations, like paging or sorting, are part of the core functionality, and are both very easy to use and come with no additional costs. Complex things like full text search, spatial queries, facets and many more are right there for you to use. For more information, see the RavenDB documentation about querying in RavenDB, spatial searches in RavenDB and how RavenDB actually indexes the data to allow complex operations.
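For example, paging with a total count and a well defined order looks roughly like this (a sketch; Order, pageNumber and pageSize are assumed):

RavenQueryStatistics stats;
var page = session.Query<Order>()
    .Statistics(out stats)           // total result count, for "Page 1 ... N"
    .OrderByDescending(o => o.Date)  // a well defined order
    .Skip(pageNumber * pageSize)
    .Take(pageSize)
    .ToList();

var totalPages = (stats.TotalResults + pageSize - 1) / pageSize;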
Data types – ADB data types are the ones defined in the JSON spec. In particular, it doesn’t have native support for date times. The ADB documentation suggests that you do custom serialization to handle that, which renders things like asking “give me all the orders for this customer for 2014” very hard. That is leaving aside the issue of querying for orders in a particular month, which isn’t possible either, since you can only do range searches on numeric data. Dates, in particular, are a very complex topic, and not handling them in the database is going to open you up to a lot of issues down the road. And dates are a kinda important type to have.
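The workaround that the documentation points to looks something like this (a sketch; OrderDoc and the epoch encoding are my own illustration). You store the date as a number so numeric range queries work, and every reader and writer has to agree on the encoding:

public class OrderDoc
{
    public string Id { get; set; }

    // Seconds since 1970-01-01 UTC, because only numeric values
    // support range queries
    public long OrderedAtEpoch { get; set; }

    public static long ToEpoch(DateTime utc)
    {
        var origin = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
        return (long)(utc - origin).TotalSeconds;
    }
}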
In contrast, RavenDB handles complex (including user defined) types in a well defined manner. And has full support for dates, operations on dates, etc. It seems silly to mention, to be fair, because it seems so basic to have that. For more information, you can read the documentation about dates in RavenDB.
Aggregation – this one is simple, you don’t have any. That means that you cannot get the total number of unread emails, or the total sum of orders per customer, or the maximum order this month. This whole functionality just isn’t there.
In contrast, RavenDB has explicit support for counting the number of results for a query, as well as map/reduce indexes. Those give you a powerful aggregation framework, which executes the work in the background. When you query, you get the pre-computed results very quickly, without having to do any work at query time. For more information, you can read about Map/Reduce in RavenDB and dynamic aggregation queries.
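Here is a sketch of such a map/reduce index, keeping the total order value per customer computed in the background (Order is a stand in class):

public class Orders_TotalByCustomer : AbstractIndexCreationTask<Order, Orders_TotalByCustomer.Result>
{
    public class Result
    {
        public string CustomerId { get; set; }
        public decimal Total { get; set; }
    }

    public Orders_TotalByCustomer()
    {
        Map = orders => from o in orders
                        select new { o.CustomerId, o.Total };

        Reduce = results => from r in results
                            group r by r.CustomerId into g
                            select new { CustomerId = g.Key, Total = g.Sum(x => x.Total) };
    }
}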
Set operations – another easy one, it is just not there. You can do some operations in a stored procedure, but you have 5 seconds to run, and that is it. If you need to do something like split FullName into FirstName and LastName, get ready to write a lot of code, and to wait a long time for it to complete. For that matter, something as simple as “delete all inactive users” is very hard to do as well.
In contrast, RavenDB has explicit support for set based updates and deletes. You can specify a query that matches a set of results, which would then either be deleted or patched using a JS script. For more information, read the documentation about Set Based Operations.
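Both of the examples above, as a sketch (the Users/ByActive index name is hypothetical):

// Delete all inactive users, server side, as one operation
store.DatabaseCommands.DeleteByIndex(
    "Users/ByActive",
    new IndexQuery { Query = "Active:false" });

// Split FullName into FirstName / LastName with a set based patch
store.DatabaseCommands.UpdateByIndex(
    "Users/ByActive",
    new IndexQuery(),
    new ScriptedPatchRequest
    {
        Script = @"var parts = this.FullName.split(' ');
                   this.FirstName = parts[0];
                   this.LastName = parts[1];"
    });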
Client API – this is still a preview, so this is somewhat unfair, but the client API is very primitive. Basically, it is a very thin wrapper around the REST API, and it does a poor job at that. The REST API supports paging, but the C# client API does not, for example. There is no concept of unit of work, change tracking, client side behavior or anything at all that would actually make this work nicely. There is also an interesting design decision to go async for all operations except queries.
With queries, you actually issue an async REST call, but you are going to be waiting on that query synchronously. This is probably because of the IQueryable interface and its assumption that queries are synchronous. But that is a very bad thing to do in terms of mixing sync and async work. It is easy to get into problems such as deadlocks, self locks and just plain weirdness.
In contrast, RavenDB has carefully designed client APIs (for .NET, the JVM, etc.), which fully expose the power of RavenDB. They have been designed to be intuitive, easy to use and to guide you into the pit of success. RavenDB also has separate sync and async APIs, including fully async queries. For more information, read the documentation about the client API.
Self links – when issuing any operation whatsoever to the database, you have to use something called the object link, or self link. For example, in my test database, the Products collection link is: dbs/frETAA==/colls/frETANSmEAA=/
You have to use links like that whenever you make any operation whatsoever. For fun, those are going to be unique per database, so creating a Products collection in another database would result in a different collection link. That means that I can’t just store them in configuration, so you’ll probably have to read them from the database every time you need to use them (maybe with some caching?). This is just silly, and it makes it very hard to look at what is going on and see what the system is doing (for example, by watching what is going on in Fiddler).
In contrast, RavenDB applies human readable names whenever possible. For more information, see the documentation about the efforts to make sure that everything in RavenDB is human readable and easily debuggable. One such place is the id generation strategy.
Development and testing – in this day and age, people are connected to the internet through most of their day to day life. That doesn’t mean that they are always connected, or that you can actually rely on the network, or that the latency is acceptable. There is no current development story for ADB. There is no way to run your own database and develop while you are offline (on the train or at 30,000 feet in the air). That means that every call to ADB has to go over the internet, and that means, in turn, that there is no local development story at all. It means a lot more waiting from the point of view of the developer (also see the next point), and it means that there is just no testing story.
If you want to run code to test your ADB usage, you have to set up (and pay for) a whole new ADB instance, make sure that it is set up exactly the same way as your production instance, and run against that. It means that tests not only have to go outside your process, but across the internet to a remote server. This pretty much kills the notion of fast tests.
In contrast, RavenDB has an excellent development and testing story. You don’t pay for development or CI instances, and you can run tests against RavenDB using an in memory mode embedded inside your process. This has been heavily optimized to allow fast running tests. We are developers, and we care about making other developers’ lives easy. It shows. For more information, see the documentation about unit testing RavenDB.
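A minimal sketch of a test against an embedded, in memory RavenDB instance (Product is a stand in class):

using (var store = new EmbeddableDocumentStore { RunInMemory = true }.Initialize())
using (var session = store.OpenSession())
{
    session.Store(new Product { Name = "Guinness" });
    session.SaveChanges();

    // ... run the code under test against this store; when the test
    // process exits, nothing is left behind ...
}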
Joins are for your code – because ADB doesn’t actually support joins beyond the document scope, or any other option like that, it means that if you want to do something trivial, like show a customer a list of their orders, you are actually going to have to do the join in your own code, not in the database. In fact, let us take a silly scenario, let us say that we want to show a list of new employees as well as their managers, so we can have a chat with them about how they are settling in.
If we were using SQL, we would be using something like this:
SELECT emp.Id as EmpId, emp.Name as EmpName, mngr.Id as ManagerId, mngr.Name as ManagerName FROM Employees emp JOIN Managers mngr ON emp.ManagerId = mngr.Id WHERE emp.JoinedAt > '2014-06-01'
That is pretty easy, right? How do you do something like that in ADB? Well, you start with the first query:
SELECT emp.Id as EmpId, emp.Name as EmpName, emp.ManagerId as ManagerId FROM Employees emp WHERE emp.JoinedAt > '2014-06-01'
And then, for each of the returned manager ids, we have to issue a separate query (ADB doesn’t have support for IN). This pattern of usage is called SELECT N+1, and it is a very well known anti pattern, even leaving aside the fact that you have to manually do the join in your own code, with all that this implies. This sort of operation will effectively kill the performance of any application, because you are very chatty with the database.
In contrast, RavenDB has several ways to load related items. From including a related document to projecting it via a transformer, you can very easily and efficiently get all the data you need in a single query to RavenDB. In fact, RavenDB applies a Safe By Default approach and limits the number of times you can call the server (configurable) to prevent just this case. We’ll error if you go over the budget of remote calls you are allowed to make. This gives you an early chance to catch performance problems. For more information, see the documentation about includes, transformers and the Safe By Default approach practiced by RavenDB.
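The employees and managers scenario above, as a sketch using includes (Employee and Manager are stand in classes):

// One round trip brings back the employees and their managers
var newEmployees = session.Query<Employee>()
    .Include(e => e.ManagerId)
    .Where(e => e.JoinedAt > new DateTime(2014, 6, 1))
    .ToList();

foreach (var emp in newEmployees)
{
    // Already in the session cache, no extra server call
    var manager = session.Load<Manager>(emp.ManagerId);
}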
Limits - reading the limits for ADB makes for some head scratching. Yes, I know that we are talking about the preview mode only. I’m aware that you can ask to increase those limits. Nevertheless, those limits likely reflect real trade offs made in the system. So increasing those limits for a particular use case means that you’ll have to pay the price for that elsewhere.
For example, let us take this blog post. It is over 22KB in size, but I can’t store it in ADB, because documents are limited to 16KB in size. This is utterly ridiculous. I just checked a few of our databases; a common size for documents is 4 – 8 KB, it is true. But larger documents appear all the time. Even if you exclude blog posts as BLOBs of text, we have order documents with multiple order lines that easily go past that size. Among our users, we see every document size possible, from hundreds of KB to several MB.
I reached out to Codealike, one of our customers, who were also featured in one of Azure’s case studies, to hear from them what their situation was. Out of 1.6 million documents in one of their databases, about 90% are in the 500KB range.
I’m assuming that a large part of this limitation is the fact that, by default, everything is indexed. You can’t index everything, have large documents, and have reasonable performance. So this limit was introduced. I’m also assuming that there are other issues here (to be able to fit into pages? low level technical stuff?). Regardless, this is just a ridiculously low limit. The problem is that even raising this limit by 5x or 10x would still not be enough. And I’m assuming that they didn’t choose this limit out of thin air, that there is a technical reason for it.
Another issue is the number of stored procedures and UDFs that you have available. You get 5 of each, and that is it. So you don’t get to express anything complex there. You also get to use only a single UDF per query, and a maximum of 3 AND / OR clauses in a query. I’m assuming that the reasoning here is that the more clauses you have, the more complex it is to run the query, especially in a distributed environment. So they put a hard limit on that.
Those limits together, along with not supporting sorting, basically render ADB an interesting curiosity, but not a real contender for a generally applicable database.
What is this for?
After going over the documentation, there is one thing that I couldn’t find. What is the primary use case for ADB?
It looks more like a solution in search of a problem than the other way around. It appears that this is used by several MS systems to store hundreds of TB of data and process millions of queries. Sheer data size isn’t really interesting; we have customers with multiple TB of data. And millions of queries per day isn’t really something to brag about (10 million queries per day translates to about 115 queries per second, or about 20 – 30 queries per second per node).
What interests me is what sort of data you put there. The small size limitation makes it pretty much unsuitable for storing actual complex documents. You have to always be aware of the size you are storing, and that puts a serious crimp in how you can work with this. The limited queries and the inability to sort also lead me to believe that this is a very purpose built tool.
OneNote’s server side is apparently one such use case, but from the look of things, I would expect that it is the other way around: that ADB is actually the backend of OneNote, which Microsoft has decided to make public (like Dynamo in Amazon’s case).
Some of those limitations are probably going to be alleviated by using additional Microsoft tools. So the new Search Server (presumably that one has complex searching & sorting available) would allow you to do some proper queries, and HDInsight might be used for doing aggregation.
You aren’t going to be able to get the “show me the count of unread emails for this user” from Hadoop, not when the data is constantly changing. And using a secondary search server will introduce high latencies for the indexing. That is leaving aside the additional operational complexity of having to manage multiple systems (and the communication between them) just to get things done.
Here are a few things that would be hard to build in ADB, as it stands today:
- This blog – the posts are too big, can’t sort posts by date, can’t do “complex” queries (tag & date & published & not deleted)
- Logging – I thought that this would be a great use case, but we actually need to show logs by date, as well as be able to search using multiple fields (more than 3) or do contains queries.
- Orders system – important orders with a lot of line items will be rejected because of the size limitation.
In fact, I don’t know what would work there. What kind of data are you putting there? This isn’t good for bulk data work, because the ingest rate is really small (~500 writes / second? The debug version of RavenDB does 2,500 writes per second on my dev laptop, without even using the bulk insert API), and there isn’t a good way to work with large amounts of data at once. It isn’t good for business applications, for the reasons outlined above.
I guess that if you patched this together with the search server and Hadoop, you would get something that might be able to serve. But I think that the complexity involved is going to be very high, and I just don’t see where this would be a great solution.
In short, what is the problem that this is trying to solve? What application would be a perfect fit for this?
With RavenDB, the answer is simple: it is a general purpose database focused on OLTP applications. Until you have an answer, you can use RavenDB on Azure today using RavenHQ on Azure.
Comments
I value your opinion, but adding an "in contrast, RavenDB does this much better because xyz" after every critique makes this read like an advertorial instead of an objective review.
I would love if you could have a look at BrightstarDB.
When I read the announcement post for ADB this sounded like a viable (although inferior) database. That's because they did not mention all those crippling limitations that I read about here for the first time.
In fact I was laughing reading through this post. ADB is astonishingly bad. Who is it for? Who would voluntarily build something on it?
So many design blunders that are not just first version issues. This seems to be designed by junior people lacking experience with real-world database-based apps. They don't know what customers need.
In fact I wonder what the difference is compared to Azure Tables. This should be a feature of Tables. Tables should support JSON, indexing and queries. Both products are document databases in the sense that they emphasize working on single rows at a time.
I credit Microsoft for one thing: They understand developers and they know how to have their products accepted by them. They create a halo of coolness around them. People will start to play with this thing and want to use it. And junior devs probably do not see how severe the problems are.
@Bart - how can a person who has created a competitive product write a comparison piece without it trying to sell like an advertisement dripping with schlock? The only way you can compare your own product to another is to highlight -the -differences- and provide proof of your own product's features.
Oren has picked a feature, discussed it, then compared that to what RavenDB is offering and giving you a link to assert his claim.
If anything, you should rip into his discussion on each feature ... not that he is trying to discuss how RavenDb compares to ADB, in his -own- opinion. If his links to his RavenDb features is poor - raise that and say so and say why.
Proper discussion should be about data points, where available and applicable.
He started out with his caveats - then let it rip. But he hasn't blindly gone down and said ADB is crap, buy RavenDB because I said so (with no reason why).
"We’ll error if you go over the budget of remote calls you are allowed to make."
I just found this out on the weekend, I had a scenario where I wanted to update up to 50 documents, started getting errors, re-visited what I was doing, ended up changing the code to Load using an array of Ids, problem solved, ended up being much quicker too!
I was wondering if maybe the collections are supposed to hold many different document types? In that case you could, theoretically, perform joins between different document types.
I was looking at this database as well because I have a very very large dataset I need to query. I was looking for a cheap way to do it, and have been using Google BigQuery, which I think is what this is supposed to be competing with in some weird way. Azure doesn't have a good BigData solution, unless you want to spend an arm and a leg using Hadoop.
Robert, from what I've understood, joins are only possible inside a single document. Not document type, but document 'instance'.
Nice article but way too premature. The version of Azure DocumentDB is early preview. This is by far not a completed 1.0 product so much of this discussion seems to be just FUD designed to keep people from giving DocumentDB a try. I'll wait and see what makes it into the final GA release since that is what I would actually use in production.
Thanks for the good overview Ayende... I was looking for the real scoop about Azure DocumentDB and found it here.
I agree with joe, it's still a preview. One thing about pagination: I read once in this blog that Raven can't do pagination in a sharded environment.
I've read some ADB docs and I think Ayende has misunderstood the purpose of join - its purpose is not to perform SQL table joins, it is there for flattening documents without requiring nested loops in code. This feature doesn't deserve bashing it got here. Another example: automatic indexing has been criticized for poor performance without any real performance data - I mean, maybe something that's difficult for RavenDB is not difficult at all if you do it differently? Maybe they'll come up with some clever index auto-tuning that will handle 99% of use cases nicely? And the attempt to ridicule indexing in same transaction (versus 'eventually consistent' in Raven) is just a joke imho. Also, the article contains just 1 link to ADB website while RavenDB links and praises are sprinkled very generously everywhere - is it really a review of ADB or just a marketing trick?
Ayende, this seems a bit premature and reactionary. I understand the passion. And it could be that MS will take your critique to heart and improve on the product. Who knows.
Someone, A join on the same document is essentially not a join. Considering the size of the actual document is 16KB, just sending them to the client would be much cheaper. Remember, the server has to process the join as well, somehow, and it is probably using nested loops to do so.
About auto indexing, well, there is a cost for that, and while I'm sure that they spent a lot of time on that, indexing is still at least an O(N) operation. But don't believe me, check the docs for how many ways you have to reduce the indexing load (including / excluding paths, setting lazy indexing, etc.). That indicates the very real costs that you have for indexing.
ElasticSearch, AFAIK, has automatic indexing feature that works. You can customize that or disable completely, but it serves its purpose very well. And I don't care if server uses nested loop for join or not - the purpose is to remove the loop from query.
Someone, Elasticsearch is not a database, it's a search server. As such, it does assume by default any piece of data you put into it needs to be searchable. Also, it isn't expected to provide functionalities that document databases are expected to provide (like cross-document search, map/reduce, referencing, etc).
Additionally, Elasticsearch has a unique design of index replicas and shards which makes it very easy to scale out. As someone with lots of experience with Elasticsearch I can tell you this plays a crucial part in the trade-offs that automatic indexing plays. As you index more data per document you will find yourself scaling out more quickly.
Oh please, for all cases we're discussing here it is a document database. It can store, update, search and retrieve JSON documents - this is what you expect from a database. No map reduce - fine. People can apparently live without that (http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-aggregations.html). Everything depends on use case, even if you insist on calling it a 'search server' ElasticSearch is still a dangerous competition for 'document databases' (especially these based on Lucene)
I am a satisfied RavenDB customer as well as an Azure customer, although I have not used ADB specifically. It is completely believable that RavenDB is better than ADB in all of the areas described above, but I suspect, based on past experience, that ADB will be significantly better than RavenDB in one critical area: documentation.
For example, Ayende mentions querying by date range several times throughout this post. He even references RavenDB's implementation of DateTime (http://ravendb.net/kb/61/working-with-date-and-time-in-ravendb). I challenge you, however, to figure out in a reasonable amount of time how to actually perform something so basic as a lower-bounded date range search (say, "date > 1/1/2010") in RavenDB's built-in query editor. Go ahead, I'll wait. ...
Now, some of you will say "RavenDB uses Lucene, so just go check the Lucene documentation." Ok, let's assume for a moment that I'm aware that RavenDB uses Lucene (which may or may not be true), I challenge you to figure out how to perform that date range search in Lucene! Go ahead, I'll wait ...
As far as I know, there is no documentation of this anywhere, for either RavenDB or Lucene. The last time I figured it out it took me so long that I wrote up some internal documentation so I wouldn't have to figure it out all over again.
True, ADB lacks any sort of "developer story." But all too often the developer story for RavenDB has been one of sinking massive time into scouring blogs and/or StackOverflow and/or emailing Ayende to figure out how to do something that should otherwise be relatively simple.
It's usually the case that RavenDB does exactly what you want it to do, but many times it's very difficult to figure out how to do it. On the other hand, if RavenDB had polished, up-to-date, useful documentation, it would be absolutely no contest whatsoever.
@Someone, given that querying is extremely limited (including a hard limit on the number of clauses), it's pretty much useless. With that in mind, how is DocumentDB different from Redis? Redis also does a lot more than just store plain documents.
Of course ayende is right. I think documentDB in its current form is useless for production. There have to be a lot of changes and some are in the core.
On the bright side, I see MS investing in and acknowledging document databases, so that's a good thing. Guthrie wrote 'for modern web applications' so that's nice. Don't forget that MS (not only them) sometimes operates like this: release a half baked product, try to get people to invest time in it, and get them enthusiastic. It's hard to turn to another product then.
RavenDB on Azure or Amazon through RavenHQ is very very expensive. Not an option for us. I hope that will change when there is more competition.
@someone else The documentation of MS is hard to beat. They are very good at it; they have of course the resources for it, but it is nicely done. It is also not easy to create good documentation.
MSSQL is pretty well documented, and for every problem there is a google answer somewhere. RavenDB does not have that because it is a young product. New things are always causing problems, that's why a lot of 'us' are sticking with MSSQL and don't evolve :-)
a new product? hmmm they themselves claim to be a 2nd generation product in its 3rd version already. By Ayende's own admission he has been at this for 6 years already. a new product, my ass!
FYI, I came here from google (searching ayende) and it looks like your sitemap is pointing to /blog/tags/blog.
Being locked to US Central makes this not really a viable option at the moment.
You pointed out that indexing in the same transaction is bad for writes, since it increases write time. But what RavenDB does is update indexes in separate threads, which results in stale reads and throws away RavenDB's read consistency (assuming you can't always use Load), because the more indexes you create, the more stale data you will end up reading (something has to update those indexes, and that takes time). It's all trade offs, and I think a smarter solution would be to put indexes in the same transaction and allow stale reads from replicas.
I'm sorry, I mean a better solution for increasing write performance (while having index updates in the same transaction) is to allow writes to passive replicas whenever the user wants.
Someone Else, The date is explicitly documented here: http://ravendb.net/docs/2.5/client-api/advanced/full-query-syntax
See the ISO Date Parsing section.
I'll be the first to admit that we've had issues with the documentation. That is why for the 2.5 release, we started tracking documentation as issues in our systems, and we put 2 - 4 people just on that for the 2.5 release, and we are continuing to do just that for the 3.0 release. In addition to that, we are also compiling a book that will take you through everything that RavenDB does.
Bob, Did you check the docs for RavenDB? I'm pretty happy with their current state. We cover pretty much everything that you need to know to get started, and we also have videos, tutorials, and additional guidance.
Chad, It is currently in a preview release to make sure that there aren't any kinks that you need to work through. Additional zones are coming.
Amin, If indexes are processed in the same transaction, you can't replicate to another node until you run through all the indexes in the primary node.
And RavenDB allows you to choose what sort of consistency you want. Do you want to get the current state, potentially stale? Do you want to be consistent, but maybe wait?
When I go to google and search ayende then ayende.com is obviously the first result, with a couple of sub-links right below it.
The first link, back when there was an ayende.com, went there. I used to avoid the first one and go to the second, which was the actual blog, but it now goes to blog/tags/blog.
Is that because of your sitemap or something google is doing?
Ayende, the ISO Date Parsing section you pointed me does not describe how to do unbounded date queries. It describes how to do date1 < date < date2, but not date1 < date or date < date2.
I suppose you could infer the syntax from the unbounded int query where it says Query = "Age_Range:{20 TO NULL}". This is actually a good example of what I was talking about in my original post: in many cases you have to assimilate information from multiple locations in order to figure out how to make RavenDB work.
Someone Else, The obvious solution, assuming you don't know how to do range queries would be to put 9999 as the year, and you are done.
My assumption is that the Preview limits (and the 16KB one is a bit of a blocker) are to initially work out what limits Free, Basic, Standard and Premium should have, as per other Azure services. They've only announced Standard, and its limits will no doubt change too.
Liam Cavanagh of Microsoft has indicated on Scott Guthrie's blog that a lower priced option will arrive.
John Macintyre of Microsoft has stated that Order By is coming soon, and an emulator. Spatial query support is a maybe.
I welcome the choice DocumentDb brings. I assume Attachments would have to be used for quite a few solutions. Or mixing with reporting via SQL Database and live counts via Redis.
Ayende, That is indeed obvious. In fact it is too obvious, because I then have to study ISO 8601 to verify that 9999 is an allowable year value for ISO Dates, adding yet another piece of information I have to assimilate. And then I am done.
This reminds me of when Entity Framework came out and lots of people were saying it was so horrible compared to nHibernate. Fast forward and now EF has advanced a lot and usage has tilted towards EF. It helps that Microsoft has deep pockets and was able to sponsor everything that needed to be improved.
AzureDB has to catch up, but it will improve more quickly because they have more resources. Ayende, maybe you should join Microsoft. :)
@Joe - people were talking about how horrible it was when it came out; fast forward, it's still horrible, and getting pretty much a complete rewrite for vNext...
Joe: Doing that would just result in his projects being shut down because of politics.
Imho only a small focused and agile team can make something like ravendb.
So I think we're making the assumption this is all based on elasticsearch? Its been confirmed Azure Search is, but I haven't found anywhere that says DocumentDB is. It would make sense though.
I think comparing to RavenDB is a little nonsensical, although I understand why the question needed to be answered thoroughly.
I think this is a basic offering. Something that you could use with a host of other technologies to fill in the gaps... This is for that person who wanted to use table storage but then found out he/she would have to split their document blob across all the columns!
At least for the time being anyway.
The real story here though is that it is a Microsoft offering, with Microsoft support and managed by, you guessed it, Microsoft. You'll be able to create this in the Azure portal quickly and know that Microsoft have got your back in terms of scalability (indexing everything aside) and replication... Well that or you have someone to blame.
RavenDB on the other hand does everything, and this post really goes some way to demonstrating that, but it's likely that you're going to have to set it up and maintain it.
Someone completely different, The experience for using RavenDB in Azure is pretty much the same as using ADB. You go through the portal, add the resource, start using it, done.
I was very excited about Azure DocumentDB. I'd been really wanting an easy document database on Azure (RavenHQ on Azure is too expensive). However DocumentDB is terrible. I don't know if it's just too early in its development or what but it doesn't really work for anything, just like Ayende said.
Didn't realise that. Will check it out!
I do however really really like the Index Everything idea. It would be nice if that could be made to work.
Ryan, You can do that in RavenDB. See: http://ayende.com/blog/153729/lazys-man-comprehensive-search-with-ravendb
Let me be the odd one out that says having high latency in development can be a good thing. I have seen so many systems that have been developed where the web/app servers are on the same box as the database engine and barely manage to perform. As soon as they're put on separate boxes (heaven-forbid with high latency between them) the apps are unusable.
Many years ago, we used to make our web devs connect to the DB via a dial-up modem for exactly this reason. We got far better applications back as they were critically aware of all the calls that they made to the DB, and avoided pointless calls.
Just last week I was working on a system that looked up config, etc. on every keystroke that a user made. It also asked the DB server what its name was about 60 times per second. Chances are it wasn't changing...
Greg, There are a LOT better options for handling this than just adding latency. In RavenDB, for example, we have governors in place, that alert you when you are doing bad things like that.
Can't wait for that RavenDB book coming out.