time to read 2 min | 400 words

Hadi had an interesting tweet (embedded in the original post):

This sounds reasonable on the surface, but it runs into hard issues with the vast difference between any amount of money and free. There have been quite a few studies on this; a good read on the subject can be found here. Take note of the Amazon experience: free shipping increases sales, while shipping that costs $0.20 does not. From a practical standpoint there is no real difference between the two prices, but it matters, quite a lot.

Leaving aside the psychology of free, there are other issues. Let’s say that there is a tool that can save someone on my team 10 minutes a day, priced at $1/year. By any measure you care to name, that is more than worth it.

But the process of actually getting it purchased is likely to cost more than the tool itself. In my case, the process is “go talk to Oren”. In other environments, it may be “talk to your boss, who will submit it to accounts payable, which will pay it in 60 days”.

And at that point, charging $1 just doesn’t matter. You could charge $50, and the “cost” for the person making the purchase would be pretty much the same. Note that this applies to corporate environments; the situation is (slightly) different for sales directly to consumers, but not significantly so. Free still trumps everything.

When talking about freemium models, the conversion numbers you hear are bad. For example, Evernote had a < 2% conversion rate, and that is a wildly successful example. Dropbox has a rate of about 4%, and the average seems to be 1%. And that is for businesses that are focused on optimizing the freemium funnel, which takes a lot of time and effort, mind you.

I don’t think that there is a practical option for turning this around, and I say that as someone who would love it if that were at all possible.

time to read 3 min | 478 words

RavenDB, as of 4.0, requires that the document identifier be a string. In fact, that has always been the requirement, but in previous versions we allowed you to pretend that this wasn’t the case. That led to… some complexities, because people had a numeric id in their model, while inside RavenDB it was always represented as a string.

I just got the following question:

In my entities, can I have the Id property be of any type instead of string, to avoid primitive obsession? I would use a generic Id<TEntity> type for ids. This type can be converted into a string before saving in the DB by calling ToString(), and transformed from a string into Id<TEntity> (when fetching from the DB) by invoking a static method like public Id<TEntity> FromString(string id).

The short answer for this is that no, there is no way to do this. A document id in your model has to be a string.

The longer answer is that you can absolutely do this, but you have to understand the divergence between your entity model and the document model. The key is that RavenDB doesn’t actually require that your model have an Id property. One is usually defined, because it makes things easier, but it isn’t required. RavenDB is perfectly happy managing the document key internally. Combine that with the ability to modify how documents are converted to entities, and you have a solution. Let’s look at the code…

And here is what it looks like (the original post shows the actual code as a screenshot).
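A minimal sketch of the same idea, assuming the RavenDB 4.2+ C# client (which exposes the OnAfterConversionToEntity and OnBeforeStore events and the FindIdentityProperty convention). The Id<TEntity> type follows the question above; the User model and connection details are invented for illustration:

```csharp
using System;
using Newtonsoft.Json;
using Raven.Client.Documents;

var store = new DocumentStore { Urls = new[] { "http://localhost:8080" }, Database = "demo" };

// Don't treat UserId as RavenDB's identity property; the document key stays
// managed internally (real code would scope this to just the relevant types).
store.Conventions.FindIdentityProperty = member => false;

// After creating an entity from the server, set up the Id property as we want it.
store.OnAfterConversionToEntity += (sender, args) =>
{
    if (args.Entity is User user)
        user.UserId = Id<User>.FromString(args.Id);
};

// Just before we store the entity, make sure it carries the final document key.
store.OnBeforeStore += (sender, args) =>
{
    if (args.Entity is User user)
        user.UserId = Id<User>.FromString(args.DocumentId);
};

store.Initialize();

public readonly struct Id<TEntity>
{
    private readonly string _value;
    private Id(string value) => _value = value;
    public static Id<TEntity> FromString(string id) => new Id<TEntity>(id);
    public override string ToString() => _value;
}

public class User
{
    [JsonIgnore] // keep the strongly typed id out of the stored document
    public Id<User> UserId { get; set; }
    public string Name { get; set; }
}
```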

The idea is that we customize a few things inside of RavenDB.

  • We tell the serializer that it should ignore the UserId property.
  • We tell RavenDB that after creating an entity from the server, we should set up the Id property as we want it.
  • We do the same just before we store the entity in the server, just to be sure that we have the complete package.
  • We disable the usual identity generation logic for the documents we care about and tell RavenDB to ignore trying to set the identity property on the document on its own.

The end result is that we have an entity with a strongly typed identifier in our model. It took a bit of work, but not overly so.

That said, I would suggest that you either have a string identifier property in your model or not have one at all (either option takes no code in RavenDB). Having an identifier and jumping through hoops like this tends to make for an awkward experience. For example, RavenDB has no idea about this property, so if you need to support queries as well, you’ll need to extend the query support. It’s possible, but it shows that there is additional complexity that can be avoided.

time to read 2 min | 256 words

RavenDB is a highly concurrent, distributed database. That means that we take the idea of race conditions, multiply it by network hiccups, and then raise it to the power of hair pulling. Now, we have architectural structures to help with a lot of that, but sometimes you need to write and verify what happens when a particular sequence of events occurs in a five node cluster. For fun, you may need to orchestrate a particular order of operations across multiple disparate processes (sometimes on different machines). As you can imagine, that is… challenging.

I wanted to give you a hint of some of the techniques that we use to handle this. We have hooks like the following sprinkled throughout our code base (Rachis is the name of our Raft cluster implementation).

One hook sits where a leader connects to a follower to set up their relationship; another is called during leader election (the original post shows these methods as code screenshots).

These methods set a ManualResetEvent that we set up as part of our testing infrastructure. The code isn’t even run in a production release, but it allows us to very carefully structure the exact sequence of events that we need to expose specific behaviors in the system.
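Here is a minimal sketch of the pattern; the names are illustrative, not Rachis’s actual hooks:

```csharp
using System;
using System.Threading;

public static class TestingHooks
{
    // Null in production, so the hooks are effectively free there.
    public static ManualResetEventSlim LeaderConnectedToFollower;
    public static ManualResetEventSlim LeaderElectionStarted;
}

public class Leader
{
    public void ConnectToFollower(string followerUrl)
    {
        // ... perform the actual handshake with the follower ...

        // Signal a waiting test (if any) that this exact point was reached.
        TestingHooks.LeaderConnectedToFollower?.Set();
    }
}

// In a test, orchestrating the exact sequence of events:
//
//   TestingHooks.LeaderConnectedToFollower = new ManualResetEventSlim();
//   ... start the cluster ...
//   if (TestingHooks.LeaderConnectedToFollower.Wait(TimeSpan.FromSeconds(30)) == false)
//       throw new TimeoutException("Leader never connected to a follower");
```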

time to read 2 min | 346 words

I ran into this post, in which the author describes how they got ERROR 1000294 from IBM DataPower Gateway as part of an integration effort. The underlying issue was that he sent JSON to the endpoint with its fields in an order it wasn’t expecting.

After asking the team at the other end to fix it, the author got back an effort estimate of 9 people for 6 months (4.5 man-years!). The author then went and figured out that the fix for the error was a checkbox somewhere deep inside DataPower:

Validate order of JSON? [X]

The author then proceeded to question the competency / moral integrity of the people doing the estimation.

I believe that the author was grossly unfair, at best, to the people doing the estimation. Mostly because he assumed that unchecking the box and running a single request is a sufficient level of testing for this kind of change, but also because it appears that the author never once considered why this setting may be in place:

  • The sort order of JSON fields has been responsible for remote code execution vulnerabilities.
  • The code processing the JSON may not do so in a streaming fashion, and therefore expects the data in a particular order.
  • Worse, the code may just assume the order of the fields and access them by index. Change the order of the fields, and you may reverse the Creditor and Debtor fields (see the sketch after this list).
  • The code may translate the JSON to another format and send it over to another system (likely, given the mentioned legacy system).
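To make that index-access failure mode concrete, here is a contrived sketch (using Newtonsoft.Json; the field names are invented) of code that depends on field position rather than field names:

```csharp
using System;
using System.Linq;
using Newtonsoft.Json.Linq;

var payment = JObject.Parse(@"{ ""Creditor"": ""ACME"", ""Debtor"": ""Bob"" }");
var fields = payment.Properties().ToArray();

// Fragile: assumes Creditor always arrives first and Debtor second.
string creditor = (string)fields[0].Value;
string debtor = (string)fields[1].Value;

// Swap the order of the fields in the incoming JSON and the money flows
// the other way, with no error raised anywhere.
Console.WriteLine($"{debtor} owes {creditor}");
```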

The setting is there to protect the system, and unchecking it means that you have to check every single one of the integration points (which may be several layers deep) to ensure that there isn’t an explicit or implied ordering to the JSON.

In short, given the scope and size of the change: “Fundamentally alter how we accept data from the outside world”, I can absolutely see why they gave this number.

And yes, in 99% of the cases there isn’t likely to be any difference, but you need to validate for that nasty 1% scenario.

time to read 2 min | 339 words

I used the term “Big Red Sales Button” in a previous post, and got a question about it (the original post shows an illustration of the button).

The Big Red Sales Button (BRSB from now on) is a metaphor used to discuss how sales can impact an organization. It is common for the sales team to run into new customer requirements. Some of them are presented as absolute requirements (they usually aren’t).

I have found that the typical response of the salesperson at this point is to reply “of course we can do that”, go back to the office, hit the BRSB, and notify the dev team that they have $tooShortTimeFrame to implement said feature.

In one very memorable case, I remember going over contract details and trying to figure out what we needed to do there. Right there, in a minimum seven-figure contract, was a clause that spelled out the core functionality of the system and the set of features required for it to be accepted.

Most of it was a pretty normal business application, nothing too strange. But section 5.1.3.c was interesting. In it, in dense legalese, was a requirement to solve the traveling salesman problem. To be fair, it wasn’t actually that problem; it was a scheduling problem, and I use the traveling salesman as a name for it because that is easier than trying to explain NP-complete issues to laymen.

I’ll spoil the ending of this post and reveal that I did not solve an NP-complete problem. I cheated like hell and actually solved the issue they had (if you limit the solution space, you drastically reduce the cost of a solution).

Sometimes, the BRSB is used for a good purpose, for example when you have something that can help close a major sale and it isn’t outrageous to implement. But in many cases, it is open for abuse.

time to read 5 min | 968 words

I ran across the following on Twitter:

And that resonated very strongly with me, but from the other side. I have actually talked about it quite a lot in the past: you should design your system so it can adapt more easily to changes in the business process.

About 15 years ago, I was tasked with building a system for scheduling and managing at-home nursing aid. One of the key reasons for the system I was building was that management wanted to enforce their vision of how things should be on the organization, and they had chosen to do that by ensuring the new system would only allow things to happen That Way.

To my knowledge, they spent over three years speccing the system, but after looking at what we delivered (according to the spec, mind you), the people who were supposed to be using it revolted. I ended up working closely with a couple of employees who were supervisors in a local branch and would actually be using the system day in and day out. That project, and others like it, taught me a lot about how to design a system that enables rather than limits what you can do.

Let’s take a paper-based system. A client comes in and wants to order something. They have no last name (Cher, Madonna, etc.). The person handling the order leaves that field empty and processes the order. In a computerized system, however, that is going to fail validation and require you to reject the order. The person handling the order? There is nothing they can do about it: “the system won’t let me”. Admittedly, this is the simplest case that I can think of, but there are many cases where you have to jump through illogical hoops to satisfy inflexible rules in the system (leave a comment with the most egregious example you have run into, please).

If you approach this properly, there are relatively simple solutions: implement human-level decision making in the process. You can do that by developing a human-level AI or by grabbing a human. Let’s take a simple example from the past few months.

We just had the launch of RavenDB Cloud, during which we focused primarily on the backend operations: getting everything set up properly, monitoring, etc. At the same time, I have the sales team talking to customers. One of the things that kept popping up is that they wanted and needed a pretty flexible pricing system. For example, right now we have pricing models for On Demand, Yearly Contract, and Upfront Yearly Contract. We are talking to customers who want 3 and 5 year contracts, but that isn’t implemented in the system. I have customers who want to pay by credit card (the main supported way), but others can only pay via a purchase order, and some (still!) want to cut me a physical check.

There are a few ways you can handle something like this:

  • Tell customers that it is credit card or the highway. Simplest to implement, kinda harsh on the bottom line.
  • Tell customers that of course we can do that, then go to the dev team and press the Big Red Sales Button. I was the guy who responded to the Big Red Sales Button, and I hated that.
  • Tell customers that we can handle their payment needs, go to the system and enable this:
    Manual intervention before charge

When this is checked, the usual payment processing will run, but before we actually charge the user, we insert a human in the loop. This allows us to apply any specific customizations to the payment. For example, one scenario would be to zero the amount remaining to be charged and send a Purchase Order to the finance department for that particular customer.
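As a toy sketch of what that flag does to the processing pipeline (all names here are illustrative, not RavenDB Cloud’s actual billing code):

```csharp
using System;
using System.Collections.Generic;

public class PendingCharge
{
    public string CustomerId { get; set; }
    public decimal Amount { get; set; }
    public bool ManualInterventionBeforeCharge { get; set; } // the checkbox above
}

public class PaymentProcessor
{
    // Charges parked here are reviewed by a person before anything is billed.
    private readonly Queue<PendingCharge> _reviewQueue = new();

    public void Process(PendingCharge charge)
    {
        if (charge.ManualInterventionBeforeCharge)
        {
            // A human can adjust the charge first, e.g. zero the amount and
            // send a Purchase Order to the finance department instead.
            _reviewQueue.Enqueue(charge);
            return;
        }

        ChargeCreditCard(charge);
    }

    private void ChargeCreditCard(PendingCharge charge) =>
        Console.WriteLine($"Charging {charge.CustomerId}: {charge.Amount:C}");
}
```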

The key here is to take Jimmy’s notion of looking into the actual usage of the system and not consider this a failing of the system. The system, when created, was probably pretty good, but it changed over time. And even if you re-create it perfectly today, it is going to change again tomorrow, necessitating the same workarounds. If you embrace this from the get-go, you end up in a different place, because now you can employ smarts in the system without having to deploy a new version.

In another case, we had a way for the user to say: “I want to circumvent these checks and proceed anyway”. We accepted the change, even if it violated some rules of the system. We would raise a flag and have another person review and authorize those changes. If the original user wasn’t supposed to use this, there was a… discussion on that, and changes were implemented in the policy. No need to go through the software for such things, and it is infinitely more flexible. We had a scenario where one of the integration points failed (it was down for a couple of weeks, IIRC), and it was a required step for processing an order.

The users could skip this step, proceed normally, and then come back later, when the 3rd party integration was available again, and make any fixes required. Compare that to an emergency change in production and then frantic fixes afterward when the system is up again. Instead, we have a baked-in process to handle outliers.

In other words, don’t try to make your humans into computers. As any techie will tell you, computers will do what you told them to. Humans will do what you meant*.

* Well, usually.

time to read 4 min | 749 words

Consider a business that needs to manage leasing apartments to tenants. One of the more important aspects of the business is tracking how much money is due. Because of the highly regulated nature of leasing, there are several interesting requirements that pop up.

The current issue is how to tackle the baseline for eviction. Let’s say that the region the business is operating in has the following minimum requirements for eviction:

  • Total unpaid debt (30 days from invoice) that is greater than $2,000.
  • Total overdue debt (30-60 days from invoice) that is greater than $1,000.
  • Total overdue debt (greater than 60 days from invoice) that is greater than $500.

I’m using the leasing concept here because it makes it easy to see that the date ranges themselves are dynamic. We don’t want to wait for the next first of the month to see the changes.

The idea is that we want to be able to show a grid like this:

[screenshot: a per-customer grid of the unpaid and overdue debt buckets]

The property manager can then take action based on this data. And here is the raw data that we are working on:

[screenshot: the customer’s invoices and receipts]

It’s easy to see that this customer still has a balance of $175. Note that this balance is as of July 9th, because we apply payments to the oldest invoice we have. The question now becomes: how can we turn this raw data into the table above?

This turns out to be a bit hard for RavenDB, because it is optimized to answer your queries fast, which means that having to do recalculations on each query (based on the current date) is not easy. I have already shown how to do this kind of task easily enough when looking at a single customer. The problem is that we want an overall view of the system, not just a single customer, and ideally without it costing too much.

The key observation for handling this efficiently in RavenDB is to understand that we don’t need to actually generate the table above directly. We just need to get the data to a point where producing it is trivial. After some thinking, I came up with the following desired output:

[screenshot: the desired output document, showing the customer’s outstanding debts, credit balance, and remaining balance]

The idea here is that we are going to give both an overall view of the customer’s account as well as details about its outstanding debts. The important detail to understand is that this customer status is unlikely to grow too big. We aren’t likely to see customers with debts that span many years, so the size of this document is naturally bounded. The cost of going from this output to the table above is negligible, and the process of doing so is obvious. So the only question now is: how do we do this?

We are going to utilize RavenDB’s multi-map/reduce to the fullest here. Let’s first look at the maps:
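The original code is shown as screenshots in the post; here is a sketch of what the maps might look like, written as a RavenDB JavaScript index deployed from the C# client. The collection and field names (Invoices, Payments, Customer, Amount, Date) are assumptions about the model, not the post’s actual code:

```csharp
using System.Collections.Generic;
using Raven.Client.Documents.Indexes;

var index = new IndexDefinition
{
    Name = "Customers/DebtStatus",
    Maps = new HashSet<string>
    {
        // Each invoice contributes a debt entry.
        @"map('Invoices', invoice => ({
            CustomerId: invoice.Customer,
            Debts: [{ Date: invoice.Date, Amount: invoice.Amount }],
            CreditBalance: 0,
            RemainingBalance: invoice.Amount
        }))",
        // Each payment contributes credit to be applied to the debts.
        @"map('Payments', payment => ({
            CustomerId: payment.Customer,
            Debts: [],
            CreditBalance: payment.Amount,
            RemainingBalance: -payment.Amount
        }))"
    }
};
```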

There isn’t really anything interesting here. We are just outputting the data that we need for the second, more interesting stage, the reduce:
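And a sketch of the reduce, following the description below (again an approximation, not the post’s actual code; it assumes dates are ISO strings, so they sort lexicographically):

```csharp
// Continuing from the IndexDefinition above.
index.Reduce = @"groupBy(x => x.CustomerId).aggregate(g => {
    // Sum up all the outstanding payments (credit) for the customer.
    let credit = g.values.reduce((sum, v) => sum + v.CreditBalance, 0);

    // Gather all the debts and sort them by date, ascending.
    let debts = g.values.reduce((all, v) => all.concat(v.Debts), [])
                        .sort((a, b) => a.Date.localeCompare(b.Date));

    // Apply the credit toward each debt, erasing fully paid debts from the list.
    let outstanding = [];
    for (let d of debts) {
        if (credit >= d.Amount) {
            credit -= d.Amount;
        } else {
            outstanding.push({ Date: d.Date, Amount: d.Amount - credit });
            credit = 0;
        }
    }

    let totalDebt = outstanding.reduce((sum, d) => sum + d.Amount, 0);
    return {
        CustomerId: g.key,
        Debts: outstanding,
        CreditBalance: credit,
        RemainingBalance: totalDebt - credit
    };
})";

// Deploy the index (assuming 'store' is an initialized DocumentStore).
store.Maintenance.Send(
    new Raven.Client.Documents.Operations.Indexes.PutIndexesOperation(index));
```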

There is a whole bunch of stuff going on here, but leaving aside how much JavaScript scares me, let’s dig into it.

The important fields we have here are:

  • Debt
  • CreditBalance
  • RemainingBalance

We compute the CreditBalance by summing all the outstanding payments for the customer. We then gather up all the debts for the customer and sort them by date ascending. The next stage is to apply the outstanding credits toward each of the debts, erasing them from the list if they have been completely paid off. Along the way, we compute the overall remaining balance as well.

And that is pretty much it. It is important to understand that this code is recursive. In other words, if we have a customer with a lot of invoices and receipts, we aren’t going to compute this in one go over everything. Instead, we’ll compute it incrementally, over subsets of the data, applying the reduce function as we go.

Queries on this index are going to be fast, and applying new invoices and receipts is going to require very little effort. You can now also do the usual things you do with indexes. For example, sorting the customers by their outstanding balance or total lifetime value.
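For example, a hypothetical query sorting customers by their outstanding balance, using the field names assumed in the sketches above:

```csharp
using System.Linq;

// Assuming 'store' is the initialized DocumentStore from earlier.
using var session = store.OpenSession();

// Customers with the largest outstanding balance first.
var worstDebtors = session.Advanced
    .RawQuery<dynamic>(@"from index 'Customers/DebtStatus'
                         order by RemainingBalance as double desc")
    .ToList();
```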
